EU AI Act: Open Source Leaders Unite to Urge Policymakers to Protect Open Innovation

A fresh coat of protective regulation is usually welcome, but perhaps not so much when it is applied with a broad brush. Is the new European Union Artificial Intelligence Act fair to everyone?

The new EU AI Act is necessary, no doubt: it will set the benchmark for how AI is built, deployed and regulated to keep everyone safe, from manufacturers to end customers and users.

What About Open Source?

However, open source sits at the heart of AI development, and because it appears to have been overlooked in the new EU AI Act, a group of companies has taken action and published a policy paper to that effect. The group — a coalition of stakeholders in open source AI, including GitHub, Hugging Face, EleutherAI, Creative Commons and more — intends to ensure that AI works for everyone, and calls for more open source support in EU AI law.

To gain more clarity and insight, I took the opportunity to ask Peter Cihon, Senior Policy Manager at GitHub, some questions:

Q. With the EU Act on artificial intelligence set to enforce the first comprehensive AI regulations, does the EU need to do more in support of open source, and why?

A. The AI Act offers European policymakers the opportunity to establish a democratic precedent for AI regulation, as policymakers around the world are considering how to approach this. Open source and open science have been at the core of AI innovations for years, pioneering frameworks like PyTorch that make it easier for more people to train AI solutions, and increasingly, open AI models that allow people to analyze and build upon model weights directly.

The AI Act is poised to regulate how AI solutions are built and, for foundation models in particular, impose obligations on developers to register their models with the government, maintain documentation for each for 10 years, and implement quality management systems. While these measures may be warranted for high-risk systems, requiring open source developers to implement measures better suited to companies raises challenges. The EU should take a proportionate approach to regulating open source developers, acknowledging their unique and beneficial contributions to AI innovation in pioneering transparency, inclusive development, scientific reproducibility and enabling competition.

Q. Should open source developers be subject to the same burden as those developing commercial software, and if not, why not?

A. Open source developers are individuals from diverse backgrounds, ranging from students and hobbyists to employees of nonprofits, start-ups and large companies. When these individuals share AI components or software code under an open license, they support a global commons of knowledge. Anyone can use these components in their own solutions. This approach to open source infrastructure has been hugely successful in software, with some 96% of all software today including open source components. Licenses that disclaim warranty and liability have been central to enabling individuals to make these contributions.

If a company integrates open source into its product, it is the company that takes on liability. The same approach should govern AI development: downstream providers integrating open source components have the resources needed to comply with the law. They should not impose work on upstream individuals contributing to open source simply because these companies seek to make a profit. (Note that as a company building AI applications, we are committed to complying with the law.)

As drafted, the AI Act may have the unintended consequence of doing precisely this. Thankfully, the Parliament position includes a partial exemption for open source, and this is what the coalition sought to support and improve. If open source developers are subject to regulation better suited to commercial entities, there's a real risk that they'll stop contributing and we'll see a widespread chilling of AI innovation in the EU. That's what we want to avoid, and we're optimistic the final text will protect open innovation.

Q. How can this model be improved in specific relation to open source projects?

A. GitHub and a coalition of leading open culture companies and nonprofits (Hugging Face, EleutherAI, LAION, Creative Commons, and Open Future) wrote our policy paper to support open source developers in the AI Act. We made five concrete recommendations to improve the Parliament text, in addition to offering detail about how the open source AI development model and value chain works and an assessment of the three AI Act proposals that will be reconciled in the Trilogue.

Q. Is open source generally considered incompatible with safe AI development?

A. It’s a misconception to say that open source is incompatible with safe AI development. In fact, open source and open science communities have pioneered many of the best practices for safe AI development, including model documentation and auditing. Looking at the history of open source software and the internet, these tools have been used by bad actors, but have also enabled societal transformations and entire new categories of tools, including protections against the few bad actors that are out there.

Policymakers did not ban email or open source web servers because of spam or spear phishing. We need to bring the same clear-eyed pragmatism to the future of AI.

Q. Can the new EU AI regulations work for open source, as they are? Will open source be required to change in order to fit governance, accountability and compliance in Europe (and beyond) or are the rules around the new legislation considered too burdensome?

A. If the Parliament version of the text is adopted, open source developers will be able to continue contributing within the EU in most cases. However, research nonprofits that release open source foundation models could face insurmountable barriers. It's possible that some organizations will take these requirements in stride, hiring compliance personnel as if they were a company.

It’s more likely that open source innovation will simply shift outside of the single market, with collaborative and inclusive development continuing beyond EU borders and requiring measures to effectively block would-be illegal EU participation and access.

Q. The new EU AI Act will doubtless set a precedent for AI regulation worldwide. If legislation for open source is to be changed or amended, how important is it to get it right, now?

A. The EU AI Act can set the right precedent for global AI regulation by giving open source developers a seat at the table and by protecting their free and open contributions. The Parliament text is headed in the right direction, and we hope that our coalition paper will give all EU policymakers the information and input they need to get it right in the final negotiations.


Open source software is free to access, use and modify, and it plays a central role in the development and use of artificial intelligence. So, let's hope that EU policymakers take on board the suggestions and points made by the coalition of stakeholders, and that open source developers will be able to continue to contribute to AI innovation, unhindered by sweeping yet restrictive regulation.