
The European Union (EU) kicked off enforcement of its groundbreaking artificial intelligence (AI) law Sunday with a ban of AI systems it says pose “unacceptable risk.”
The EU AI Act places tough restrictions on activities such as social scoring and the manipulation of an individual’s decisions, with fines of up to $35.8 million or 7% of global annual revenue from the previous fiscal year, whichever is higher. By comparison, fines under the General Data Protection Regulation (GDPR), Europe’s strict digital privacy law, top out at $20.5 million or 4% of annual global turnover.
Though the EU AI Act took effect Sunday, organizations aren’t likely to face fines until August, when the EU is expected to begin formally enforcing penalties.
The act deems AI systems unacceptable if they attempt to profile people with disabilities or minors; predict whether people will commit crimes based on their appearance; use biometrics to infer a person’s characteristics, such as sexual orientation; collect “real-time” biometric data in public places for use by law enforcement; or expand facial recognition databases by scraping images online or from security cameras.
In December, the newly created EU AI Office published a code of practice for general-purpose AI models (GPAI) like OpenAI’s GPT family of large language models (LLMs). It requires developers of “systemic” GPAI models to undergo rigorous risk assessments. An executive order from former President Joe Biden in October 2023 would have done something similar, but President Donald Trump quashed it with his own executive order upon taking office in January.
The tech industry has been bracing for the EU AI Act for nearly a year, since it was passed by the European Parliament. The act is the most comprehensive effort by a government body to regulate AI use, and most American tech companies have indicated they will comply. Last September, more than 100 companies, including Amazon, Google, and OpenAI, signed the EU AI Pact, voluntarily agreeing to start applying the law’s principles and to identify AI systems likely to be deemed high-risk. Meta, Apple, and French startup Mistral were among those that didn’t sign.
Europe’s landmark law comes as the U.S. and China duel to build the biggest AI models, a race heightened by the overnight emergence of Chinese AI startup DeepSeek.
“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” Diyan Bogdanov, director of engineering intelligence and growth at Payhawk, told CNBC.
“The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation — they’re defining what good looks like,” Bogdanov said.