Perhaps the most surprising thing about the historic AI Act approved by the European Union is that it has regulatory teeth, in marked contrast to a long-standing dialogue characterized by fussing over “guidelines.” A new EU AI Office will have the power to impose fines on transgressors of as much as seven percent of a company’s annual worldwide turnover or nearly $40 million, whichever is higher.
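As a rough illustration of how that “whichever is higher” cap works, here is a minimal sketch in Python; the seven percent rate and the roughly $40 million floor come from the figures above, while the turnover numbers are hypothetical:

    def max_fine_usd(annual_worldwide_turnover_usd):
        # Cap is 7% of annual worldwide turnover or a flat floor of
        # roughly $40 million, whichever is higher (figures per the article).
        return max(0.07 * annual_worldwide_turnover_usd, 40_000_000)

    # Hypothetical examples: a large firm hits the percentage cap,
    # a smaller firm hits the flat floor.
    print(max_fine_usd(10_000_000_000))  # 700000000.0 (7% of $10B)
    print(max_fine_usd(100_000_000))     # 40000000 ($40M floor)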

The key takeaway is an emphasis on greater transparency from AI developers, along with an outright ban on some applications. Given that even companies based outside the EU will need to adhere to these rules if they want to do business in Europe, the EU’s rules on AI have a worldwide impact.

“The AI Act is a global first. A unique legal framework for AI you can trust,” said European Commission President Ursula von der Leyen in a social media post.

While an EU AI law had been proposed as far back as 2021, the explosion of generative AI systems like ChatGPT in recent months proved to be a late-stage hurdle. The EU AI Act adopts a two-tier approach to regulation, with basic transparency requirements for general-purpose AI models and more stringent rules for more powerful, high-risk models.

The two-tier approach can be seen as a way to avoid choking off AI development within the EU. Europe is keen to protect its own future AI champions. France’s Mistral AI, founded in 2023, announced on December 10 that it had raised $414 million, propelling it, along with Germany’s Aleph Alpha, into the front ranks of EU-based AI companies. Mistral AI, in particular, is considered a potential rival to OpenAI, as it is backed by chipmaker Nvidia and software giant Salesforce.

The list of banned applications is fairly lengthy:

- Biometric categorization systems that use sensitive characteristics, such as political, religious or philosophical beliefs and sexual orientation.

- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

- Emotion recognition in the workplace and in educational institutions.

- Social scoring based on social behavior or personal characteristics.

- AI systems that manipulate human behavior to circumvent people’s free will.

- AI systems that exploit the vulnerabilities of people due to their age, disability, or social or economic circumstances.

The use of biometric identification systems did get a narrow exemption for law enforcement purposes. Biometrics can be used for targeted searches for victims of crimes like kidnapping, trafficking or sexual exploitation; for the prevention of a terrorist threat; and for the investigation of major crimes committed by criminal organizations, as well as perpetrators of environmental crimes. Groups like Amnesty International had lobbied for a complete ban with no exceptions, fearing potential abuses by governmental authorities.

Some banned applications are already in use outside of the EU, such as the social scoring system widely reported to be used in China. Controversial companies like Clearview AI have been cited for unfettered scraping of facial images. High-risk applications include AI systems that could influence elections and voter behavior, self-driving cars, and medical procedures. Crucially, citizens will have the right to file complaints about AI systems and to receive explanations about AI decisions that affect their rights.

Basic transparency requirements include technical documentation, adherence to copyright law and details about the material used for AI training. AI models posing systemic risks face additional requirements: model evaluation and risk assessment, adversarial testing, cybersecurity robustness and energy-use reporting.

Transparency across the AI world is having a moment. The newly formed AI Alliance, for example, is advocating for open-source AI development. Conversely, proprietary black-box AIs are attracting more scrutiny, most notably around the rumors tied to the OpenAI/Sam Altman saga and rampant speculation about an advanced AI agent called Q*. That agent may be at or close to an artificial general intelligence (AGI) capable of solving complex math problems and making decisions autonomously, increasing the odds of an AI “misaligned with human interests,” in the words of independent industry analyst Shelly Palmer.

The AI Act is generally seen as a win for open-source advocates. It is likely to be formally passed next spring and to take effect two years later, once the required AI regulatory infrastructure is in place.
