
The European Parliament on Wednesday overwhelmingly approved the EU AI Act, a sweeping set of rules and regulations designed to ensure the safe use of AI throughout the political bloc and mitigate the risks associated with the rapidly emerging technology.

The vote on the framework, which has been in the works since 2021 and was agreed to by all 27 member states in December, cleared the most significant remaining hurdle and sets the stage for the EU AI Act to take effect in May or June, after final approval by the European Council.

It puts the EU at the forefront of legislation aimed at placing controls and guidelines on the development and use of AI software, at a time when innovation, adoption, and investment in the technology are expanding rapidly with the rise of generative AI, set in motion by OpenAI’s release of its ChatGPT chatbot in late November 2022.

“Europe is NOW a global standard-setter in AI,” Thierry Breton, the European commissioner for internal market, wrote in a post on X (formerly Twitter).

The AI Act passed with 523 votes in favor and only 46 against. There were 49 abstentions.

Praise for the EU AI Act

Many in the tech industry applauded the passage, with Christina Montgomery, IBM’s vice president and chief privacy and trust officer, calling it “monumental” and a significant step for AI regulation not only in Europe but also around the world.

Montgomery noted that European policymakers have for several years been wrestling with how to regulate AI to ensure businesses, citizens, and governments are protected from its potential risks.

“Unless people in business trust this groundbreaking technology, we won’t be able to realize the benefits to industry and to society as a whole,” she told journalists and analysts in the leadup to the vote.

Jonathan Dambrot, CEO of enterprise AI security startup Cranium AI, told Techstrong.AI that “it’s impossible to overstate the impact that this leading regulation, EU AI Act, will have on the global AI development and deployment environment, likely even more so than GDPR.”

“Historically, many international regulatory bodies look to the European Union to provide a baseline on security and privacy regulations,” Dambrot said. “Given the complexity of the challenge to define and implement regulatory requirements on AI, the approval will be a major catalyst to the global regulatory sphere for AI. I highly expect other countries to leverage the AI Act as a template to modify to support their own particular requirements.”

Levels of Risk

The regulations outline the levels of risk that AI applications pose, from low up to “unacceptable.” Essentially, the higher the risk, the stricter the rules. Some applications, like AI-based recommendation software, rank among the lower risks. Others, which could threaten areas such as health, safety, fundamental rights, or democracy, will face stronger requirements like maintaining use logs, being accurate and transparent, using high-quality data, and ensuring human oversight.

Such applications include those used for critical infrastructure like water or energy systems, education and vocational training, and essential private and public services like health care and banking.

Then there are the uses that bring unacceptable risks – which are essentially banned – such as facial recognition software used in public places, social scoring, emotion recognition in schools or workplaces, and, in some cases, predictive policing. AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.

In addition, in a nod to the fast-growing generative AI market, the EU added requirements for general-purpose models from AI vendors like Google, Microsoft, and OpenAI, such as detailing the data – from text to video to images – that the models are trained on.

Context is Key

Such context is critical when it comes to AI, IBM’s Montgomery said.

“The way in which models are developed and ultimately deployed really matters,” she said. “It provides clear guidance on end uses and categories of high-risk activity, it requires companies to show the impact of high-risk systems in the case of things like biases, and it centers on transparency, because nobody should be tricked into interacting with an AI.”

It won’t be easy for businesses to navigate their way through the new regulations, though they will have time, Cranium AI’s Dambrot said. They’ll be challenged in finding and publishing the relevant information about their AI systems, given the rise of shadow-AI – where AI applications are being used by employees without the company’s knowledge – and the fact that every new and legacy tool is being embedded with AI capabilities.

“Simply meeting transparency requirements in the EU AI Act will require coordination across the enterprise, and conducting follow-on risk assessments and monitoring for these systems create pressure on traditionally under-funded departments,” he said. “What is notable here is that enterprises will appreciate the compliance grace period of anywhere from six to 24 months, as many organizations are still in the early stages around standing up governance controls for their AI usage.”

The rules for chatbots and other general-purpose AI systems will come into effect a year after the law does. By the middle of 2026, all the regulations will apply.

Ahead of the Pack

The European Parliament’s vote comes as other countries are trying to pull together legislation to catch up with AI development and accelerate the benefits while reducing risk. In the United States, President Biden in October 2023 issued an executive order that calls for an all-of-government approach and collaboration with the private sector to ensure the safe and secure development and use of AI.

In addition, AI companies are putting secure AI policies in place and have worked together to address the risks that come with AI, including an agreement last year among Microsoft, Meta, Google, OpenAI, Amazon, Anthropic, and Inflection, made in conjunction with the Biden Administration.

Some of the same companies – such as Microsoft, Google, Meta, and OpenAI, along with Apple – last month joined a consortium created by the White House regarding the safe development of AI. In December, the Cloud Security Alliance launched the AI Safety Initiative, which includes Microsoft, Google, Amazon, OpenAI, and Anthropic, as well as the Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA). That initiative also focuses on AI-associated security and risks.

In addition, at least seven states are working on legislation aimed at reducing bias in AI.

Other countries like China and Brazil reportedly are putting in place guidelines for AI, as are the United Nations and other associations of countries.
