The successful integration of artificial intelligence (AI) into our lives depends mainly on trust. Both businesses and the general public need to trust AI in order to use it, and the first step toward achieving that trust has now been taken: the European Union (EU) has approved the landmark AI Act, a solid move toward setting ground rules and a likely template for AI regulation globally.
This is the world’s first comprehensive AI law from a major regulator, and a significant moment. The act applies to anyone who develops or deploys AI systems in the EU, and its scope will expand as AI technology continues to develop. The legislation safeguards against intrusive and discriminatory practices. Eduardo Azanza, CEO of the biometrics solutions company Veridas, noted, “The passing of the Artificial Intelligence Act should not be underestimated at all. For technologies such as AI and biometrics to ever be successful, it is essential that there is trust from businesses and the wider public.”
I took the opportunity to ask Eduardo why we need this law now. “Currently, there is slight apprehension towards AI among organizations and the wider public,” he said. “Many people have raised worries regarding how and where AI stores and uses data. Having agreed standards and deliverables such as the Artificial Intelligence Act helps build public trust that AI and biometrics are being used responsibly and ethically.
“As the U.S. looks to introduce its own Artificial Intelligence Act, it is essential that it works with the EU to define minimum global standards. These standards need clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved. Ultimately, public trust will determine whether we continue to see the rapid development of AI.”
What Will the New Rules Cover?
While many AI systems are low-risk and pose no real threat, they must still be assessed. The new rules do just that: they establish obligations for both providers and users across the board, scaled to the risk level of the AI products and services involved. Different rules apply at different levels.
The different categories are:
- Unacceptable Risk: This applies to any systems deemed to be a risk to people; such systems are banned. Specifically:
  - “Cognitive behavioral manipulation of people or specific vulnerable groups: For example, voice-activated toys that encourage dangerous behavior in children.”
  - “Social scoring: Classifying people based on behavior, socio-economic status or personal characteristics.”
  - “Real-time and remote biometric identification systems, such as facial recognition.”
So, for those of us who automatically think of AI comparisons to Terminator, Ted or Black Mirror – for now, we are safe.
- High Risk: Any AI systems that negatively affect safety or fundamental rights. These fall into the following eight areas, and such systems must be registered in an EU database:
  - Biometric identification and categorization of persons
  - Management and operation of critical infrastructure
  - Education and vocational training
  - Employment, worker management and access to self-employment
  - Access to essential private services and public services
  - Law enforcement
  - Migration, asylum and border control
  - Assistance in legal interpretation and application of the law
- Limited Risk: This category covers systems such as AI that generates or manipulates image, audio or video content, and is meant to let us, as users, make informed decisions.
Users will be made aware that they are interacting with AI and given the choice to continue using it or opt out.
- Generative AI: It must comply with transparency requirements, and products like ChatGPT must disclose that content was generated by AI.
Generative AI models will be prevented from generating illegal content.
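The tiered framework above amounts to a lookup: a system’s use case maps to a risk tier, and the tier determines the obligations that apply. The sketch below illustrates that idea only; the tier names follow the article, while the use-case keys and obligation summaries are hypothetical simplifications, not the act’s legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # must be registered in an EU database
    LIMITED = "limited"            # transparency: users must know it's AI
    MINIMAL = "minimal"            # low-risk, still assessed

# Hypothetical mapping of example use cases (drawn from the article) to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "realtime_biometric_id": RiskTier.UNACCEPTABLE,
    "critical_infrastructure": RiskTier.HIGH,
    "border_control": RiskTier.HIGH,
    "image_generation": RiskTier.LIMITED,
}

def obligations(use_case: str) -> str:
    """Return a one-line summary of obligations for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return {
        RiskTier.UNACCEPTABLE: "Banned: deemed a risk to people.",
        RiskTier.HIGH: "Allowed with EU database registration.",
        RiskTier.LIMITED: "Allowed with disclosure that users are interacting with AI.",
        RiskTier.MINIMAL: "Allowed; assessed, but no special obligations.",
    }[tier]

print(obligations("social_scoring"))    # Banned: deemed a risk to people.
print(obligations("image_generation"))  # Allowed with disclosure that users are interacting with AI.
```

The point of the sketch is the structure: obligations attach to the tier, not to the individual product, which is why the act can cover new systems as they appear.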
Rules for Transparency, Responsibility and Ethics
Why are the new rules necessary? It’s a clear case of accountability. As Azanza says, “It’s critical that we have established agreed standards and deliverables to ensure that AI and collected biometric data are used responsibly and ethically. There must be clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved.”
“Ultimately, it’s businesses’ duty to responsibly and ethically use AI technology, as its capability to replicate human abilities raises huge concerns. Organizations need to be conducting periodic diagnoses on the ethical principles of AI. Confidence in AI security technology must be based on transparency and compliance with legal, technical and ethical standards.”
The Role of Cybersecurity in the Development of Regulations
The cybersecurity community will also play a key role in the development and implementation of ongoing AI regulations, particularly given the frenetic pace of AI development. As with cloud, endpoint and mobile technologies before it, the community can draw on those experiences to identify and counter attack vectors that attempt to exploit AI system vulnerabilities.
Christopher “Tito” Sestito, CEO of HiddenLayer, a security solutions provider, says, “The EU AI Act is one step in what we expect to be continued AI regulatory actions globally. HiddenLayer believes the cybersecurity community is uniquely prepared to play a key role in the continued development of regulations for AI. We have decades of experience with new technologies, their unique attack types and development of appropriate controls to balance security and technology adoption.”
The Future of AI Regulation in the United States
So, is the new EU Artificial Intelligence Act the future for the U.S. too? Presently, there is no comprehensive federal legislation dedicated solely to AI technology and its regulation. However, policymakers and lawmakers have recognized the need for a framework, and U.S. regulation to protect against the harmful aspects of AI is surely a necessity. Ani Chaudhuri, CEO of Dasera, says, “We believe that responsible AI development should be a global endeavor. As Europe sets the bar, it is incumbent upon the United States to catch up and play an active role in shaping AI policies. We can strike the right balance and ensure AI benefits society by fostering innovation while safeguarding individual rights.”
CNN Business notes that Europe is indeed taking the lead. China already has regulations in place, and Australia is seeking input on its own. Until the U.S. takes a comprehensive approach to domestic AI regulation, Europe will continue to lead this race.