The European Parliament’s Artificial Intelligence Act comes into force on August 1, 2024. Shaped by the policy experts now entrusted with regulating the creation, deployment, implementation and management of safe AI across the 27 member states of the European Union, the legislation and the compliance standards within it will affect organizations in North America and the rest of the world that interact with EU countries.

Perhaps mindful of cloud computing’s early ascendancy, when the technology industry accelerated development without due consideration for the security provisioning that exists today, the EU wants to make sure that both corporate and public bodies in the region adopt artificial intelligence safely and responsibly.

Four AI Risk Categories

Rosemary J. Thomas, senior AI labs technical consultant at Version 1, reminds us that the EU AI Act has split the applications of AI into four risk categories. “Those four AI categories are unacceptable risk, high risk, limited risk and minimal or no risk. Most of its provisions, however, won’t become applicable until after two years, giving companies until 2026 to comply. The exceptions to this are provisions related to prohibited AI systems, which will apply after six months, and those related to general-purpose AI, which will apply after 12 months,” said Thomas.
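As a minimal sketch of that phased timeline, the snippet below maps each class of provision Thomas mentions to its applicability window and projects approximate dates from the Act’s entry into force. The month counts come from the quote above; the names and the simplified date arithmetic are illustrative assumptions, not legal guidance.

```python
from datetime import date

# Entry into force of the EU AI Act (from the article).
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Phased applicability described by Thomas: prohibitions after 6 months,
# general-purpose AI rules after 12 months, most other provisions after 24.
# This mapping is an illustrative sketch, not legal guidance.
APPLICABILITY_MONTHS = {
    "prohibited_ai_systems": 6,
    "general_purpose_ai": 12,
    "most_other_provisions": 24,
}

def months_later(start: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 1 for simplicity)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, 1)

for provision, months in APPLICABILITY_MONTHS.items():
    print(f"{provision}: applicable from ~{months_later(ENTRY_INTO_FORCE, months)}")
```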

As ominous as it sounds, “prohibited AI systems” refers to AI that deploys subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision.

Balancing Innovation vs Standards

While it’s clear that AI needs international regulatory control measures as soon as possible, this wide-ranging set of regulations is not being laid down without questions being raised about its speed and scope. If we are seeking to control, corral and coalesce, have we done so in a way that is itself equitable and accountable? This is a significant sticking point for CalypsoAI CTO James White.

“There are still many questions that have yet to be answered and that absence of information could affect implementation,” advises White. “For instance, where will liability [for sloppy AI] rest? How will this Act interact with other legislation – and how will it apply to future innovations? Not knowing the answers explicitly makes it more challenging for an organization to prepare. Another challenge for businesses will be how to strike the perfect balance between pursuing innovation, complying with the Act… and ensuring AI development aligns with ethical standards, yet doesn’t stifle creativity or technological advancement – there’s a lot to digest here.”

This balancing act between innovation and control is certainly one that’s widely recognized across the enterprise IT sector now. Eduardo Crespo, VP of EMEA at PagerDuty, calls it a tightrope (insert balance-related analogy of your choice) and warns that companies “barrelling into AI usage” without a clear blueprint will feel the sting down the line. When it comes to data protection, for instance, his team has seen that EU regulators can be unrelenting, enforcing massive fines for transgressions, with no organization above the law. Violations of the AI Act are expected to draw large penalties, up to 35 million euros or 7% of a company’s worldwide annual turnover, whichever is higher, when compliance is not kept top-of-mind.
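To make that headline figure concrete, here is a hedged worked example of the penalty cap. The “whichever is higher” reading follows how the cap is generally reported; the function name and sample turnover are illustrative assumptions, not legal advice.

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Headline penalty cap cited in the article: up to EUR 35m or 7% of
    worldwide annual turnover, whichever is higher. Illustrative only."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Example: a company with EUR 2bn turnover faces a cap of EUR 140m,
# because 7% of turnover exceeds the EUR 35m floor.
print(f"EUR {max_ai_act_fine(2_000_000_000):,.0f}")
```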

Front Foot First Flourishes

“When new regulation is passed, there is a fair amount of ambiguity in interpreting the legal parameters, which is why leaders need to scrutinize their company’s technology posture continuously when it comes to AI, across compliance and liability. To flourish with AI in the long term, it’s about being on the front foot with procedure, rather than the back foot when called upon by auditors or regulators,” said Crespo.

Reflecting on how unsettled this market is right now, Crespo says that PagerDuty’s research suggests many leaders are unable to reach a consensus on digital strategy. While it is necessary, timely and arguably admirable that this regulatory mandate comes into being, the organizational impact of the EU AI Act is yet to be seen. In his role as AI group co-chair at global law firm DLA Piper, Gareth Stokes says that the EU AI Act marks a milestone in clarity and compliance. But he sees the Act’s adoption as the beginning of a journey, not the end: many gaps in the compliance puzzle remain, and organizations will await the keenly anticipated codes of conduct and liability directives with bated breath.

“Yet, despite those challenges, we see a real compliance opportunity for AI here,” said Stokes. “In contrast to many regulatory and compliance hurdles, those who have grasped the nettle early and put AI policies in place have almost universally seen a boost as the new-found certainty grants their people permission to innovate. We’ve seen time and again that those with good AI governance are also those unlocking AI’s productivity and efficiency benefits. In doing so, the new opportunities created comfortably offset the costs of compliance.”

Clear as Mud

At a time when the majority of technology advocates and evangelists are being politely positive about the impact of the EU AI Act, not everyone shares the glass-half-full stance. “Folks who think the EU AI Act is clear have either not read it or do not understand AI & ML,” asserts Dr Kjell Carlsson, head of AI strategy at Domino Data Lab. Carlsson is convinced that the language used in the Act is “extraordinarily broad,” making it effectively impossible to determine what would be sufficient for compliance.

“With the passing of the EU AI act, the scariest thing about AI is now AI regulation itself. Between the astronomical fines, sweeping scope, and unclear definitions, every organization operating in the EU now runs a potentially lethal risk in their AI, ML, and analytics-driven activities,” said Carlsson. “However, using these technologies is not optional and every organization must increase their use of AI in order to survive and thrive. Consequently, it is more important than ever for companies to build their responsible AI capabilities by implementing the processes and platforms to efficiently govern, validate, monitor and audit the entire AI lifecycle at scale.”
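What governing, validating, monitoring and auditing the AI lifecycle might look like in practice is sketched below: a minimal, hypothetical audit-trail schema in which every lifecycle event is appended to a log as one JSON line. All names, stages and fields are assumptions for illustration, not a prescribed compliance mechanism.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class LifecycleEvent:
    """One auditable event in a model's lifecycle (hypothetical schema)."""
    model_id: str
    stage: str            # e.g. "trained", "validated", "deployed", "monitored"
    actor: str            # who performed or approved the step
    evidence: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_audit_log(event: LifecycleEvent, path: str = "ai_audit.log") -> None:
    """Append the event as one JSON line, giving auditors a replayable trail."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")

# Usage: record that a (hypothetical) model passed validation.
append_audit_log(LifecycleEvent(
    model_id="credit-scoring-v3",
    stage="validated",
    actor="risk-team",
    evidence={"validation_report": "reports/ccv3-2024-08.pdf"},
))
```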

Quite how we navigate around AI now that this Act is in force is another key point in terms of implementation. A process of “watermarking” AI-generated content, so that it can be distinguished from material with significant human input, has been mooted as one effective procedure. We may also see some form of remuneration scheme for rightsholders who uphold all the legislative rulings, but how this would be categorized and executed is open to question.
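Watermarking can be implemented many ways, from statistical watermarks baked into model outputs to provenance metadata attached afterwards. The sketch below shows only the simplest of these, a signed provenance tag; the key, field names and scheme are illustrative assumptions, and nothing here is mandated by the Act.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical; real schemes vary widely

def tag_ai_content(text: str, model_id: str) -> dict:
    """Attach a signed provenance tag so downstream tools can verify
    that the content is AI-generated. Illustrative sketch only."""
    payload = {"content": text,
               "origin": {"ai_generated": True, "model": model_id}}
    digest = hmac.new(SECRET_KEY,
                      json.dumps(payload, sort_keys=True).encode(),
                      hashlib.sha256).hexdigest()
    payload["signature"] = digest
    return payload

def verify_tag(payload: dict) -> bool:
    """Recompute the signature to confirm the tag has not been tampered with."""
    claimed = payload.get("signature", "")
    body = {k: v for k, v in payload.items() if k != "signature"}
    expected = hmac.new(SECRET_KEY,
                        json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

tagged = tag_ai_content("Generated summary...", model_id="demo-llm")
assert verify_tag(tagged)
```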

Portions of Proportionality 

“The AI Act will introduce limited exceptions for text and data mining and recognize the importance of balancing copyright protection with promoting research and innovation. It acknowledges the need for proportionality in compliance requirements for startups and SMEs,” said Sebastian Gierlinger, VP of engineering at enterprise CMS company Storyblok. 

He notes that the EU AI Act includes a transparency requirement for “publishing summaries of copyrighted data used in training,” with some carve-outs that allow data mining of copyrighted works in instances such as use by research institutions. However, this is not considered a viable defense for AI companies running public and commercial generative AI systems. And while big tech puts pressure on governments to hold off on legislation, AI systems continue to train on copyrighted content. Gierlinger advises us that the AI Act requires transparency from providers to ensure accountability and the enforcement of copyrights, meaning that AI teams will be required to provide comprehensive information about the datasets they use.
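As a rough sketch of what such a training-data summary could contain, the record below captures the kind of provenance and licence information a provider might publish. Every field name and value is a hypothetical assumption; the Act’s precise reporting format is not specified here.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DatasetSummary:
    """Hypothetical record for the kind of training-data summary the
    transparency requirement points toward; field names are assumptions."""
    name: str
    source: str                # where the data came from
    licence: str               # licence or copyright status, if known
    contains_copyrighted: bool
    tdm_opt_out_respected: bool  # were text-and-data-mining opt-outs honored?

summaries = [
    DatasetSummary(
        name="news-corpus-2023",
        source="publicly crawled news sites",
        licence="mixed / partly copyrighted",
        contains_copyrighted=True,
        tdm_opt_out_respected=True,
    ),
]

# Publish the summary as JSON alongside the model documentation.
print(json.dumps([asdict(s) for s in summaries], indent=2))
```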

Throughout this discussion, humans do feature. Humans are stipulating the parliamentary mandate, humans are running the enterprise technology teams creating and implementing the AI models, humans are validating which intelligence engines carry a significant degree of human ratification (we could add retrieval-augmented generation, RAG, in here too as an essential seasoning)… and humans are using the AI-enabled apps that result at the end of this food chain. As we move forward with these new rulings, let’s hope we can keep humans at the heart of AI to ensure the world becomes more intelligent.
