
In March, the European Union introduced the most extensive artificial intelligence (AI) legislation in the world. Its goal: to place strict guardrails and requirements on AI systems, particularly those classified as high-risk.
These include bans on certain uses of AI, such as emotion recognition in schools and the untargeted scraping of facial images to build recognition databases, alongside compulsory risk assessments and transparency requirements.
This legislation arrives amid a heated debate over AI’s potential dangers relative to its benefits. On one side are the “safety-ists,” who advocate a more measured, surgical adoption of AI; on the other is the “move fast and break things” camp, which believes such fears are overblown and that acting on them would decelerate innovation.
Who’s right? Both sides make good points, but neither in totality.
Where There’s a Pro, There’s a Con
The biggest con of any regulation is the fear of ‘stifling innovation’, whether or not that fear is ever realized.
The Act imposes potential fines of up to 7% of global revenue for non-compliance. It also mandates that all AI products in the EU market, regardless of origin, comply with these new rules, thus extending its impact globally as major AI firms must adapt to enter or operate within the EU.
This raises a few critical questions:
- Will smaller, resource-constrained companies have trouble complying?
- Will startups choose to set up shop elsewhere in the world?
- Will this deter larger companies from doing business in Europe as well?
- Will all of this cause a huge brain drain?
All of the above are possible, which is why implementation and enforcement must proceed very, very carefully.
Despite these risks, there are significant potential benefits in terms of trust and adoption that come from drawing red lines around what is prohibited or heavily regulated. The Act could well promote transparency and data privacy, which could ultimately lead to better outcomes.
It could also have a global impact, as the rules have the potential to guide decisions in the U.S., allowing individual states or the federal government to cherry-pick the best aspects of this ‘global first’ regulation, much as GDPR did. Because the regulation is EU-wide, it will significantly affect U.S. companies that do any business in Europe, especially the large tech giants, so some aspects of the regulation are likely to become de facto practices.
So What Now? Prepare
Although this is likely not a perfect solution and will be revised over time, the pace of development and change in the AI industry, and in AI-enabled industries, means that done is better than perfect. Further efforts can iterate on this good start.
Here are some proactive steps that firms can take to adequately prepare in the short term:
- Audit any existing or proposed AI use. Catalog all AI systems in your stack, examining how each is used, its purpose, and its impact on operations and data handling. Most importantly, audit third-party AI systems thoroughly, as vendors may have security standards that don’t align with your organization’s. (A minimal sketch of what such an inventory record might look like follows this list.)
- Establish a primary point of contact for keeping those records up to date. Designate a responsible officer or team whose role includes maintaining this AI inventory. This point of contact will ensure that all records are current and reflect any changes in AI use, assisting in compliance and oversight.
- Evaluate the data security and data sources of the firm’s AI tools to identify potential bias and privacy risks. Conduct a comprehensive review of the security measures protecting data used by AI systems. Assess the origins of the data to identify any biases embedded in the data sets or risks to data privacy, adjusting data handling and processing practices as needed.
- Include questions on AI use, policies and controls in due diligence assessments of vendors. Amend your vendor due diligence processes to incorporate evaluations of AI usage. This includes understanding vendors’ AI frameworks, their compliance with relevant laws and how they manage data security and privacy risks.
- Review risks associated with AI use regularly, establishing guidelines to ensure consistency. Implement a schedule for regular reviews of AI-related risks, considering changes in legal requirements, technological advancements, and organizational needs. Develop and maintain guidelines that ensure consistent application of risk assessment methods and remediation strategies across the organization.
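
To make the inventory and review-cadence steps above concrete, here is a minimal sketch of what a single AI inventory record might look like in code. Everything in it is an illustrative assumption rather than anything the Act prescribes: the field names, the internal risk tiers (which only loosely mirror the Act’s categories), and the review_is_due helper are hypothetical and should be adapted to your own governance framework.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum


class RiskTier(Enum):
    """Internal risk tiers; illustrative only, loosely mirroring the Act's categories."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One entry in the firm's AI inventory (hypothetical schema)."""
    name: str
    owner: str                      # designated point of contact (step 2)
    purpose: str                    # what the system is used for (step 1)
    vendor: str | None              # third-party supplier, if any (step 4)
    data_sources: list[str] = field(default_factory=list)  # data provenance (step 3)
    risk_tier: RiskTier = RiskTier.MINIMAL
    last_reviewed: date = field(default_factory=date.today)
    review_interval_days: int = 90  # tighten for higher-risk systems (step 5)

    def review_is_due(self, today: date | None = None) -> bool:
        """True if this record has gone unreviewed past its interval."""
        today = today or date.today()
        return today - self.last_reviewed > timedelta(days=self.review_interval_days)


# Example: flag inventory entries that are overdue for a risk review.
inventory = [
    AISystemRecord(
        name="resume-screener",
        owner="governance@example.com",
        purpose="Rank inbound job applications",
        vendor="ExampleVendor Inc.",
        data_sources=["internal ATS exports"],
        risk_tier=RiskTier.HIGH,
        last_reviewed=date(2024, 1, 15),
        review_interval_days=30,
    ),
]
overdue = [r.name for r in inventory if r.review_is_due()]
print(overdue)
```

Even a lightweight schema like this gives the designated point of contact something concrete to keep current, and makes overdue reviews easy to surface, whatever tooling the firm ultimately adopts.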
Legislation and Steps to Prepare Will Establish Good Practices
While the kinks in the legislation are still being ironed out, these steps can help establish good practices for AI system hygiene. They will encourage the safe use of AI systems and their data, laying the groundwork for sound AI governance down the road, regardless of changes in the regulatory environment.