
Following the recent passage of the European Union’s AI Act, a spotlight has been trained on global reactions and business responses. Considered the world’s first comprehensive AI law, it ushers in a new era, leaving government, business and stakeholders to ensure AI adoption isn’t stifled and open-source innovation flourishes.

As election season approaches, a number of U.S. states have introduced legislation aimed at AI’s potential to manufacture deepfakes that could spread mis- and disinformation. There are 407 AI-related bills active across the U.S. alone, more than double the 191 AI bills tallied in September. These bills are part of a changing landscape in which governments are moving from principles around trustworthy AI to concrete actions to protect citizens, businesses and government agencies from potential risks.

In addition to government, the business world and stakeholders, from boards to consumers, are prioritizing current AI issues. Fortunately, technology, including AI itself, is ready to tackle these issues head-on. Trust in AI hinges on a deeper understanding of how AI models are trained, with what data, how they arrive at their recommendations and whether they are routinely screened for harmful or inappropriate bias. Companies can only realize AI’s huge potential to drive progress if they trust the technology as a force for good.

As organizations and regulatory bodies around the world discuss how to moderate high-risk AI deployments while achieving innovation and earning trust, companies can act by:

Mitigating Bias in AI and Machine Learning

Ensuring fairness and reducing bias is central to risk mitigation. How organizations address bias will ultimately shape how broadly they can use AI. This is especially true in highly regulated industries, such as financial services and healthcare, where biased model recommendations can have detrimental repercussions.

Companies can implement proactive monitoring, using commercial toolsets to detect bias, drift and performance deviations. It is also important to involve a diverse group of stakeholders, bringing different perspectives into an ethical AI governance framework that reflects the culture and values of both the company and the end users of the technology.
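
As a minimal sketch of what such monitoring could look like in code (the metrics, thresholds and synthetic data below are illustrative assumptions, not a particular commercial toolset):

```python
# A minimal sketch of bias and drift checks; metric names, thresholds
# and the synthetic data are illustrative assumptions.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def population_stability_index(expected, actual, bins=10):
    """PSI: a common score for input drift between two samples."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

# Illustrative check: flag the model for review if either metric
# crosses a (hypothetical) governance threshold.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, 1000)        # model decisions
groups = rng.integers(0, 2, 1000)       # a protected attribute
train_scores = rng.normal(0.0, 1, 1000) # training-time feature
live_scores = rng.normal(0.3, 1, 1000)  # production feature

if demographic_parity_difference(preds, groups) > 0.1:
    print("Bias alert: review model outcomes by group")
if population_stability_index(train_scores, live_scores) > 0.2:
    print("Drift alert: input distribution has shifted")
```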

Ensuring Transparency in AI Models and Understanding the Provenance of Data

Governance is essential to building trust. Without it, skepticism will surge and legal issues will follow. It is important to place ethical considerations such as technical robustness and safety, fairness, transparency and, most importantly, human agency at the forefront of the organization’s AI governance program. To achieve this, generative AI models must be transparent, explainable and informed by governed data the organization is confident in. As a best practice, corporations and governments should include a diverse group of stakeholders to drive clear standards and practices, and leverage technology to automate those standards so that responsible AI adoption is consistent across the enterprise.

AI governance needs to start when the business requests a use case and be infused across the entire model life cycle, through development and runtime, to create an audit trail that explains why and how AI decisions were made.
At the heart of most generative AI are vast amounts of data. Companies therefore need, more than ever, to understand data lineage: what governed data was used, where it originated, how it has evolved and where discrepancies appear in the data flow. To stay informed while exercising ethical responsibility, companies are joining alliances for responsible AI, like the AI Alliance launched in December. Such alliances, along with software tools for data provenance tracking, are good starting places.
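
As a rough illustration of what such a lifecycle audit trail might capture (all field names, event types and values below are hypothetical, not a specific lineage product):

```python
# A minimal sketch of an audit-trail record for the model life cycle;
# every field name and event here is an illustrative assumption.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class LineageEvent:
    model_id: str
    stage: str            # e.g., "use-case-request", "training", "runtime"
    dataset: str          # governed dataset used at this stage
    dataset_origin: str   # where the data came from
    actor: str            # who triggered the event
    rationale: str        # why the decision or change was made
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Append-only log: each lifecycle step adds an explainable entry.
audit_log = [
    LineageEvent("credit-model-v2", "use-case-request",
                 "loans_2023_q4", "core banking system",
                 "risk team", "requested by business for loan triage"),
    LineageEvent("credit-model-v2", "training",
                 "loans_2023_q4_cleaned", "governed data catalog",
                 "ml platform", "retrained after drift alert"),
]
print(json.dumps([asdict(e) for e in audit_log], indent=2))
```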

Creating, or Advancing and Evolving, AI Governance Programs

It is becoming essential that companies establish (or continue to evolve and improve) AI governance frameworks to help enable compliance with internal policies and procedures, industry standards and increasing regulation. AI presents opportunities and threats in virtually every industry today, and businesses need to be savvy and responsive, because these technologies could pose significant threats to groups, individuals and society. Governance frameworks enable companies to manage AI against evolving regulations, guidance and policies worldwide. This can be accomplished by applying technology that maps external regulatory obligations and internal values-based ethical guidelines to existing requirements and controls, automating the identification of regulatory changes to applicable requirements and effectively tracking metadata across model life cycles. By developing a global view of AI compliance, getting involved in AI programs, investing appropriately and enforcing compliance, businesses will be primed to stay in tune with the ever-changing landscape.
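
As a minimal sketch of that mapping idea (the regulation labels, control IDs and gap rule below are invented for illustration, not an actual compliance catalog):

```python
# A minimal sketch of mapping external obligations to internal controls;
# all names here are illustrative assumptions.
obligations = {
    "EU-AI-Act:transparency": ["CTRL-07 model documentation",
                               "CTRL-12 user disclosure"],
    "EU-AI-Act:risk-management": ["CTRL-03 bias monitoring"],
    "internal:ethics-review": [],   # no control mapped yet
}

# Flag obligations with no mapped control so the governance team
# can close the gap before the next audit cycle.
for obligation, controls in obligations.items():
    status = "covered" if controls else "GAP - needs a control"
    print(f"{obligation}: {status}")
```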

As we advance toward a more AI-regulated world, global governments, corporations and societal contributors must continue to focus on compliance, prioritize liability over licensing and support responsible AI adoption. Proactive corporate accountability, ensuring that AI is explainable, transparent and fair, is also unavoidable. If not, fines will hit hard: violators of the AI Act can be fined up to 35 million euros ($38 million) or 7% of global annual revenue, whichever is higher.

It will pay for businesses to preemptively invest in technologies, processes and people, so that AI can be managed responsibly, with reduced risk, as its full (permitted) potential to benefit society and business moves closer to being realized.
