AI Projects and Governance

As businesses increasingly rely on artificial intelligence (AI) to drive innovation and efficiency, ensuring that this technology is used ethically and safely becomes paramount. In response to the massive adoption of AI, national and international organizations have begun to put forth new AI-focused frameworks, which are helping companies mitigate potential risks while ensuring compliance. The International Organization for Standardization recently published ISO 42001, while the National Institute of Standards and Technology (NIST) has released its AI Risk Management Framework, along with a draft profile for generative AI — two major AI frameworks that I’ll discuss shortly.

The State of AI Governance

Companies from every industry are rapidly adopting AI for the numerous benefits it can bring and the many use cases where it can be applied. These benefits, however, come with risks and concerns. That’s why new frameworks are being developed and published: to bring AI usage under compliance and governance.

AI governance ensures ethical and responsible use of AI, manages risks associated with AI systems (such as accuracy issues and hallucinations), and helps build and maintain public trust. For companies building AI into their products, complying with AI frameworks demonstrates to customers and stakeholders that they take the security of AI seriously. And, for companies allowing employees to use AI tools in their jobs, formal policies help govern and manage this usage.

ISO 42001

One of the first major AI standards to be published, ISO 42001 was released by the International Organization for Standardization (ISO) in December 2023. The standard specifies requirements for establishing, implementing, maintaining and continually improving an AI management system within an organization.

Some key points of the framework include:

  • This is a broader AI governance framework, focused on establishing an AI management system for a given organization.
  • The framework is intended for any organization that is developing, deploying, or using AI systems.
  • The framework takes a management system approach, delivering guidelines for integrating AI governance into an organization’s processes and overall culture.
  • ISO 42001 covers a range of AI governance aspects, including organizational context, leadership, lifecycle processes, risk management, stakeholder engagement, and transparency.
  • This framework follows the same structure as others from ISO, with clauses for context, leadership, planning, support, operation, performance evaluation, and improvement.
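As a rough illustration, an organization beginning an ISO 42001 gap assessment might track its readiness against these clauses with a simple checklist. The clause names below follow the standard’s high-level structure; the scoring scheme and the `readiness_report` helper are hypothetical, sketched here purely for illustration.

```python
# Hypothetical ISO 42001 readiness checklist. Clause names follow the
# standard's high-level structure; the status/scoring model is illustrative.

ISO_42001_CLAUSES = [
    "Context of the organization",
    "Leadership",
    "Planning",
    "Support",
    "Operation",
    "Performance evaluation",
    "Improvement",
]

def readiness_report(statuses: dict[str, bool]) -> tuple[list[str], float]:
    """Return the clauses still open and the fraction marked complete."""
    gaps = [c for c in ISO_42001_CLAUSES if not statuses.get(c, False)]
    score = 1 - len(gaps) / len(ISO_42001_CLAUSES)
    return gaps, score

# Example: an organization that has addressed only two clauses so far.
statuses = {"Leadership": True, "Planning": True}
gaps, score = readiness_report(statuses)
print(f"{len(gaps)} clauses open, {score:.0%} complete")
```

In practice a real gap assessment would track the individual requirements inside each clause, but even a coarse view like this helps teams see where their management system is thinnest.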

NIST AI RMF

Another major AI framework is the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (AI RMF), for which a companion generative AI profile was published in draft form in April of this year. The framework helps organizations identify and manage the risks of AI systems, and the draft profile proposes actions specific to generative AI risk management.

Some key points of the framework include:

  • Designed to manage risks related to AI systems, with an emphasis on trustworthiness.
  • A voluntary framework, widely referenced by U.S. federal agencies and contractors, and also being adopted by companies in regulated industries such as healthcare and finance.
  • This framework follows a risk-based approach, focusing on identifying, assessing, and mitigating risks associated with AI systems.
  • Emphasis on AI system trustworthiness, addressing risks related to security, safety, fairness and accountability.
  • This framework is structured around four core functions — Govern, Map, Measure, and Manage — that organizations cycle through as they address AI risks.
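The risk-based approach described above — identifying, assessing, and mitigating AI risks — can be sketched as a minimal risk register. The risk categories below echo the trustworthiness concerns the framework names (security, safety, fairness, accountability); the data model, scoring convention, and threshold are hypothetical illustrations, not part of NIST’s specification.

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    """One entry in a hypothetical AI risk register."""
    name: str
    category: str          # e.g. "security", "safety", "fairness", "accountability"
    likelihood: int        # 1 (rare) .. 5 (near certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring -- a common, but illustrative, convention.
        return self.likelihood * self.impact

def top_risks(register: list[AIRisk], threshold: int = 12) -> list[AIRisk]:
    """Identify the risks whose score meets the treatment threshold, worst first."""
    return sorted((r for r in register if r.score >= threshold),
                  key=lambda r: r.score, reverse=True)

# Example register with made-up entries.
register = [
    AIRisk("Model hallucination in customer support", "safety", 4, 4),
    AIRisk("Training data bias", "fairness", 3, 5,
           mitigations=["dataset audits", "fairness metrics in CI"]),
    AIRisk("Prompt injection", "security", 3, 3),
]
for risk in top_risks(register):
    print(f"{risk.score:>2}  {risk.category:<10} {risk.name}")
```

A real implementation would tie each entry back to the specific framework function and subcategory it addresses, but the shape of the loop — enumerate risks, score them, prioritize treatment — is the same.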

What’s Next for AI Governance?

ISO 42001 and NIST AI RMF are two of the first major frameworks focused on AI, though more are likely to follow. These frameworks are not mutually exclusive, with overlap in what they cover when it comes to AI regulation. For organizations building, deploying, or using AI, choosing one of these frameworks to adhere to will help mitigate risks and improve safety related to AI.

While the actual enforcement of these frameworks is still to be determined, ISO 42001 will have an accredited certification audit option for organizations that adopt it. NIST AI RMF, on the other hand, does not have a formal certification but serves as a reference for implementing best practices. Regardless of which standard you choose to adopt, adherence to AI governance frameworks shows customers and stakeholders that safeguards are being put into place and can be proven.

As AI usage expands, we can expect AI governance to follow. This means increased regulatory measures, laws, and standards focused on AI safety and usage. It also means a greater emphasis on AI ethics and responsible AI. As AI governance and regulations come into focus, proactively meeting one of the first frameworks (ISO 42001 or NIST AI RMF) can help organizations stay ahead of the curve on AI requirements and provide a competitive advantage by demonstrating a commitment to AI safety and responsibility to customers.
