As AI technologies continue to advance, addressing their ethical, legal and societal implications will be paramount, and the conversation will likely reflect growing recognition of the importance of responsible AI development and deployment.

Microsoft’s recently released blueprint for AI governance, “Governing AI: A Blueprint for the Future,” is based on five guidelines that governments should consider when developing policies, laws and regulations for AI.

A commitment to responsible AI development, the document provides a framework for organizations looking to establish governance practices for their AI initiatives. It advocates for public-private partnerships and collaboration to establish effective regulations and policies governing AI technologies, and it highlights the need for ongoing monitoring, evaluation and regulatory updates to address emerging challenges and opportunities.

Damian Mingle, president and CEO of LogicPlum, a machine learning platform, says it’s necessary to consider AI technology’s potential risks, benefits and societal implications and provide recommendations for responsible AI governance.

“Not everybody is up to speed on what is possible with AI, so it’s important to encourage conversation around ethical considerations, transparency, accountability and collaboration to ensure AI’s safe and beneficial deployment,” he says.

Mingle explains that as AI technology becomes more advanced and capable, it raises ethical questions about its use and potential impact on society.

“This document addresses the need for ethical guidelines and principles to ensure that AI is developed and used responsibly and beneficially,” he says, noting AI can affect various aspects of people’s lives, from personal interactions to critical infrastructure.

Cliff Jurkiewicz, vice president of global strategy at Phenom, says that while the framework’s core elements feel like a commonsense approach, it has two glaring flaws.

“The first is that it relies on ‘government led AI safety frameworks,’ and the challenge is that government bodies may lack the knowledge and foresight to understand these complex technologies well enough to regulate them adequately,” he explains.

He says some may read such a core element of the framework as “passing the buck,” an approach similar to Facebook founder Mark Zuckerberg challenging Congress to regulate social media while knowing such regulation would be difficult to achieve in a divided political environment.

“Second, it does not outline direct responsibility or accountability for organizations that create and promote AI,” Jurkiewicz says. “While framework elements 2-4 feel like they do, they’re more guiding principles than requirements or directives.”

He points out that while Microsoft is to be commended for leading the conversation, some may see the blueprint as requiring mass participation before organizations commit themselves.

“Microsoft’s blueprint should challenge others to either contribute or create their own version,” he adds. “Collaboration would benefit everyone.”

From Mingle’s perspective, it is essential to have regulations and governance frameworks in place to protect public safety, privacy and individual rights while leveraging the benefits of AI.

The document also stresses the importance of holding organizations and individuals accountable for the decisions and actions taken in developing and deploying AI systems.

“Transparency in AI algorithms and processes can help build trust and ensure that potential biases or discriminatory outcomes are addressed,” Mingle says.

He points out that AI governance requires cooperation among stakeholders, including governments, industry leaders, researchers and civil society organizations.

“Given the rapid pace of AI advancement, it is necessary to have flexible and adaptable governance frameworks to keep up with technological developments,” he adds.

Mingle says there will likely be an increased focus on AI education and workforce development to address the growing demand for AI professionals and to ensure a skilled workforce capable of navigating AI governance challenges.

Initiatives may be launched to promote AI literacy, provide training programs and integrate AI ethics and responsible AI practices into curricula across various disciplines.

“Overall, the conversation about AI governance is expected to become more nuanced, encompassing many issues and involving a broader set of stakeholders,” he says.