Synopsis: In this Techstrong AI video, Ken Huang, chief artificial intelligence (AI) officer at DistributedApps.ai, explains what it will really take to govern AI successfully.
In this episode of the Techstrong AI video series, host Mike Vizard speaks with Ken Huang, chief AI officer at DistributedApps.ai, about the evolving challenges of AI governance and compliance. Huang explains that while legacy AI models are covered by well-established governance practices, generative AI presents new challenges because of its unpredictable nature. He highlights efforts by organizations such as the Cloud Security Alliance to map out AI responsibilities and governance frameworks, especially as regulations evolve. Huang discusses the AI Corporate Responsibility Working Group's white papers, which explore governance, risk management and compliance (GRC) for AI. He also emphasizes the importance of human oversight in AI management, even as AI agents play a growing role in security and compliance workflows.
The conversation also touches on the global regulatory landscape, noting that the European Union has already enacted strict AI regulations, while the U.S. is still navigating its approach. Huang points out that governance efforts may shift with the political winds, such as a potential rollback of executive orders if Trump were re-elected. He stresses the need for organizations to designate a chief AI officer to manage AI innovation, security and regulatory compliance. As AI continues to advance, he warns that traditional security programs must evolve to address new vulnerabilities and attack vectors in AI systems. Ultimately, he advises companies to proactively integrate AI governance into their existing workflows rather than treating it as a separate initiative, keeping responsibility and security at the forefront.