
Sen. Chuck Schumer’s sputtering plan to oversee AI has gained an unlikely savior.

Matt Calkins, chief executive of AI company Appian Corp., based just a few miles from the nation’s capital, has developed guidelines to protect AI intellectual property, as well as creators’ data and content, which are under threat because of a lack of rules.

“I think the process and the oligopolists [that government officials] were listening to (Google, Microsoft, OpenAI, Meta, Amazon) shared a common incentive in preserving the lawless status quo,” Calkins said in an interview. “We need IP protection and transparency. The game has changed, and the AI owners don’t know it yet. They have all the data, but what AI now needs is trust. AI is now about us and not grabbing data.”

It’s been more than a year since Schumer, D-N.Y., unfurled his open-ended plan to regulate AI, which landed with a thud: it lacked AI governance rules and called for $32 billion in federal funding. Meanwhile, the European Union’s Artificial Intelligence Act, which classifies AI risk via a four-tiered system, became law in May. (OpenAI CEO Sam Altman said the EU AI Act would make it prohibitively difficult for his company to compete in Europe.)

A wide-ranging White House executive order, announced by President Biden in October, does require federal agencies to establish guardrails for governmental use of AI in high-risk scenarios such as government benefits, and to establish a directive on the application of nondiscrimination requirements in health programs to AI systems, among other things. But the platform of presidential candidate Donald Trump vows to repeal the order on the grounds it is “dangerous” and “hinders AI innovation.”  

Calkins is seeking a loose framework of legal guidelines as nearly every company and tech vendor forges ahead with adopting and implementing AI technology. He is presenting his recommendations to reporters and customers, and he hopes to gain the support of other tech executives. “We are trying to straddle an important point in AI’s development,” he said.

So far, Calkins has gained the support of tech leaders like David Link, CEO of ScienceLogic, a leader in IT operations management and artificial intelligence for IT operations. “As AI rapidly advances, we have a unique opportunity and responsibility to shape its trajectory,” Link said in an email message. “By joining this initiative, I aim to contribute to establishing ethical frameworks and best practices that will guide AI’s evolution.”

In June, Calkins issued AI IP Protection Guidelines, a set of basic rules to provide a framework for the U.S. to manage the power of AI. The foundation of what he calls AI 2.0 rests on four points: 1) All AI models must disclose data sources; 2) If AI uses private data, it should require consent and compensation; 3) If AI uses personally identifiable information, it needs permission; and 4) Any use of copyrighted information by AI requires consent and compensation.

“AI 2.0 hinges on trust,” Calkins said. “This is no longer about a data grab.”

In the interim, Big Tech continues to fight lawsuits and use data and content without consent, contends Calkins, who points to allegations that OpenAI recently used actress Scarlett Johansson’s voice without her consent.

For many tech leaders, the single most important issue is coming up with a measure of trustworthiness, Ameesh Divatia, CEO and co-founder of data security startup Baffle, said in an interview. “OpenAI scraped as much public data as they could get to build models, but enterprises want to create their own little environments to run their models,” he said. “The public database of AI has run its course. The next frontier is private AI run by enterprise. What is most important is to protect data all the way down to the record level.”

Enterprise customers are loath to share with competitors the video and image data they use to train models. AI companies are always looking to harvest data from users so they can build better AI models, according to Kit Merker, CEO of Plainsight Technologies, a company that forbids the reuse of customer data in the general-purpose models it resells, except in narrow situations with clear permission.

“While AI technology continues to evolve, the future needs to be grounded in trust and transparency,” Naveen Rao, vice president of AI at Databricks, said in an email message. “Thoughtful regulation is necessary to uphold responsible AI development across the board.”

The already existing “trust divide between tech companies and the general public… should not be further amplified,” Aleksander Tsuiman, head of product legal and privacy at Veriff, said in an email message. “Guardrails should specifically avoid further erosion of trust and preserve what is left of it. In the case the government does not act, the onus is on businesses to take action themselves.”

Lacking legal guardrails, tech companies have been left to govern themselves in their rush to embrace AI, Joe Bradley, chief scientist at LivePerson, said in an interview.

“The customer’s viewpoint is they know they need to do something, but they don’t know what to do, while presenting clear value to executives about the benefits of AI,” Bradley said. “Execs are under mounting pressure to push AI and comply with the rules, as they exist in some U.S. states. This is why U.S. companies are moving the slowest in AI adoption worldwide.”

But the last thing American companies want is a duplication of Europe’s strict laws, cautions Heidi Messer, CEO of Collective[i].

“I’m not sure I would follow the European model in all its fervor. I have never seen such a preemptive emphasis on enforcement,” she said. “If the U.S. goes too far, you hurt your own industry. Do you regulate out of fear or constructive guidance?”
