security, MLSecOps, AI, AI security, cybersecurity

Only 5% of 1,000 cybersecurity experts surveyed have confidence in the security measures protecting their generative AI apps, even as 90% are actively using or exploring the technology.

That is one of the startling storylines, and unintended consequences, as the C-suite demands a rush to install GenAI without fully baking the reality of cybersecurity in day-to-day operations throughout an organization.

Attack methods tailored to GenAI, called prompt attacks, are increasingly being exploited to manipulate apps, gain unauthorized access, steal confidential data and take unauthorized actions, according to a GenAI Security Readiness Report survey of 1,000 individuals, 60% of whom have more than five years of cybersecurity experience. The survey, taken in mid-May, was released on Wednesday.

“The race to adopt GenAI, fueled by C-suite demands, makes security preparedness more vital now than at any pivotal moment in technology’s evolution. GenAI is a once in a lifetime disruption,” said Joe Sullivan, ex-chief security officer of Cloudflare Inc., Uber Technologies Inc. and Meta Platforms Inc., who is an adviser to Lakera.ai, the company behind the survey. “To harness its potential, though, businesses must consider its challenges and that, hands down, is the security risk. Being prepared and mitigating that risk is the #1 job at hand for those companies leading adoption.”

Friction and angst are the two overriding themes among the rank and file as they cope with pressure by executives to leverage revolutionary AI apps ASAP on top of evolutionary security technology framework.

“Security lags for reasons, as it did with the internet, cloud and now AI,” Expel Chief Executive Dave Merkel said in an interview. “But AI is moving much faster. It is accelerating attacks. And people don’t get religion [about security] until after they’re bitten. That is human nature.”

Concerns over large language models top the worry list, based on survey results.

LLM reliability and accuracy was the No. 1 barrier to adoption: 35% of respondents said they are concerned, and 34% are concerned with data privacy and security. A lack of skilled personnel was cited by 28%.

Nearly half of those polled (45%) are exploring GenAI use cases. Just 9% have no current plans to adopt LLMs.

Less than one in four (22%) have adopted ​​AI-specific threat modeling to prepare for GenAI specific threats.

“In the end, it’s about getting people to trust your system,” Lakera.ai Chief Executive David Haber said in a video interview. “This is the most complex endeavor we’re ever encountered.”

California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, crafted to prevent large AI models from being used to cause “critical harms” against humanity, applies to AI models that cost at least $100 million and require 10^26 FLOPS during training. The bill mandates a safety protocol to prevent AI misuse that includes an “emergency stop” button to shutter an AI model. A new state agency, the Frontier Model Division, would oversee the rules.

TECHSTRONG TV

Click full-screen to enable volume control
Watch latest episodes and shows

Mobility Field Day

TECHSTRONG AI PODCAST

SHARE THIS STORY