
OpenAI may be downplaying what it characterizes as a minor breach, but fallout from an April 2023 intrusion that was not disclosed until a news report this month has not only cast doubt on the company’s cybersecurity defenses but also raised alarms about how well AI companies in general are protected.

The breach underscores that OpenAI, and the AI industry in general, sits on a treasure trove of data that makes it a valuable target for espionage campaigns backed by nation-state actors or corporate competitors, according to a dozen experts contacted by Techstrong.

This raises the question: Is the recently disclosed breach at OpenAI a harbinger of security issues to come at other AI startups, or an unintended byproduct of AI-hyped CEOs demanding rapid change? In the rush to jump on the bandwagon, serious hiccups like shoddy security and escalating costs may simply be part of a Faustian corporate bargain.

“If one attacker can get in, what about China or Russia or Iran or North Korea?” Eric O’Neill, a former FBI counterterrorism and counterintelligence operative, told the New York Times.

The security scare comes amid a frantic push by chief executives for their companies to adopt and install AI systems as quickly as possible and worry about the consequences later, be they overtaxed engineers, poorly trained employees, escalating infrastructure costs or gaps and flaws in security.

“The biggest priority is speed over everything else,” Fortanix CEO Anand Kashyap said in an interview. “The incident, as well as [the breaches at] Hugging Face and AT&T, has tarnished trust in AI. And I am concerned the problems run deeper with the OpenAI breach in particular.”

Any level of distrust in OpenAI can have a deleterious effect on an AI industry pegged to contribute $15.7 trillion to global GDP by 2030, according to a PricewaterhouseCoopers study.

The first body blow casts doubt on the ability of AI systems to make “truthful decisions,” said Jack Berkowitz, chief data officer of Securiti AI, a cybersecurity and data management company. Berkowitz previously held the same title at ADP. “Trust is broken, which could lead to some form of regulation and consequences with customers,” he said. “If you show a history of not being transparent, you face a heck of a lot of attention from regulators.”

“OpenAI did not talk about this for a year because they chose not to,” Berkowitz said. “They are not obligated to report, but boy you should.”

“Ignorance is no longer an excuse in cybersecurity breaches,” Steve Wilson, chief product officer at AI security company Exabeam, said in an interview. “What we find, though, is that people who have worked in AI for a long time have had little exposure to security. They didn’t worry so much about security. Now it is front and center, with OpenAI as the fastest-growing SaaS company.”

The unsettling situation is being magnified by a complaint filed with the Securities and Exchange Commission by whistleblowers at OpenAI. They claim the company blocked staff from warning regulators about the risks its technology poses. And they allege OpenAI instituted strict employment, severance and nondisclosure agreements to keep things quiet, according to a seven-page letter obtained by The Washington Post.

In their letter, the whistleblowers said that “given the risks associated with the advancement of AI, there is an urgent need to ensure that employees working on this technology understand that they can raise complaints or address concerns to federal regulatory or law enforcement authorities.”

What Did OpenAI Know, And When?

What remains unanswered, security experts agree, is the extent of the damage to, and infiltration of, OpenAI’s infrastructure.

Though the company insists the incident poses no threat to national security, former and current employees dispute that characterization. Leopold Aschenbrenner, who once worked at OpenAI as a researcher, claims the company’s security is not sufficient to prevent the theft of secrets by state actors. And company CEO Sam Altman has been accused of putting commercial interests ahead of public safety.

William Saunders, an OpenAI research engineer who quit in February, said in a podcast interview he had noticed a pattern of “rushed and not very solid” safety work “in service of meeting the shipping date” for a new product.

On Thursday, OpenAI unveiled a cheaper version of its top AI model, GPT-4o mini. “We expect GPT-4o mini will significantly expand the range of applications built with AI by making intelligence much more affordable,” the company said.

OpenAI’s slow handling of the intrusion “shows a certain amount of internal disagreements over safety protocols and self-governance,” said Vaibhav Malik, global partner solution architect at Cloudflare Inc. and an expert in AI security.

“A lot of it comes down to testing features and making updates as versions come out,” Malik said in an interview. It is incumbent on OpenAI to be more transparent about how it treats data, and to articulate its practices for encryption, secure coding, regular security assessments and supply-chain processes, he added.

“OpenAI shouldn’t get a pass for not disclosing their breach in a timely manner,” said Mitch Ashley, chief technology advisor with The Futurum Group and chief technology officer at Techstrong Group. “For us to have transparency into AI, companies like OpenAI must also be transparent about their security posture and disclosures.”

“Young companies like OpenAI often do things fast and break stuff,” Rich Wilding, partner at Co-Created, said in an interview. “And it isn’t just them.”

Of late, plenty of systems, AI and otherwise, have been broken into. “The situation with OpenAI is not unique to AI,” Wilding said, rattling off a recent raft of intrusions at AT&T Inc., AutoZone Inc., Snowflake Inc. and others. Indeed, the Identity Theft Resource Center reported a 490% year-over-year spike in data breach victims in the first half of 2024, about 1.1 billion in all.

“Shadow IT has been around a lot, which worries companies,” Wilding said. “It’s a new iteration of the same issue, one that applied to banks.” He and others also warned that the accumulation of old or unnecessary data in a centralized repository, or “data lake,” can put businesses at greater risk of security breaches.

Adds Steve Benton, vice president of threat research at Anomali: “Are there other breaches going undisclosed? Well, let’s not be naïve here. Almost certainly there are. For any organization protecting brand and reputation, they will live within the bounds of what they are obliged to disclose. That’s just commercial reality.”

As a private company, OpenAI does not have the same breach reporting obligations as public companies. But this could eventually change.

In October, the Federal Trade Commission approved an amendment to its Safeguards Rule that would require non-banking financial institutions to report certain data breaches and other security events to the agency.

“Trust will continue to be an issue, but people understand this stuff is complicated and risky,” Christian Wentz, founder and CEO of Gradient Technologies, said in an interview.
