No one is immune.

When hackers gained access to OpenAI’s internal messaging system last year and filched details about the design of the company’s AI technology, including ChatGPT, it was a stark reminder that AI companies are tantalizing treasure troves of data for cyber-intruders.

The episode underscores the risks businesses increasingly face as they go all-in on AI and make it easier than ever to sift and peruse mountains of data instantaneously, cybersecurity experts caution. As more enterprises integrate ChatGPT into their operations, adversaries such as China may be tempted to steal AI technology in ways that could ultimately compromise national security.

“This is only the tip of the iceberg,” Tracy Ragan, co-founder and chief executive of DeployHub, said in an email message. “Companies must start getting serious about securing all data, including training data for LLMs. Modifying this data in critical AI applications is an easy way to cause major disruption across industries.

“Investment in cyber security should be put before investment in AI,” Ragan added. “Getting our ducks in a row will be critical for consumers to trust AI overall. And the SCOTUS Chevron ruling may just take the government out of regulating AI companies to do what is right.”
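Ragan’s warning about tampered training data points to a basic safeguard: treat training sets like any other critical artifact and verify their integrity before every run. Below is a minimal sketch in Python — the file paths and function names are illustrative, not drawn from any particular company’s pipeline — that records trusted SHA-256 hashes of training files and flags any file that has been silently modified.

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str, manifest_path: str) -> None:
    """Record a trusted hash for every file in the training data directory."""
    manifest = {
        str(p): hash_file(p)
        for p in sorted(Path(data_dir).rglob("*"))
        if p.is_file()
    }
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def find_tampered_files(manifest_path: str) -> list[str]:
    """Return files whose contents no longer match their trusted hashes."""
    manifest = json.loads(Path(manifest_path).read_text())
    return [
        path for path, expected in manifest.items()
        if not Path(path).exists() or hash_file(Path(path)) != expected
    ]

# Example: call build_manifest("training_data/", "manifest.json") after data
# curation, then confirm find_tampered_files("manifest.json") is empty before
# each training run.
```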

AI companies have become gatekeepers of data and, consequently, prime targets of hackers. Indeed, more than 100,000 compromised ChatGPT account credentials found their way onto dark web marketplaces between June 2022 and May 2023, according to a Group-IB report shared with The Hacker News. India led with 12,632 stolen credentials.

“The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023,” Group-IB said. “The Asia-Pacific region has experienced the highest concentration of ChatGPT credentials being offered for sale over the past year.”

On Tuesday, Justice Department officials said they seized two internet domains and searched almost 1,000 social media accounts that Russian operatives allegedly used to spread disinformation across the country and abroad. The Justice Department “will not tolerate Russian government actors and their agents deploying AI to sow disinformation and fuel division among Americans,” Deputy Attorney General Lisa Monaco said in a statement.

If there is a silver lining in the OpenAI incident, it is that hackers only gained access to an internal OpenAI employee discussion board; a far worse breach would have allowed hackers entry to internal systems or models in progress.

OpenAI executives disclosed the breach to employees in an all-hands meeting in April 2023, but opted not to make it known publicly because no information about customers or partners was stolen, according to two people who spoke to the New York Times last week.

Still, former OpenAI technical program manager Leopold Aschenbrenner deemed the break-in a “major security incident” during a recent podcast. While employed at OpenAI, Aschenbrenner sent a memo to the company’s board of directors warning that OpenAI was not doing enough to prevent the Chinese government and other adversarial nations from stealing its secrets. OpenAI’s trove of user data contains billions of conversations with ChatGPT on hundreds of thousands of topics that could include industrial secrets and sensitive personnel data.

“This is the question, really: Does OpenAI really know the extent of the attack? This happened more than a year ago. We know this: We’re going to continue to see substantially more attacks on large AI companies like OpenAI,” Joe Nicastro, field chief technology officer at Legit Security, said in a video interview. “There is an astronomical amount of personal information, company secrets, proprietary code.”

“Attackers see OpenAI as a treasure trove of data for other companies,” he said, pointing to SolarWinds, a Texas-based technology company whose software was breached in a massive 2020 Russian cyberespionage campaign, exposing information from the Justice and Homeland Security departments, and more than 100 private companies and think tanks.

Microsoft Corp. President Brad Smith has called SolarWinds “the largest and most sophisticated attack the world has ever seen.”

To minimize risks, cybersecurity experts recommend users secure their accounts with two-factor authentication to prevent account takeover attacks.

That may seem like obvious advice, but it bears repeating as more companies, from small and mid-sized firms to behemoths, rush to embrace AI. Too often, cybersecurity gets short shrift in that rush, leaving those companies vulnerable to attack.
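For teams putting that advice into practice, server-side verification of time-based one-time passwords is straightforward. The sketch below uses Python and the pyotp library, a common choice for one-time passwords; the account name and issuer shown are placeholders, not any vendor’s actual configuration.

```python
import pyotp

# Enrollment: generate a per-user secret and a provisioning URI the user
# scans into an authenticator app (Google Authenticator, Authy, etc.).
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(
    name="alice@example.com",   # placeholder account name
    issuer_name="ExampleAI",    # placeholder service name
)
print("Add this to your authenticator app:", uri)

# Login: verify the six-digit code the user submits alongside their password.
def verify_login_code(user_secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one time step of clock drift between devices.
    return pyotp.TOTP(user_secret).verify(submitted_code, valid_window=1)
```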

“The cybersecurity landscape is very challenging, but there is hope,” Krista Macomber, research director at The Futurum Group, said in a report released in May. “Organizations must adapt and invest with attackers leveraging AI and with the attack surface expanding. While AI is a hot topic, focus on practical solutions for proactive prevention, faster response, and better risk management.”
