DeepSeek accidentally exposed personal data and chat histories to anyone on the internet, cybersecurity firm Wiz Research revealed Wednesday.

Wiz said scans of DeepSeek’s infrastructure showed the Chinese artificial intelligence (AI) company inadvertently left more than 1 million records unsecured on a publicly accessible database.

DeepSeek had left a “completely open” database that exposed user chat histories, system logs, user prompt submissions, API authentication keys, and other sensitive information, Wiz researchers found. “More critically, the exposure allowed for full database control and potential privilege escalation within the DeepSeek environment without any authentication or defense mechanism to the outside world,” Wiz said.
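To illustrate what “without any authentication” means in practice, a scanner probing a database’s HTTP interface can classify exposure from the response alone. This is a hypothetical sketch: the status codes and table names are placeholders, not details from the Wiz report.

```python
# Hypothetical sketch: classifying whether a database HTTP endpoint is
# publicly readable, based on its response to an unauthenticated query.
# No real network traffic is involved; responses are simulated below.

def classify_exposure(status_code: int, body: str) -> str:
    """Interpret a probe response sent with no credentials."""
    if status_code in (401, 403):
        # Server demanded credentials -- authentication is enforced.
        return "auth-required"
    if status_code == 200 and body.strip():
        # Query executed and returned data without any authentication.
        return "publicly-readable"
    return "inconclusive"

# Simulated responses (placeholder table names, not real data):
print(classify_exposure(401, ""))                    # auth-required
print(classify_exposure(200, "chat_logs\nlog_stream"))  # publicly-readable
```

A response in the second category is the kind of finding Wiz describes: the database answered arbitrary queries from the open internet with no credentials at all.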

DeepSeek said it “promptly secured” the leaky database after being notified by Wiz.

Wiz’s findings underscore the inherent risk of rushing to adopt AI services that lack basic security. DeepSeek knows this all too well: shortly after it became widely available in late January, it suffered a large-scale cyberattack and temporarily suspended new account registrations. It has also increasingly been the focus of reports detailing its security flaws.

“This incident underscores the growing risks tied to AI model security — especially when models lack robust safeguards. DeepSeek, trained on a larger AI model, aims for high accuracy but appears to fall short on critical security measures compared to leading U.S.-based models,” Ali Haidar, chief customer advocate at cybersecurity company Anomali, said in a message.

Security researchers at Kela Cyber claimed they were able to force DeepSeek to create dangerous malware and phishing campaigns, exposing serious security flaws; they also found they could manipulate DeepSeek into generating malicious code designed to swipe credit card data from specific browsers and relay that information to a remote server. Separately, Enkrypt AI said its research found DeepSeek’s model highly susceptible to generating insecure code, as well as vulnerable to manipulation into helping create chemical, biological, and cybersecurity weapons.

The torrent of security warnings has not only tarnished DeepSeek’s reputation but also sparked bans in the U.S., Italy, and Ireland.

“As AI technologies like DeepSeek become increasingly advanced, the risks of failing to secure sensitive data grow exponentially,” Metomic CEO Rich Vibert said. “DeepSeek’s ability to exploit vulnerabilities on a massive scale highlights the urgent need for businesses to adopt proactive data security strategies so they can detect, classify, and protect sensitive data in real time and at scale.”

More important, security experts point out, is determining whether “LLM models are built in a way that can keep attackers out. The cat is out of the bag, and these open-source LLMs, whether American or Chinese-owned, are addictive, and their adoption is accelerating,” Roy Akerman, vice president of identity security strategy at Silverfort, said in an email.

“Consumers and organizations alike want to innovate, increase their productivity, and do more with less. However, to do so responsibly, they need to assess the risks, implement governance, and build a security framework that accounts for AI’s rapid advancements,” Akerman said.
