The rapid innovation around generative AI – and the land rush-like adoption by organizations of the myriad tools being developed – is opening up a broad new attack surface that can put enterprises at risk if they can’t get control of the technology’s use within their businesses.
Prompt Security launched this week with $5 million in its pockets and a multi-faceted security platform designed to help companies ensure that data isn’t being leaked through the large language models (LLMs) that underpin most generative AI tools or “shadow IT” isn’t being created by employees using any of the myriad products on the market.
“Prompt Security protects organizations from GenAI risks on all its aspects, whether it’s employees using tools like ChatGPT or Jasper – and for this, we have a browser plugin that monitors and alerts of any potential data leaks to the IT of the organization,” co-founder and CEO Itamar Golan told Techstrong.ai. “We also protect the GenAI integrations in customer-facing products. Think, for instance, of an app that uses generative AI to offer a certain customer-facing service. Using either our SDK, API or reverse proxy, we can protect the application from external risks like prompt injection, acting like an ‘AI firewall.”
All of this is managed from a single dashboard that gives organizations broad visibility and governance over the generative AI tools being used, Golan said.
An Idea and Company are Born
Golan and co-founder and CTO Lior Drihem both worked at cybersecurity companies Check Point and Orca Security as well as Israel’s Intelligence Service. It was while working together at Orca on feature for a generative AI-powered security product that Goran and Drihem saw what security would mean in the generative-AI era.
“Working on this project made it even clearer that a completely new attack surface was emerging: Applications of any kind that would feature GPT-like capabilities would be vulnerable to a new array of attacks,” Goran wrote in a blog post.
They founded Prompt Security in August 2023. The $5 million in seed funding for the company is coming from Hetz Ventures, with participation from Four Rivers and angels like the CISOs at Airbnb, Elastic and Dolby.
The Need to Secure Generative AI
The company is launching at a time when development of generative AI tools is accelerating and organizations will have to jump on board if they want to remain competitive. The problem is that LLMs are creating security issues that differ from other vectors.
“There’s the shadow AI and the risk it poses when employees use genAI tools unbeknownst to their IT teams and potentially disclose company information to external parties,” the CEO said. “This becomes much more severe as once the information is shared with an LLM model, this information can be used as training data. Once the model is trained on the data, the information is encoded within its weights. Consequently, any downstream use of this LLM by others could potentially generate outputs based on your data.”
Another concern are malicious prompts crafted by bad actors that could expose data or make the LLM respond in inappropriate ways, leading to reputational damage or other threats, including denial-of-service attacks, remote code execution (RCE) and SQL injections.
Prompt Security aims to tackle a wide range of security. It inspects semantic data to protect against threats like prompt injection, jailbreaking and data extraction. Contextual LLM-based models can detect and redact sensitive data to ensure sensitive data is protected, while responses from generative AI tools are scrutinized for harmful or toxic content.
With such broad visibility, organizations can more easily define access policies based on applications or user groups and can detect AI tools based on usage patterns, ensuring that they can identify thousands of tools being used within their businesses.
Prompt Security’s tool can be deployed within minutes and include extensions for all major browsers and with multiple methods for securing applications, including via a SDK.
The Dangers are Real
Prompt Security pointed to two examples – a report by Google from November 2023 that showed LLMs like those used by OpenAI’s ChatGPT can be manipulated to reveal the massive amounts of data they were trained on and The New York Times’ lawsuit against OpenAI and Microsoft alleging that responses by ChatGPT to user prompts can deliver near-verbatim excerpts from new articles – of how generative AI tools can leak training data.
The AI security market is growing fast. According to analysts with market research firm Statista, the space will grow from $10.5 billion in 2020 – two years before OpenAI’s release of ChatGPT kicked off the rapid adoption of generative AI by companies and individuals – to $46.3 billion by 2027.
Golan said a key differentiator for Prompt Security in this increasingly crowded AI security market is that its product is a fully working one-stop platform that can secure both employees and applications.
“Our ease of deployment – it takes only a few minutes to set it up and start getting value from the tool – being LLM-agnostic … and our low latency, minimal false-positive rate and accurate detection makes it a robust security platform to fully address the risks of GenAI,” he said, noting that it can support such LLMs as those from OpenAI, Mistral AI, Meta (Llama), and Microsoft (Copilot).
Platform is Already in Use
The CEO said the product is deployed in about a dozen organizations cover thousands of endpoints, with most deployments either with beta partners or with companies running proof-of-concepts. His goal is to convert most of them into paying customer in the next few months.
“It was crucial to us to build a product closely with our potential buyers, meaning to build something that companies will actually see value from and would want to pay money for,” Goran said. “We’ll continue developing functionality in the platform, stay current with all the genAI risks to protect our customers, and continuously improve the user experience.”