Dope.security has extended the reach of the Web gateway it provides to prevent end users from accessing the consumer version of the ChatGPT generative artificial intelligence (AI) service when connected to a corporate network.

The primary goal is to ensure that ChatGPT interactions passing through the Secure Web Gateway developed by dope.security are strictly limited to the enterprise edition of the platform that an organization has licensed.

That capability also enables organizations to ensure that whatever policy controls they may have applied to the enterprise edition of ChatGPT are actually being enforced, says dope.security CEO Kunal Agarwal. “It helps ensure controls to limit data exfiltration are being followed,” he adds.

IT teams can, for example, prevent access to other ChatGPT services altogether or automatically generate an alert advising the end user not to upload sensitive data, per corporate policy.
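To make that concrete, the sketch below shows, in simplified Python, how a TLS-inspecting Web gateway might distinguish enterprise ChatGPT sessions from consumer ones and apply an allow, alert or block verdict. The header name, workspace identifier and decision logic are illustrative assumptions for the sake of a runnable example, not dope.security's actual implementation.

```python
# Hypothetical policy check illustrating how a TLS-inspecting Web gateway
# might confine ChatGPT traffic to a licensed enterprise workspace.
# The header name and workspace ID below are assumptions, not
# dope.security's actual mechanism.

from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    ALERT = "alert"   # warn the user per corporate policy
    BLOCK = "block"


LICENSED_WORKSPACES = {"ws-acme-enterprise"}          # hypothetical workspace ID
CHATGPT_HOSTS = {"chat.openai.com", "chatgpt.com"}    # ChatGPT front-end hosts


@dataclass
class Request:
    host: str
    headers: dict


def inspect(request: Request) -> Verdict:
    """Decide whether an intercepted HTTPS request may proceed."""
    if request.host not in CHATGPT_HOSTS:
        return Verdict.ALLOW  # not ChatGPT traffic; other policies apply

    # Hypothetical header identifying which ChatGPT workspace a session
    # belongs to once the gateway has inspected the TLS stream.
    workspace = request.headers.get("X-ChatGPT-Workspace-Id")

    if workspace in LICENSED_WORKSPACES:
        return Verdict.ALLOW  # licensed enterprise edition
    if workspace is None:
        return Verdict.BLOCK  # consumer session carries no workspace identity
    return Verdict.ALERT      # unrecognized workspace: warn the user


if __name__ == "__main__":
    print(inspect(Request("chatgpt.com", {})))  # Verdict.BLOCK
    print(inspect(Request("chatgpt.com",
                          {"X-ChatGPT-Workspace-Id": "ws-acme-enterprise"})))  # Verdict.ALLOW
```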

It’s not clear how many organizations have licensed the enterprise edition of ChatGPT, but there are growing concerns over how the data being collected might be used to train future iterations of an AI model. It is possible to prevent that by explicitly opting out of allowing ChatGPT to use data for training purposes, but most end users will not easily find which box to check to ensure they have opted out. By contrast, the settings and associated policies an organization wants to enforce can be applied more consistently via the enterprise edition of ChatGPT.

ChatGPT as a cloud service is much like any other that IT organizations can restrict access to using a Web gateway. Previously, dope.security added a cloud access security broker (CASB) that leverages a large language model (LLM) to identify sensitive data before it is shared externally. This latest update extends that data loss prevention (DLP) capability to the ChatGPT service, notes Agarwal.
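As a rough illustration of the DLP pattern involved, the following Python sketch screens an outbound prompt for sensitive data before it reaches ChatGPT. dope.security reportedly uses an LLM for this classification; the regular expressions here are a deliberately crude stand-in so the example runs on its own, without any model or API key.

```python
# Simplified sketch of gateway-side DLP screening of outbound prompts.
# The regex patterns are illustrative stand-ins for the LLM-based
# classification a production CASB would perform.

import re

# Illustrative patterns only; real DLP engines use far richer detection.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
}


def find_sensitive(text: str) -> list[str]:
    """Return the categories of sensitive data detected in an outbound prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]


def screen_prompt(text: str) -> str:
    """Block or pass an outbound ChatGPT prompt based on DLP findings."""
    hits = find_sensitive(text)
    if hits:
        return f"BLOCKED: prompt appears to contain {', '.join(hits)}"
    return "ALLOWED"


if __name__ == "__main__":
    print(screen_prompt("Summarize this memo for me."))                        # ALLOWED
    print(screen_prompt("Customer SSN is 123-45-6789, please draft a letter."))  # BLOCKED
```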

Hopefully, as it becomes easier to identify instances where sensitive data is being exposed to an AI model, the number of potential compliance issues that could arise will decline. Of course, end users can still find ways around a Web gateway by, for example, using a mobile device at home, but the number of instances where data might be shared should be reduced, assuming governance policies are applied and enforced.

Ultimately, organizations might have little to no control over which ChatGPT and other similar AI services are invoked. At the very least, however, there should be some effort to remind end users why it is important to exercise extreme caution with AI services that, once exposed to sensitive data, are never going to forget it. Months, even years later, that data might surface again, in response to any one of thousands of prompts applied to related data sets, in a way that almost anyone can see and definitely share.

Exactly how and when that AI model was exposed to that data may be difficult to ascertain, but the one thing that is certain is that there will be plenty of blame to go around when that data surfaces at what is usually the most inopportune moment possible for all concerned.
