AI risk

At the end of 2022, not many people had heard of OpenAI's ChatGPT platform. By mid-2023 it had become a widely discussed technology, one that many people have played around with for fun or used in earnest to write college essays or appeal parking tickets. 

British newspaper The Guardian reported that 100 million people around the world were using ChatGPT by February 2023, just two months after it was launched. And according to the Pew Research Center, as many as six out of ten people in the US are familiar with the chatbot, although only 18% of US adults had heard a lot about it and just 14% had tried it. 

Organizations in every industry are considering how they can gain efficiencies and competitive advantages, or save money, by using the new wave of generative artificial intelligence (AI) tools, such as OpenAI's large language model. But how many are seriously assessing the potential risks to their data security?

With its vast capacity to learn from large amounts of data, generative AI is a tool that organizations should adopt, but in a safe and controlled way. Businesses must be wary of ChatGPT's potential to leak sensitive information, which could result in data loss and reputational harm. 

Where to Start When Assessing Risk

Before asking security teams to monitor for sensitive data being unintentionally sent to these platforms, it's important to understand the differences between GPT, ChatGPT, Bard, and the various Microsoft and OpenAI offerings, as well as any other tools in use. 

While many organizations invest in cybersecurity tools to protect their assets from ever-evolving threats, OpenAI's sophisticated tool has opened the door to another dimension of risk, at a time when the economic and geopolitical climate means everyone could do without even more exposure.

Of course, the threat of data being leaked existed before generative AI was developed, and the threat remains outside of such tools. Reasons for data leaks include: 

  1. Insufficient data privacy measures built in

  2. Overfitting of data and the unintentional memorizing of data

  3. Lack of understanding of this new technology 

  4. Intentional misuse of tools

  5. Third-party plugins that copy data into other repositories outside of the initial service.

Amid the many other cybersecurity risks organizations face today, it is vital to safeguard reputations and sensitive information from potential data leaks arising from the use of AI platforms like ChatGPT.

Fortunately, there is a solution for businesses, and it starts with taking these three steps: 

  1. Conduct a thorough assessment of data security in relation to the ChatGPT platform

  2. Identify sensitive data that should be protected and any potential risks to data security

  3. Determine appropriate security solutions, including the creation of an internal policy for the use of generative AI services.
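The internal policy produced by step 3 is easier to enforce if it is captured in machine-readable form that tooling can check. The sketch below is one illustrative way to do that; every field name, service name, and data category here is an assumption for the example, not a standard schema.

```python
# A hypothetical machine-readable internal policy for generative AI use.
# All names and values below are illustrative assumptions, not a real schema.
GENAI_POLICY = {
    "approved_services": ["internal-llm-gateway"],  # assumed internal proxy
    "blocked_services": ["chat.openai.com", "bard.google.com"],
    "prohibited_data": ["PII", "IP", "customer_records", "credentials"],
}

def is_submission_allowed(service: str, data_categories: list[str]) -> bool:
    """Check a proposed upload against the policy sketch above."""
    if service not in GENAI_POLICY["approved_services"]:
        return False
    # Block the upload if any category of the data is prohibited.
    return not any(cat in GENAI_POLICY["prohibited_data"] for cat in data_categories)
```

A policy expressed this way can be queried by proxies, browser extensions, or DLP tooling, rather than living only in a PDF that employees may never read.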

The main objective should be to mitigate the risk of your team members uploading confidential information to the ChatGPT platform. This includes their personally identifiable information (PII) and your intellectual property (IP). 

This might seem like a daunting, time-consuming and labor-intensive task, perhaps even nearly impossible for an organization with tens of thousands of employees. But it is achievable. 

How to Conduct a Business Impact Assessment

By conducting a thorough business impact assessment, you can identify potential risks and develop effective solutions to address them. An assessment that’s worth its salt would certainly:

  1. Scope the data to be protected, including user information, intellectual property, sensitive information, and any other data that could harm the organization's reputation if it were compromised

  2. Identify risks through insights into your security appliances and governance standards

  3. Recommend solutions to mitigate the identified risks, such as implementing data loss prevention (DLP) tools, customizing web filter indicators, developing incident response plans, and conducting regular security assessments

  4. Create an internal policy for using generative AI platforms across the whole business

  5. Raise awareness of similar generative AI tools and the differences between them.
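The DLP recommendation in step 3 can be sketched as a minimal pre-submission filter that scans outgoing prompts before they leave the network. The regex patterns and names below are illustrative assumptions; a production ruleset would be far broader and tuned to your own data.

```python
import re

# Illustrative PII patterns only; real DLP rulesets are much more extensive.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of PII patterns found in an outgoing prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def redact_prompt(text: str) -> str:
    """Replace matched PII with placeholder tokens before submission."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text
```

A filter like this can sit in a forward proxy or browser extension, either blocking a flagged prompt outright or redacting it and logging the incident for review.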

Lastly, it's imperative to cultivate a robust understanding of generative AI, which includes understanding the distinctions between publicly accessible generative AI and private deployments of these technologies for restricted use. Additionally, it's valuable to explore how solutions from companies at the forefront of AI technology, such as Microsoft, can alleviate these risks.

For any organization, data security is of the utmost importance. Don't sit idle until a data breach occurs. Take immediate action to shield your organization's confidential information and reputation from potential data leaks that may arise from the use of generative AI platforms like ChatGPT.