AI agents, AGI, AI, GenAI, security, AI brain

Generative AI has massive potential to revolutionize many areas of business. But building applications on top of generative AI poses significant risks. Based on my conversations with leaders at several enterprises assessing this technology, here are the key, as-yet-unresolved risks that enterprises should consider as they look to leverage generative AI.

Why Is Generative AI So Risky?

To set the stage, let’s look at what makes generative AI so distinct from existing machine learning technology, including current natural language processing (NLP) techniques that have seen widespread success.

The Outputs Are Open-ended

Fundamentally, generative AI models (e.g., GPT, Stable Diffusion) generate text or image data from scratch. So to a significant extent, outputs of generative AI models are open-ended and not easily controlled. By contrast, other frequently used discriminative ML models (e.g., as in classification or regression models) produce outputs that fall within a well-defined set of values (e.g., Yes/No, number between 0-1). Said another way, while your sentiment analysis model produces a positive or negative sentiment based on some input text (i.e., produces a well-defined set of outputs), GPT can produce many different types of the input text itself.

The Training Inputs Are Unknown

AWS

Unlike most models out there, the current generation of generative AI models has been trained on internet-scale data; for example, GPT-3 was trained on 500 billion tokens. Recent incarnations of generative AI models (e.g., GPT-4, Stable Diffusion, Codex) have not disclosed the exact training data used, raising privacy issues and copyright questions around whether creators of content have consented to contribute to training data. These limitations manifest in outputs generated by these models as we discuss below. 

The Model Is Ultra-Powerful, Opaque, and Prone to Errors

Few models in the history of AI have been as powerful as the current generation of generative AI models, and these models are beginning to show emergent properties (i.e., properties these models were not trained to replicate, but are able to do so). As a result, models meant for one purpose can be circumvented and utilized for another purpose. Moreover, the capacity of a model with 175 billion parameters means that explaining results from the model and introspecting its inner workings is near impossible. At the same time, we are in the early innings of the technology, and so current generative AI models have some clear limitations, such as hallucinations that introduce new types of risk.

Given these unique attributes of generative AI models, these models pose risks not always seen in DS and ML, as we’ll now explore.

The Risks 

  1. No guarantees on output quality

The open-ended and stochastic nature of generative AI outputs means that guaranteeing the quality of results of these models is extremely challenging. While one output from a marketing GPT model may meet your brand guidelines, another may not. While one generated ad may be appropriate in one culture, it may be offensive in another. Although a human may be able to easily pick up on the distinction, the model, unaware of cultural nuances, may let it through.

At the technical level, given the open-ended nature of the results from generative models, validating their output is notoriously difficult, and existing ML metrics do not translate well in this setting. As a result, human review continues to be critical in assessing quality of the output, reducing the speed and volume with which validation can be performed.

  1. Hallucinations and factual errors

While generative AI models are rapidly improving, the models today have a few significant limitations. The first one is that of “hallucinations,” where the model makes up information. Examples of generative AI hallucinations abound, from the relatively benign (suggesting George Washington invented the cotton gin) to the potentially defamatory (reporting fake criminal allegations). For enterprise applications, this propensity to hallucinate means that, without significant guardrails, these tools cannot be used in cases where providing factual information is important (e.g., search).

  1. Legal risks: Copyright, Privacy, and Consent

Generative AI also raises potentially serious legal and regulatory risks. Generative AI tools have already produced results that incorporate copyrighted material; for example, both individual artists and image-licensing giants have filed suit against generative AI-based image-creation toolmakers over alleged copyright violations stemming from the set of images used to train the models. Similarly, potential copyright infringement due to the use of code recommended by GitHub Copilot has kept many companies from trying out or using this technology.

Taking this a step further, currently the terms of use of generative AI-based applications do not make clear what (if any) user interaction data will be used to improve/retrain the model. For example, if GitHub Copilot trains on your corporate source code and adds it to the Codex model, that would be extremely concerning. (Samsung experienced this kind of incident with ChatGPT.)

The lack of transparency into the training data for generative AI models could also have regulatory repercussions. In particular, Italy recently banned OpenAI from processing personal data of individuals in Italy for ChatGPT over concerns about consent, privacy, output accuracy and age verification. It remains to be seen how the ban plays out and whether it will spread to other EU markets or elsewhere.

  1. Biased Output

Another side effect of the specific composition of training data is that (as with other ML models) generative AI models learn biases that are ingrained in the data. For example, ask Stable Diffusion to produce images of a “confident CEO” and you most likely will get images only of males. Although fairness and bias concerns also exist with traditional ML models, the generative nature of these models and their direct use in customer-facing scenarios means that there may be greater risk exposure than when a model’s Y/N outputs are interpreted by business logic.

  1. Jailbreaking

The double-edged sword of generative AI is its sheer power. While GPT was trained to predict the next word, reasoning was an “emergent capability.” As we continue to improve generative AI models, users may find ways to circumvent a given model’s intended use and employ it for wholly different purposes (i.e., jailbreak it). For example, if your product is a mental health chatbot built on ChatGPT, malicious users potentially could jailbreak the chatbot and make it respond with inappropriate comments or expose proprietary information. Almost all chat interfaces built on top of GPT have been jailbroken shortly after release.

  1. Competitive Risk

One additional practical business risk for building applications on generative AI is the lack of expertise (and compute) available outside of a handful of high-tech companies to build robust applications on the technology. While compute power can be obtained from cloud providers or via APIs from OpenAI or Azure, expertise in this space is highly concentrated at this time.

Does all this mean that business leaders should shelve their generative AI projects? Absolutely not. But they should engage with generative AI with a full awareness of the technology’s risks and plan accordingly.

TECHSTRONG TV

Click full-screen to enable volume control
Watch latest episodes and shows

AI Field Day

TECHSTRONG AI PODCAST

SHARE THIS STORY