AI and LLMs in Government

The Federal Trade Commission is staking out its ground within a federal government that is trying to grab the reins of a rapidly expanding AI industry reaching deeply into most parts of business and society.

In an eight-page comment submitted to the U.S. Copyright Office, FTC officials said the agency has been investigating the risks associated with AI use in everything from consumer privacy, discrimination and bias to scams like imposter schemes and the “turbocharging of deceptive practices.”

The FTC said it also has a role in copyright issues, an area that has generated friction with creators such as writers and illustrators since OpenAI’s release of ChatGPT. A growing list of them are suing OpenAI and other AI vendors, claiming that generative AI chatbots and other systems trained on vast amounts of publicly available data – including books and music – infringe on their copyrights.

However, the copyright questions extend beyond infringement and liability involving creators; they also touch on businesses and consumers.

“The use of AI technology raises significant competition and consumer protection issues beyond questions about the scope of rights and the extent of liability under the copyright laws,” the FTC wrote. “As the courts apply the doctrine of fair use to the training and use of AI, the evolution of the doctrine could influence the competitive dynamics of the markets for AI tools and for products with which the outputs of those tools may compete.”

Government Turns Its Eye to AI

The agency’s comment comes as the Copyright Office is fielding outside input as part of an ongoing study of the country’s copyright laws and the regulations and policies that may be needed to address issues raised by the accelerated innovation around AI. At the same time, the FTC and Copyright Office join myriad other federal agencies that are trying to find their place within a government that is putting a focus on AI.

That concern was illustrated late last month when the Biden Administration issued an executive order calling for government guidelines for AI around everything from development and security to protections for workers and consumers.

“As AI increasingly becomes a larger part of daily life, the technology could transform the way we live, work and interact,” the FTC wrote in the comment, which was sent to the Copyright Office October 30 and made public this week. “The manner in which companies are developing and releasing generative AI tools and other AI products, however, raises concerns about potential harm to consumers, workers and small businesses.”

AI the Next Tech Challenge

Agency officials noted that part of the FTC’s historical role has been to promote competition and protect consumers, and that over the years it has had to adapt to the rise of new technologies that often pose challenges to consumers, workers and businesses. AI is only the latest technology challenge.

The FTC noted its investigations of Amazon and its Ring subsidiary for using private data – collected by Amazon’s Alexa voice assistant and Ring’s internet-connected home security cameras – to train their AI algorithms. Amazon eventually agreed to pay $31 million in penalties for the privacy violations.

AI also poses a risk to competition, with its growing importance in the global economy tipping the scales in favor of large, established tech firms that control many of the levers for AI, from cloud infrastructure and compute power to access to large stores of training data. Policing that competition is another part of the FTC’s remit.

“These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets,” the agency wrote. “In addition, AI tools can be used to facilitate collusive behavior that unfairly inflates prices, precisely target price discrimination or otherwise manipulate outputs.”

Many Issues in Copyright Law

In the area of copyright, there are a number of issues to consider. The Copyright Office is trying to establish policies distinguishing human-made creations from AI-generated content. It has adopted a policy stating that content not created by humans is not eligible for copyright registration, a decision that led to a lawsuit by the owner of an AI system who claimed the machine created a piece of art on its own.

A federal district court ruled in favor of the Copyright Office in August, but the plaintiff, Stephen Thaler, reportedly is appealing.

However, there are other copyright-linked issues that the FTC should address, officials wrote. A key one is liability.

“How should liability principles apply to harm caused by AI tools trained on creative work that are used to generate new content?” they asked. “How should liability be apportioned among users, developers of AI tools, and the developers of the training dataset? How should this analysis take into account that training data is often scraped from sources hosting pirated content?”

These and other questions involve consumer protection and competition; in some instances, the use of pirated content or the misuse of copyrighted materials could be an unfair practice or an unfair method of competition. The doctrine of fair use could also evolve under the weight of AI use and training.

“Conduct that may violate the copyright laws – such as training an AI tool on protected expression without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness – may also constitute an unfair method of competition or an unfair or deceptive practice,” the FTC wrote.

Competition and Privacy Concerns

In addition, a large company may have the money to indemnify users of its generative AI tools or to secure exclusive rights to copyrighted training data, moves that touch not only on copyright policy but also on consumer protection and competition concerns.

Earlier in October, the FTC – including Chair Lina Khan – hosted a roundtable discussion with musicians, authors, actors, software developers and other creative professionals about the pros and cons of AI tools in content creation, including the use of content for training without consent, transparency around training data, and the potential of the technology – with sufficient guardrails – to help them in their work and open up opportunities.