Apple is floating multi-year contracts worth up to $50 million with major news and publishing organizations, seeking access to their vast archives of articles, photos, videos and metadata, according to a report in The New York Times. The move appears to signal that the world’s second-largest smartphone manufacturer intends to develop its own AI tool to compete with the likes of OpenAI’s ChatGPT and Google’s Bard.

Apple has opened negotiations with Condé Nast, a company it frequently advertises with in publications such as Vogue and The New Yorker, as well as with NBC News and IAC, an American holding company whose brands operate in approximately 100 countries. Some of IAC’s major brands include People, The Daily Beast and Better Homes and Gardens.

The status of those negotiations is not known, as both sides have declined to comment.

The alleged practice of scraping information from the open web to train LLMs has drawn widespread indignation from content creators, including artists, writers, photographers, videographers, and news and publishing organizations, and has sparked lawsuits against some LLM developers.

Pam Samuelson, the Richard M. Sherman Distinguished Professor of Law at the University of California, Berkeley, and a preeminent expert on intellectual property, called the offer by Apple a positive step “to license the use of copyrighted work as training data.”

“This is necessary for developers to get access to data that would otherwise be behind a paywall,” Professor Samuelson said.

“The fact that such licenses are being granted does not mean that every GenAI must get licenses for every item of data on which it trains. The litigations against GenAI developers are mostly about data that can be found on the open web.”

Whether compensated access to vast stores of information will be the wave of the future remains to be seen, but some experts say it is likely.

“I suspect we will see more of these agreements as companies aim to improve the training data for their AI systems,” said Lindsay Grace, Knight Chair in Interactive Media and associate professor of Interactive Media at the University of Miami School of Communication, who researches the intersection of technology and society, including AI.

“The competition will not only be about who gets there first, but who can amass the highest quality systems, a function that depends greatly on high quality training data,” Mr. Grace said.

This latest development follows several other recent moves by Apple to raise its standing in the AI realm. The Cupertino, Calif.-based company announced that it is developing technology that could give its iPhones the capacity to run generative AI LLMs, and that it is working on techniques to produce high-quality 3D avatars for use in virtual reality and for tasks such as trying on virtual clothing before making online purchases. That the company has apparently ironed out the iPhone’s memory limitations to accommodate LLMs is a strong indication that it plans to move forward with developing its own AI tool.

In a recent research paper on the memory issue, titled “LLM in a flash,” Apple identified both the problem and its proposed solution. “Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their intensive computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters on flash memory but bringing them on demand to DRAM.”
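The core idea the paper describes, keeping the full set of model parameters on flash storage and copying only the slice needed for the current computation into DRAM, can be illustrated with a minimal sketch. This is not Apple’s implementation; the layer names, shapes, and file layout below are invented for illustration, using a memory-mapped file to stand in for flash storage:

```python
import os
import tempfile
import numpy as np

# Hypothetical layer shapes for a toy model; real LLMs have billions of parameters.
SHAPES = {"layer0": (256, 256), "layer1": (256, 256)}

def write_weights(path):
    # Persist all parameters to "flash" (here, disk) as one flat float32 file.
    with open(path, "wb") as f:
        for shape in SHAPES.values():
            f.write(np.zeros(shape, dtype=np.float32).tobytes())

def load_layer(path, name):
    # Memory-map the file and copy only the requested layer into DRAM,
    # rather than reading the whole parameter file at once.
    offset = 0
    for key, shape in SHAPES.items():
        nbytes = int(np.prod(shape)) * 4  # float32 = 4 bytes
        if key == name:
            mm = np.memmap(path, dtype=np.float32, mode="r",
                           offset=offset, shape=shape)
            return np.array(mm)  # materialize just this slice in RAM
        offset += nbytes
    raise KeyError(name)

path = os.path.join(tempfile.gettempdir(), "weights.bin")
write_weights(path)
w = load_layer(path, "layer1")
print(w.shape)  # (256, 256)
```

The design point mirrors the paper’s framing: DRAM holds only the working set, while the much larger flash store holds everything, trading some load latency for the ability to run models that exceed available RAM.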

In the conclusion, researchers stated, “Our work not only provides a solution to a current computational bottleneck, but also sets a precedent for future research . . . We believe as LLMs continue to grow in size and complexity, approaches like this work will be essential for harnessing their potential in a wide range of devices and applications.”