An Apple Inc. news summarizer that fumbled facts has been unplugged, for the time being. The generative AI feature, which aggregated headlines from news outlets, will apparently make a comeback in a later version.

“We are working on improvements and will make them available in a future software update,” a company spokesperson said.

The news and entertainment summaries were disabled in a beta software update released to developers on January 16, 2025. The opt-in feature was part of Apple Intelligence, the company’s push to embed more AI across its portfolio of products.

Currently, Apple Intelligence is available in the U.S., U.K., Australia and Canada.

Among the errors was an alert generated by Apple Intelligence stating that Luigi Mangione, the suspect in the fatal shooting of UnitedHealthcare CEO Brian Thompson, had shot himself.

Mangione did not shoot himself and remains in federal custody after being charged with first-degree murder. Other erroneous headlines, pulled from major news outlets, claimed that high-profile U.S. and foreign officials had either been arrested or fired.

Before Apple halted its AI news summaries, Reporters Without Borders (RSF), a non-profit public-interest organization headquartered in Paris, had implored the company to pull the feature after a series of errors. Other organizations, including the British Broadcasting Corporation (BBC) and the National Union of Journalists, both based in London, also complained to Apple and asked the Silicon Valley-based company to shut down its news summarization feature.

“AIs are probability machines, and facts can’t be decided by a roll of the dice,” said Vincent Berthier, the head of RSF’s Technology and Journalism Desk. “RSF calls on Apple to act responsibly by removing this feature. The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility and a danger to the public’s right to reliable information on current affairs. The European AI Act, despite being the most advanced legislation in the world in this area, did not classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. This gap must be filled immediately.”

Apple’s mishap did no favors for an industry that has seen the public’s trust in its work erode over the past 20 years. According to a recent Gallup poll, for the third consecutive year, more U.S. adults have no trust at all in the media (36%) than trust it a great deal or a fair amount (31%).

“Gallup first asked this question in 1972 and has measured it in most years since 1997,” said Megan Brenan, senior editor at Gallup. “In three readings in the 1970s, trust ranged from 68% to 72%, yet by Gallup’s next readings in the late 1990s and early 2000s, smaller majorities of 51% to 55% trusted the news media.”

On January 7, 2025, the National Union of Journalists (NUJ) issued a lengthy statement also asking Apple to discontinue the feature. “The NUJ has repeatedly raised its concerns over the use of generative AI within journalism, and the harm caused to the information ecosystem where developers fail to meet their responsibilities, resulting in reckless and dangerous practices. Recent incidents with Apple Intelligence serve as a stark warning of the consequences of unethical and irresponsible use of AI.”

A July 2024 study conducted jointly by Cornell University, the University of Washington, and the University of Waterloo (Ontario, Canada) found that even top AI models are prone to hallucination, generating information that is unsupported or incorrect. The authors wrote:

“Large Language Models (LLMs) have made significant progress in generating coherent texts. Despite these advancements, ensuring the factual accuracy of the generated content remains a formidable challenge. Hallucinations – instances where LLMs generate information that is unsupported or incorrect – pose a major obstacle to these models’ reliable and safe deployment, particularly in high-stake applications. This issue becomes more serious as users trust the plausible-looking outputs from advanced LLMs.”

Google has also had its share of hiccups with AI Overviews, the generative responses that appear at the top of Google Search results; the feature reportedly told users to put glue on pizza and to eat rocks.
