Apple's AI headlines are more of a break from reality than breaking news
Will a promised update fix the facts?
- Apple’s AI-generated news summaries have come under fire.
- The fake headlines are upsetting news organizations by undermining trust.
- Apple has promised an update to fix the issues.
Apple's much-hyped Apple Intelligence is facing a crisis of trust after several of its attempts at summarizing news headlines produced inaccurate and sometimes bizarre results. The feature has obvious appeal for iPhone owners: it condenses stacks of notifications into digestible snippets. But instead of accurately summarizing, the AI occasionally indulges in creative writing.
It's gotten bad enough that major news organizations are complaining that the AI-generated headlines mislead readers, and asking Apple to fix or remove the tool before it further embarrasses their newsrooms. There have been a few particularly prominent examples since Apple debuted the feature.
In December, Apple Intelligence wrote a headline for a BBC story about Luigi Mangione, the accused killer of UnitedHealthcare CEO Brian Thompson, claiming Mangione had shot himself. That detail was entirely invented by the algorithm. The broadcaster wasn't thrilled about being blamed for something it didn’t write.
Similarly, a New York Times story did not claim Israel's Prime Minister Benjamin Netanyahu had been arrested, despite what Apple's AI headline said. Apple only responded this week in a statement:
"Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback," the company said in the statement. "A software update in the coming weeks will further clarify when the text being displayed is summarization provided by Apple Intelligence. We encourage users to report a concern if they view an unexpected notification summary."
When TechRadar reached out for comment, Apple said it had nothing to add to the statement. And while it's good that Apple has plans to fix the issue, it does feel a little like putting a “Wet Paint” sign on a wall after you already have a red stripe down the back of your shirt.
Headlining AI
Errors are endemic to generative AI; hallucinations appear no matter which model you use, which can make tools built by Apple, Google, or OpenAI unpredictable. These systems are trained to process and summarize information, but they're not immune to confusion.
Google faced a similar backlash last year when its AI Overviews, summaries shared on top of search results, delivered some questionable facts. One could argue that errors like these are just growing pains, but when it comes to news, mistakes aren’t easily forgiven or forgotten.
News brands rely on people trusting their reporting, so this isn't as simple as chalking errors up to bad summaries. A wild claim unsupported by facts and attributed to a supposedly professional newsroom can make people unfairly distrustful of that news source. The last thing journalists and the public need is AI inaccuracies messing with headlines.
Besides rolling out that promised update, Apple will likely have plenty of fine-tuning to do for the AI headlines. That might mean stricter guardrails for the AI, or maybe a more prominent warning that the headlines are AI-generated. If Apple can't fix this, Apple Intelligence may have to be renamed Apple Imagination.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.