LA Times Sparks Heated Debate With New Tools. Should Other Media Take Note?
By Movieguide® Contributor
The Los Angeles Times has sparked a debate in the entertainment industry after introducing an AI feature that automatically rates each article's political bias.
“The purpose of Insights [the name of the tool] is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article,” LA Times owner Dr. Patrick Soon-Shiong wrote in a letter to his readers. “I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”
“We added these new features to encourage audience engagement and interaction with The Times and our content,” Soon-Shiong continued. “I believe the media is evolving, and The Times is well-positioned to lead the way.”
Along with the AI-generated political bias evaluation, Insights will also provide readers with links to articles from other perspectives, helping them form an educated opinion rather than reinforcing an echo chamber.
The addition of the AI tool, however, has already proved controversial: the L.A. Times union has voiced its opposition to the tool's use on editorial content.
“We don’t think this approach – AI-generated analysis unvetted by editorial staff – will do much to enhance the trust in the media,” wrote L.A. Times Guild vice chair Matt Hamilton. “Quite the contrary, this tool risks further eroding confidence in the news. And the money for this endeavor could have been directed elsewhere: supporting our journalists on the ground who have had no cost-of-living increase since 2021.”
Beyond the debate over whether the tool was money well spent, there are also questions about the accuracy of Insights, especially after a February study found that AI chatbots are bad at summarizing the news. Apple even suspended its AI-generated news notification summaries on Apple News after the feature produced too many false alerts.
If the AI behind Insights falls into these same traps, it could misidentify the bias of a piece or direct users to articles unrelated to the one they were reading.
Movieguide® previously reported:
The BBC recently put four of the most popular AI chatbots to the test and found that all of them performed underwhelmingly when asked to accurately summarize news articles.
During the study, researchers fed 100 news articles to OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini and Perplexity AI and asked each for an accurate summary. The responses were disappointing: 51% of all answers included significant issues, and 19% introduced factual errors, such as made-up facts, names and dates.
According to The Verge, examples of the inaccuracies included Gemini claiming that the UK’s National Health Service “advises people not to start vaping, and recommends that smokers who want to quit should use other methods,” when the NHS in fact does recommend vaping to smokers who want to quit. Another error? ChatGPT said that Ismail Haniyeh was still part of Hamas leadership in December 2024, even though he was assassinated in July 2024.
Furthermore, these chatbots struggled to differentiate between opinion pieces, editorials and news stories, often leaving out crucial information.
“The price of AI’s extraordinary benefits must not be a world where people searching for answers are served distorted, defective content that presents itself as fact,” BBC News and Current Affairs CEO Deborah Turness wrote. “In what can feel like a chaotic world, it surely cannot be right that consumers seeking clarity are met with yet more confusion.”