Los Angeles Times billionaire owner Patrick Soon-Shiong recently announced in a letter to readers that the newspaper will be using artificial intelligence (AI) to add a "voice" label to some articles. Articles deemed to express an "opinion" or written from a "personal perspective" will be marked as "voice." The AI will also generate a set of "Insights" at the bottom of articles, presented as bullet points, including what it labels as "different perspectives on the topic."


Soon-Shiong stated in his letter that the "voice" label will not be limited to opinion columns but will also encompass news commentary, critiques, and reviews. Any article perceived as taking a stance or employing a personal viewpoint may be flagged as "voice." He believes that offering a wider range of perspectives will support the newspaper's journalistic mission and help readers better understand the challenges facing the nation.

However, this change has not been welcomed by members of the Los Angeles Times union. Matt Hamilton, the union's vice president, stated that while the union supports measures that help readers distinguish between news reports and opinion pieces, they don't believe AI-generated analysis without editorial review will enhance media credibility.

Questionable results emerged shortly after the changes took effect. The Guardian noted that, on an opinion piece about the dangers of unregulated AI use in historical documentaries, the LA Times' AI tool claimed the article was "generally aligned with a center-left viewpoint" and suggested that "AI democratizes historical narratives." On a report about California cities electing Ku Klux Klan members to city councils in the 1920s, an AI-generated insight claimed that local historical accounts sometimes portray the KKK as a product of "white Protestant culture" responding to societal shifts rather than as an explicitly hate-driven movement, thereby downplaying its ideological threat. Even if that statement contains a kernel of truth, its presentation is clumsy and contradicts the article's central point.

Ideally, AI tools of this kind would operate under editorial oversight, which could prevent issues like those now facing the Los Angeles Times. Without such oversight, errors accumulate: MSN's AI news aggregator has wrongly recommended tourist destinations, and Apple's notification summaries have misinterpreted a BBC headline.

Other media organizations also employ AI in their news operations, but typically not to publish analysis that has skipped editorial review. Bloomberg, USA Today, the Wall Street Journal, the New York Times, and the Washington Post are among the many outlets using the technology in various ways.

Key Points:

🌐 The Los Angeles Times is introducing AI to add "voice" labels to articles and to generate "Insights" summaries.

📰 Union members have voiced concerns about the AI-generated analysis, arguing that the lack of editorial review will erode trust in the media.

🔍 Issues with the AI analysis have already surfaced, with some insights contradicting the main points of the articles, raising public concern.