Apple's recently launched artificial intelligence feature has drawn widespread attention and controversy. The feature is designed to provide users with news summaries but has been criticized for disseminating incorrect information. The international press freedom organization Reporters Without Borders has publicly called on Apple to withdraw the feature immediately, citing a notification sent to users in which it incorrectly summarized a BBC report.
Specifically, Apple's AI system erroneously stated that Luigi Mangione, the suspect in the murder of UnitedHealthcare's CEO, had shot himself. The BBC's original report said no such thing. The BBC says it has contacted Apple about the issue and hopes to resolve it quickly, but it has not confirmed whether Apple has responded.
Vincent Berthier, head of the technology and journalism desk at Reporters Without Borders, said Apple should be held accountable for the incident and should withdraw the feature promptly. He noted that AI systems are probability machines, and that facts cannot be decided by a roll of the dice. He stressed that automatically generated misinformation not only damages the credibility of the media but also threatens the public's right to reliable information.
Additionally, the organization expressed concerns about the risks posed by emerging AI tools to the media, arguing that current AI technology is not mature enough to be used for the dissemination of public information.
In response, the BBC stated that ensuring the public's trust in the information and news it publishes is crucial, and all content, including push notifications, must be accurate and credible.
Apple announced this generative AI tool for the United States in June, promoting its ability to condense specific content into concise paragraphs, bullet points, or tables. The feature is meant to help users access news conveniently, letting them opt in to receive these grouped push notifications on their iPhone, iPad, and Mac devices. However, since the feature's official rollout at the end of October, users have found that it also incorrectly summarized a New York Times report, claiming that Israeli Prime Minister Netanyahu had been arrested, when in fact the International Criminal Court had merely issued an arrest warrant for him.
The core problem in this incident is that news organizations have no control over the summaries Apple's AI generates. Some publishers have chosen to use AI to assist in writing articles, but that is a decision they make independently. Apple's AI summaries, by contrast, are delivered under the publisher's name, which both spreads potential misinformation and jeopardizes the media's credibility.
Apple has not responded to this incident, and the controversy surrounding this AI feature reflects the challenges faced by the news publishing industry in a rapidly changing technological environment. Since the launch of ChatGPT, several tech giants have rolled out their own large language models, many of which have been accused of using copyrighted content, including news articles, during their training processes. While some media organizations have taken legal action regarding this, others, like Axel Springer, have chosen to reach licensing agreements with the relevant developers.
Key Points:
📰 Apple's newly launched AI feature has faced widespread criticism for misinformation.
🚨 Reporters Without Borders has called on Apple to withdraw the feature, arguing that AI is not yet mature enough to be used for news summaries.
🗞️ The BBC states that ensuring the credibility of information is crucial and has contacted Apple regarding this issue.