According to a recent report by Wired, Apple's web crawler, Applebot, has faced collective resistance from several mainstream media outlets, sparking widespread industry discussion about AI content scraping.

Since its first appearance in November 2014 and its official confirmation by Apple in May 2015, Applebot has quietly worked to improve the search capabilities of Siri and Spotlight. However, recent investigations show that several well-known media outlets and platforms, including Facebook, Instagram, The New York Times, and the Financial Times, have chosen to block the crawler, denying it access to their website content.

This resistance is carried out primarily through the robots.txt file. One analysis found that roughly 6% to 7% of websites block Applebot-Extended, the agent publishers can refuse in order to keep their content out of Apple's AI training, while another study put the figure at around 25% of the news sites it examined. The phenomenon is not limited to Apple: OpenAI's and Google's crawlers face similar treatment, blocked by 53% and 43% of news websites, respectively.
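For readers unfamiliar with the mechanism, the minimal sketch below shows how such a block looks in practice and how a well-behaved crawler is expected to interpret it, using Python's standard urllib.robotparser. The robots.txt directives and the example URL are hypothetical illustrations, not taken from any real publisher's file.

```python
# Sketch: how per-agent robots.txt rules block some crawlers while allowing others.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt a publisher might serve to opt out of AI-related
# crawling while still allowing ordinary indexing by other user agents.
SAMPLE_ROBOTS_TXT = """\
User-agent: Applebot-Extended
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(SAMPLE_ROBOTS_TXT.splitlines())

# Check which crawlers would be allowed to fetch an example article URL.
for agent in ("Applebot-Extended", "GPTBot", "Googlebot"):
    allowed = parser.can_fetch(agent, "https://example.com/articles/some-story")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Because the rules are written per user agent, a site can block Applebot-Extended (and thereby opt out of AI training) while still letting plain Applebot index its pages for Siri and Spotlight search.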

Image source note: The image is generated by AI, provided by the image licensing service Midjourney

Although Applebot-Extended's blocking rate is relatively low, experts believe this is not because the media is particularly fond of it, but because Applebot has a lower public profile than other crawlers and has therefore drawn less attention. This explanation reveals how complicated the current AI content-scraping landscape has become.

Behind this "social cold war" lies the complex attitude of the media industry towards AI technology. On one hand, AI technology has brought revolutionary changes to content distribution and user experience; on the other hand, unauthorized content scraping has raised issues such as copyright protection and data privacy.

For Apple, Applebot's predicament is a clear warning. Striking a balance between technological innovation and content rights has become a difficult problem for tech giants. It is also a wake-up call for the entire industry, a reminder to re-examine the content ecosystem of the AI era.

As AI technology continues to advance, similar controversies are likely to intensify. How to formulate reasonable content-scraping rules, how to protect creators' rights, and how to strike a balance between openness and protection are challenges the entire internet industry must face together.

In this contest between AI and traditional media, there are no absolute winners. Going forward, we may need to build a more transparent and fair content ecosystem, one that protects original work while leaving room for technological innovation. Only then can AI technology and the content industry achieve a genuine win-win and drive the healthy development of the whole industry.