As artificial intelligence imaging technology becomes increasingly prevalent, Google has announced that it will introduce a new AI editing indicator in the Google Photos app starting next week. Photos edited with AI features such as Magic Editor, Magic Eraser, and Zoom Enhance will display an "Edited with Google AI" label at the bottom of the "Details" section in the app.
This update comes more than two months after Google released the Pixel 9 phones with multiple AI photo editing features. However, the labeling method has sparked some controversy. Although Google says the move is meant to "further enhance transparency," its actual effect is questionable: the photos themselves carry no visible watermark, so users cannot tell at a glance whether a photo has been AI-processed when it is shared on social media, sent in messages, or browsed day to day.
For editing features like Best Take and Add Me that do not use generative AI, Google Photos will also record the edit in the photo's metadata, but will not surface it under the Details tab. These features are mainly used to combine multiple photos into a single image.
Michael Marconi, the communications manager for Google Photos, told TechCrunch: "This work is not yet complete. We will continue to gather feedback, strengthen and improve security measures, and evaluate other solutions to increase the transparency of generative AI editing." Although the company has not explicitly stated whether it will add visible watermarks in the future, it has not completely ruled out this possibility.
It is worth noting that all photos edited with Google AI already carry AI editing information in their metadata. The new feature simply surfaces that information in a more discoverable place under the Details tab. Even so, its practical effectiveness is doubtful, since most users never inspect metadata or the details panel when browsing images online.
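For readers curious what "AI editing information in the metadata" can look like in practice, the sketch below shows one way to check a photo for such a marker. It is a minimal sketch resting on an assumption: that the edit is recorded using the IPTC "digital source type" vocabulary inside the image's embedded XMP packet, as IPTC guidance recommends for AI-generated or AI-edited media. The function name and the marker strings are illustrative, not confirmed details of Google's implementation; the snippet uses Pillow's getxmp(), which requires Pillow 8.2+ and the defusedxml package.

```python
# A minimal sketch, assuming AI edits are recorded via the IPTC
# "digital source type" vocabulary in the image's XMP metadata.
# Marker strings and function name are illustrative, not confirmed.
from PIL import Image  # Pillow >= 8.2; getxmp() also needs defusedxml

# IPTC NewsCodes terms for algorithmically generated or edited media.
AI_SOURCE_HINTS = (
    "trainedAlgorithmicMedia",               # fully generated content
    "compositeWithTrainedAlgorithmicMedia",  # AI-assisted edit/composite
)

def looks_ai_edited(path: str) -> bool:
    """Return True if the photo's XMP packet mentions an AI source type."""
    with Image.open(path) as img:
        xmp = img.getxmp()  # parses the embedded XMP packet into a dict
    # Crude but sufficient for a quick check: scan the serialized packet.
    return any(hint in str(xmp) for hint in AI_SOURCE_HINTS)

if __name__ == "__main__":
    import sys
    for photo in sys.argv[1:]:
        verdict = "AI marker found" if looks_ai_edited(photo) else "no AI marker"
        print(f"{photo}: {verdict}")
```

Scanning the serialized packet as a string is deliberately crude; a more careful tool would parse the RDF structure properly, and a command-line utility such as exiftool can dump the same fields for manual inspection. The point the snippet illustrates is the article's: this information exists, but only for those who go looking.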
Of course, adding a visible watermark within the photo frame is not a perfect solution either: such watermarks can easily be cropped or edited out, so the problem would persist. As Google's AI image tools become more widespread, synthetic content is likely to proliferate online, making it ever harder for users to distinguish real from fake.
Google's current metadata-based approach largely relies on platforms to surface AI labels for users. Meta has already implemented this practice on Facebook and Instagram, and Google plans to flag AI images in search results later this year. Progress on other platforms, however, has been slower.
This controversy highlights an important question in the development of AI technology: how to safeguard content authenticity and users' right to know while still promoting innovation. Google has taken a first step toward greater transparency, but clearly more effort and refinement are needed to keep synthetic content from misleading users.