Adobe, a company long esteemed in the creative industry, has built a reputation as a "copyright guardian" for its stance on copyright protection. Recently, however, a quietly updated term of service has dragged the company into a storm of public criticism.

In February, Adobe quietly updated its terms of service, adding a striking clause: users must agree that Adobe may access their works through both automated and manual methods, including works protected by confidentiality agreements. Adobe may then use these works to improve its services and software through technologies such as machine learning. Users who decline the new terms are unable to continue using Adobe's software.

When the change came to light recently, it sparked strong opposition from creative professionals, digital artists, and designers, who form Adobe's core user base. They regard it as forced authorization, essentially a one-sided "tyrant clause," whose real purpose is to gather training data for Adobe's generative AI model, Firefly. Blogger Sam Santala challenged the clause on Twitter, and his post has drawn tens of millions of views.


Worried about their privacy and copyright, many users have chosen to abandon Adobe's products. Meta has taken a similar step, updating its privacy policy to allow information users share on Meta's products and services to be used to train AI. Users who do not accept the new policy are effectively left with one option: stop using social media products such as Facebook and Instagram.

As AI technology develops at breakneck speed, the struggle between technology companies and users over data privacy, content ownership, and control is intensifying. Adobe says the training data for its Firefly model comes from hundreds of millions of images in Adobe Stock, along with openly licensed images and public-domain images whose copyright has expired. Other AI image generation tools, such as Stability AI's Stable Diffusion, OpenAI's DALL-E 2, and Midjourney, have all been embroiled in copyright controversies.

Adobe is trying to stake out a differentiated position in this market, casting itself as the "white knight" of the AI arms race: it emphasizes the legality of its training data and promises to compensate users in copyright disputes arising from images generated by Adobe Firefly. But the strategy has not eased every user's concerns. Some, like senior designer Ajie, jokingly call themselves "Adobe's legitimate victims," arguing that training AI on its vast creative ecosystem may be a smart business move, but that the platform neither shares the profits with creators nor gives them a meaningful right to know how their work is used.

Adobe has also faced a string of copyright disputes overseas, deepening users' doubts about whether it truly respects creators' rights. Artist Brian Kesinger, for example, found images mimicking the style of his work being sold under his name in Adobe's stock library without his consent. The estate of photographer Ansel Adams has likewise publicly accused Adobe of selling generative AI replicas of the late photographer's work.

Under public pressure, Adobe revised its terms of service on June 19, stating explicitly that it will not use user content stored locally or in the cloud to train generative AI models. The clarification, however, has not fully dispelled creators' concerns. Some AI-focused bloggers overseas point out that the revised terms still allow Adobe to use private cloud data to train machine learning models for its non-generative AI tools, and although users can opt out of "content analysis," the convoluted opt-out process deters many of them.

User data protection rules differ across countries and regions, and these differences shape how platforms draft their terms of service. Under the General Data Protection Regulation (GDPR), for example, users in the UK and the EU have a "right to object" and can explicitly refuse to let their personal data be used to train Meta's AI models. U.S. users enjoy no comparable right to know: under Meta's current data sharing policy, content they post on Meta's social media products may already have been used to train AI without their explicit consent.

Data has been called the "new oil" of the AI era, but its "extraction" is still riddled with gray areas. Some technology companies obtain user data through deliberately vague practices, leaving users in a double bind over their personal information: uncertain ownership of digital copyright on one side and eroded data privacy on the other. The result is serious damage to user trust in these platforms.

For now, efforts to ensure that generative AI does not infringe on creators' rights remain badly insufficient, and regulation is lagging behind. Some developers and creators have taken matters into their own hands, launching a wave of "anti-AI" tools, from the artwork protection tool Glaze and the AI data-poisoning tool Nightshade to the fast-growing anti-AI community Cara. As technology companies continue to train AI models on data taken without the consent of users and creators, public anger keeps escalating.

As AI technology races ahead, striking a balance between technological innovation and user privacy, and protecting creators' rights, will require both continued industry development and steadily improving legal and regulatory measures. Users, for their part, must stay vigilant, understand their data rights, and be prepared to act to protect their creations and privacy when necessary.