YouTube, the world's largest video platform, has introduced a mechanism that lets people request the removal of AI-generated content that mimics their appearance or voice. The move extends the platform's so far relatively lenient approach to regulating the technology.

Although the mechanism was quietly added to YouTube's privacy guidelines, which were updated last month, it was only noticed by TechCrunch this week. YouTube treats the use of AI to "change or create synthetic content that looks or sounds like you" as a potential privacy violation rather than as a misinformation or copyright issue.

However, filing a removal request is no guarantee that it will be granted, and YouTube's criteria leave considerable ambiguity. YouTube says it will weigh factors such as whether the content is disclosed as "altered or synthetic," whether the person "can be uniquely identified," and whether the content is "convincing." There is also a large and familiar loophole: whether the content counts as parody or satire, or, even more vaguely, whether it has "public interest" value. These loose qualifying conditions suggest that YouTube is taking a relatively soft stance on the issue, far from an anti-AI position.

As with its other privacy complaints, YouTube accepts only first-party claims. Third-party claims are considered only in exceptional cases, such as when the person being impersonated has no internet access, is a minor, or is deceased.

If a claim is upheld, YouTube gives the offending uploader 48 hours to address the complaint, which can mean trimming or blurring the video to remove the problematic content, or deleting it entirely. If the uploader fails to act in time, the video is subject to further review by the YouTube team.

These guidelines are all well and good, but the real question is how YouTube will apply them in practice. As TechCrunch notes, YouTube, as a Google-owned platform, has its own stake in AI, having released music-generation tools and a bot that summarizes comments on short videos, among other things.

That may be why the new removal-request option was launched quietly, as a modest continuation of the "responsible AI" initiative begun last year, which as of March requires realistic AI-generated content to be disclosed.

Key Points:

- 💡 YouTube has launched a mechanism for requesting the removal of AI content that imitates a person's appearance or voice.

- 💡 Removal is not guaranteed, and YouTube's criteria leave considerable ambiguity.

- 💡 Third-party claims are considered only in exceptional cases, such as when the impersonated person has no internet access, is a minor, or is deceased.