Google has finally taken action against nonconsensual deepfake content. Eight months ago, the deepfake incident involving Taylor Swift drew widespread attention, prompting tech companies and legislators to take the issue seriously. Henry Ajder, an expert on generative artificial intelligence, points out that we have reached a critical turning point: increased consumer awareness and legislative pressure make it impossible for tech companies to ignore the problem any longer.
Google announced last week that it will take measures to keep pornographic deepfake content out of search results. It will streamline the process for victims to request the removal of nonconsensual pornographic images, filter out related pornographic search results, and remove duplicate images. This means that if someone searches for deepfake content under a person's name, Google will instead try to surface high-quality non-pornographic content, such as related news articles. Ajder welcomes the move, believing it will significantly reduce the exposure of nonconsensual pornographic deepfake content.
However, although Google's action is a positive start, much work remains. Earlier this year, I discussed several ways to combat nonconsensual pornographic deepfakes, including stronger regulation, watermarking technology, and protective tools. Watermarks and protective tools, however, are still experimental and their effectiveness is inconsistent, while regulatory change is progressing only gradually. The UK, for example, has banned the creation and distribution of nonconsensual pornographic deepfake content, leading sites such as "Mr DeepFakes" to block access for UK users.
In the EU, the AI Act has come into effect, requiring creators of deepfakes to clearly disclose that the material was generated by artificial intelligence. The US Senate has also passed the DEFIANCE Act, giving victims a channel to seek civil remedies, but the legislation still needs to pass the House of Representatives to take effect.
Ajder notes that while Google can identify high-traffic websites and work to remove deepfake sites from search results, it could do more. He calls for a re-evaluation of how nonconsensual deepfakes are categorized, arguing that such content should be treated as strictly as child sexual abuse material. He emphasizes that internet platforms need more robust measures to ensure such content cannot be easily created or accessed.
Key Points:
🌟 Google takes action, streamlining the process for victims to request the removal of nonconsensual pornographic deepfake content.
📜 The UK has banned the creation and distribution of nonconsensual deepfake content, prompting related websites to block UK users.
💡 Ajder calls for a re-evaluation of how nonconsensual deepfake content is categorized, stressing that it should be treated as severely as child sexual abuse material.