Researchers at Google have recently issued a warning: generative AI (GenAI) is flooding the internet with fake content. Coming from Google, this is not just a cautionary note but also an act of self-reflection.

Ironically, Google plays a dual role in this battle between truth and falsehood. On one hand, it is a major promoter of generative AI; on the other, it has itself been a source of misinformation. Google's AI Overviews feature has served up absurd suggestions such as putting glue on pizza and eating rocks, answers that ultimately had to be removed by hand.

Google's research team analyzed roughly 200 news reports of generative AI misuse and found that manipulating a person's likeness and falsifying evidence were the most common forms of abuse, typically aimed at swaying public opinion, enabling scams or fraud, or turning a profit. The risks of generative AI have not yet risen to the level of an "existential threat," but the harms are happening now and could well worsen.

The researchers also found that most GenAI abuse involved ordinary use of the systems as designed, with no "jailbreaking" required; such within-capability misuse accounted for about 90% of cases. Because GenAI tools are widely available, easy to use, and hyper-realistic, low-grade forms of abuse can be churned out in a steady stream. The cost of generating false information is simply too low!

Because much of the study's source material comes from media coverage, a natural question is whether its conclusions inherit the media's biases. Outlets tend to cover sensational incidents, which could skew the dataset toward particular types of abuse. 404 Media points out that there are likely many instances of generative AI abuse that simply never get reported.

The "fences" of AI tools can be cleverly bypassed with some prompts. For example, the AI voice cloning tool from ElevenLabs can highly realistically mimic the voices of colleagues or celebrities. Users on Civitai can create AI-generated images of celebrities, and although the platform has a policy against NCII (non-consensual intimate images), nothing prevents users from using open-source tools on their own machines to generate NCII.

When misinformation runs rampant, the resulting chaos on the internet severely tests people's ability to tell truth from falsehood, trapping us in a perpetual state of doubt: "Is this real?" Left unaddressed, the pollution of public data with AI-generated content could hamper information retrieval and distort collective understanding of socio-political reality and scientific consensus.

Google itself has helped accelerate the proliferation of fake generative AI content; the bullet it fired years ago has now come back around to hit it. This research may mark the beginning of Google's path to self-redemption, and it serves as a wake-up call for the entire internet community.

Paper link: https://arxiv.org/pdf/2406.13843