Two recently released research papers have drawn widespread attention by arguing that the output of generative AI products is best characterized as "bullshit." The paper titled "ChatGPT is bullshit" argues that generative AI's indifference to accuracy poses serious problems for public servants, especially officials who are legally obligated to tell the truth.
Authors Michael Townsen Hicks, James Humphries, and Joe Slater stress that the false output of generative AI cannot accurately be described as "lies" or "hallucinations." Unlike deliberate deception, bullshit, in the philosopher Harry Frankfurt's sense, is speech that is indifferent to the truth and produced only to create a particular impression. They argue that calling AI errors "hallucinations" misleads the public into thinking that these machines are, in some sense, still trying to convey something they "believe."
As they put it, "Calling these errors 'bullshit' rather than 'hallucinations' is not only more accurate but also helps improve public understanding of the technology." The point underscores the value of precise terminology for describing AI errors, particularly at a time when clearer science and technology communication is urgently needed.
Meanwhile, a second paper on large language models (LLMs) examines the legal and ethical environment in the EU. It concludes that current AI laws and regulations are inadequate to prevent the harm caused by the "bullshit" these systems generate. Authors Sandra Wachter, Brent Mittelstadt, and Chris Russell propose introducing regulations similar to those governing publishing, centered on avoiding "careless speech" that could cause social harm.
They note that such an obligation would ensure that no single entity, public or private, acts as the sole arbiter of truth. They also warn that the "careless speech" of generative AI risks turning truth into a matter of frequency and majority opinion rather than verifiable fact.
Key Points:
📌 Research teams argue that the false output of generative AI should be called "bullshit," not "hallucinations."
📌 Existing laws and regulations are insufficient to effectively prevent the social harm caused by AI-generated misinformation.
📌 There are calls for new regulations that guard against "careless speech," ensuring that truth does not become a product of majority opinion.