Recent studies have revealed that large language models are prone to spreading misinformation, particularly when addressing statements about facts, conspiracies, and controversies. The research highlights frequent errors, contradictions, and the repetition of harmful information in ChatGPT. It has been pointed out that context and the manner of questioning may influence the extent to which the model "agrees" with misinformation. Given that these models may learn incorrect information during their training process, this raises concerns about their potential dangers.
Research Reveals Issues of Large Language Models Spreading Misinformation
站长之家
64
© Copyright AIbase Base 2024, Click to View Source - https://www.aibase.com/news/4396