Research Findings: GPT-3 Often Makes Mistakes and Repeats Harmful Misinformation
Source: 站长之家 (Chinaz)
Recent research has found that large language models such as OpenAI's ChatGPT frequently repeat harmful misinformation. Researchers at the University of Waterloo in Canada systematically tested the model's comprehension abilities and found that GPT-3 contradicted itself in its answers and repeated harmful misinformation. Using a variety of survey templates, they queried more than 1,200 distinct statements and confirmed the presence of this problem.
© AIbase 2024. Source: https://www.aibase.com/news/4494