Recent studies have revealed that large language models such as OpenAI's ChatGPT frequently repeat harmful misinformation. Researchers at the University of Waterloo in Canada systematically tested ChatGPT's comprehension abilities and found that GPT-3 contradicted itself and repeated harmful misinformation in its answers. Using a variety of survey templates, they queried more than 1,200 distinct statements and confirmed the presence of the problem.