Recent studies have revealed that large language models are prone to spreading misinformation, particularly when responding to statements involving facts, conspiracies, and controversies. This research highlights frequent errors, contradictions, and instances of ChatGPT repeating harmful information. It has also been pointed out that the surrounding context and the way a question is phrased can influence how readily the model "agrees" with misinformation. Because these models may absorb incorrect information during training, such behavior raises concerns about the dangers they pose.