Recently, a research team at New York University published a study revealing how vulnerable large language models (LLMs) are to corrupted training data. The researchers found that false information amounting to just 0.001% of the training data can cause significant errors across the entire model. The finding is especially concerning in medicine, where incorrect information could directly endanger patient safety.
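To put the 0.001% figure in perspective, the short Python sketch below works out how many poisoned tokens that fraction represents at a few corpus sizes. The corpus sizes are illustrative assumptions, not numbers reported in the study.

```python
# Back-of-the-envelope arithmetic: what does 0.001% of a training
# corpus look like in absolute terms? Corpus sizes are assumptions
# chosen for illustration, not figures from the NYU study.
POISON_FRACTION = 0.001 / 100  # 0.001% expressed as a fraction

corpus_sizes = {
    "1 billion tokens": 1_000_000_000,
    "100 billion tokens": 100_000_000_000,
    "1 trillion tokens": 1_000_000_000_000,
}

for label, tokens in corpus_sizes.items():
    poisoned = int(tokens * POISON_FRACTION)
    print(f"Corpus of {label}: {poisoned:,} poisoned tokens at 0.001%")
```

Even at the largest assumed size, 0.001% works out to roughly ten million poisoned tokens, a vanishingly small slice of a trillion-token corpus.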