Researchers have discovered roughly 100 malicious machine learning models on the Hugging Face AI platform that could allow attackers to execute code on users' machines. The malicious models exploit the pickle-based serialization used by frameworks such as PyTorch: because deserializing a pickled checkpoint can run arbitrary Python code, simply loading one of these models can compromise the host. To mitigate the risk, AI developers should adopt model-scanning tools and safer loading mechanisms when working with third-party models. The discovery underscores the threat that malicious AI models pose to user environments and the need for ongoing vigilance and stronger security measures.
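As a minimal sketch of the underlying mechanism, not the exact payloads found on Hugging Face: PyTorch's default checkpoint format is built on Python's pickle, whose deserialization can invoke arbitrary callables via an object's `__reduce__` method. The `Payload` class and `model.bin` file below are hypothetical stand-ins; `torch.load`'s `weights_only` option is one example of the safer loading mechanisms mentioned above.

```python
import pickle
import torch

# Hypothetical stand-in for a malicious payload. __reduce__ tells pickle
# to call os.system(...) when the object is deserialized; the reported
# payloads used the same mechanism to open reverse shells.
class Payload:
    def __reduce__(self):
        import os
        return (os.system, ("echo 'code executed during model load'",))

# An attacker ships this as an innocent-looking model checkpoint.
torch.save(Payload(), "model.bin")

# Unsafe: full pickle deserialization runs the payload immediately,
# before any model weights are ever inspected.
torch.load("model.bin", weights_only=False)

# Safer: weights_only=True restricts deserialization to tensors and
# primitive containers, raising an error instead of running the payload.
try:
    torch.load("model.bin", weights_only=True)
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

Formats that store only raw tensor data, such as Hugging Face's safetensors, sidestep the problem entirely because loading them never involves pickle deserialization.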