Hugging Face AI Platform Exposes 100 Malicious Code Execution Models
Researchers have discovered 100 malicious machine learning models hosted on the Hugging Face AI platform that could allow attackers to inject and execute malicious code on users' machines. These models abuse loading mechanisms such as PyTorch's pickle-based serialization, which can run arbitrary code the moment a model file is deserialized, significantly raising the security stakes. To mitigate these risks, AI developers should adopt model-scanning and safe-loading tools that harden the model supply chain. The discovery underscores the threat malicious AI models pose to user environments and the need for ongoing vigilance and stronger security measures.
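The article does not detail the exact payloads, but the underlying mechanism is well known: classic PyTorch checkpoints are pickle files, and Python's pickle format lets any object nominate an arbitrary callable to be invoked during deserialization. The sketch below (class and variable names are illustrative, and a harmless callable stands in for a real payload) shows why simply loading an untrusted model file can execute attacker-chosen code:

```python
import pickle

class NotATensor:
    """Any picklable object can name an arbitrary callable via __reduce__.
    pickle.loads() invokes that callable during deserialization, before the
    caller ever inspects the result. A real attack would return something
    like (os.system, ("<shell command>",)) instead of the harmless call here."""
    def __reduce__(self):
        # Harmless stand-in payload: pickle will call list((1, 2, 3)) on load.
        return (list, ((1, 2, 3),))

tainted_bytes = pickle.dumps(NotATensor())
result = pickle.loads(tainted_bytes)  # attacker-chosen callable runs here
print(result)  # the deserialized "model" is whatever the callable returned
```

As a practical mitigation, recent PyTorch versions support `torch.load(path, weights_only=True)`, which restricts unpickling to tensor data, and safer formats such as safetensors avoid pickle entirely.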
© Copyright AIbase 2024. Source: https://www.aibase.com/news/6182