The JFrog security team has identified at least 100 malicious AI/ML models on the Hugging Face platform. Some of these models can execute code on the victim's machine, giving attackers a persistent backdoor. The researchers found PyTorch and TensorFlow Keras models with malicious payloads on the platform, including a PyTorch model uploaded by the user baller423 that establishes a reverse shell to a specified host. Some of these malicious models may have been uploaded by security researchers probing the platform for vulnerabilities in order to earn bug bounties.
Hundreds of Malicious AI Models Discovered on Hugging Face Platform

Solidot
This article is from AIbase Daily