The JFrog security team has identified at least 100 malicious AI/ML models on the Hugging Face platform, some of which can execute code on a victim's machine and plant a persistent backdoor. The researchers found PyTorch and TensorFlow Keras models carrying malicious payloads, including a PyTorch model uploaded by the user baller423 that establishes a reverse shell to a specified host. Some of these malicious models may have been uploaded by security researchers probing the platform for vulnerabilities in pursuit of bug bounties.
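The typical attack vector here is Python's pickle format, which PyTorch uses for model serialization: unpickling untrusted data can run arbitrary code via an object's `__reduce__` hook. Below is a minimal, harmless sketch of that mechanism (not the actual baller423 payload); the class name and flag are invented for illustration, and a real payload would invoke something like `os.system` to spawn a reverse shell.

```python
import builtins
import pickle


class MaliciousPayload:
    """Demonstrates how pickle's __reduce__ hook executes code at
    deserialization time -- the mechanism abused in malicious
    pickle-based model files."""

    def __reduce__(self):
        # A real payload would run a shell command here; this sketch
        # merely sets a harmless flag to prove code execution occurred.
        return (exec, ("import builtins; builtins.PWNED = True",))


blob = pickle.dumps(MaliciousPayload())

# Simply loading the untrusted blob triggers the embedded code --
# no method on the resulting object ever needs to be called.
pickle.loads(blob)
print(getattr(builtins, "PWNED", False))  # → True
```

This is why loading model files from untrusted sources is dangerous, and why safer alternatives such as weights-only loading or the safetensors format are recommended.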