AIbase
Product LibraryTool Navigation

llm-security-prompt-injection

Public

This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.

Creat2023-11-22T07:05:13
Update2025-03-20T10:25:33
38
Stars
0
Stars Increase