LLM Guard is an open-source toolkit designed to enhance the security of large language models and to streamline their secure adoption in enterprises. It offers a wide range of evaluators for both the inputs and the outputs of LLMs, covering sanitization, detection of harmful language and data leakage, and prevention of prompt injection and jailbreak attacks. By providing a one-stop shop of essential tools, the toolkit aims to become the preferred open-source security solution on the market. The release of LLM Guard will facilitate the broader application of large language models in enterprises by offering them better security and control.
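
In practice, these evaluators are composed as chains of input scanners (run on the prompt before it reaches the model) and output scanners (run on the model's response before it reaches the user). The sketch below illustrates that pattern; the scanner names, the `Vault`, and the `scan_prompt`/`scan_output` helpers follow LLM Guard's documented usage, but exact names and signatures may differ between versions, and the LLM call itself is a placeholder.

```python
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import Anonymize, PromptInjection, Toxicity
from llm_guard.output_scanners import Deanonymize, NoRefusal, Sensitive
from llm_guard.vault import Vault

# The vault stores entities removed by Anonymize so Deanonymize can restore them.
vault = Vault()

# Scanners applied to the user prompt before it is sent to the model.
input_scanners = [Anonymize(vault), Toxicity(), PromptInjection()]
# Scanners applied to the model's response before it is returned to the user.
output_scanners = [Deanonymize(vault), NoRefusal(), Sensitive()]

prompt = "Summarize this contract and send it to jane.doe@example.com."

# Each scan returns the (possibly sanitized) text, a per-scanner validity flag,
# and a per-scanner risk score.
sanitized_prompt, valid, scores = scan_prompt(input_scanners, prompt)
if not all(valid.values()):
    raise ValueError(f"Prompt blocked by input scanners: {scores}")

response = "..."  # placeholder: call your LLM of choice with sanitized_prompt

sanitized_response, valid, scores = scan_output(
    output_scanners, sanitized_prompt, response
)
if not all(valid.values()):
    raise ValueError(f"Response blocked by output scanners: {scores}")
print(sanitized_response)
```

Keeping the scanners in plain lists makes the security policy declarative: a deployment can tighten or relax its checks by swapping scanner instances in and out, without changing the surrounding application code.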