Researcher Naphtali Deutsch discovered that hundreds of servers running open-source large language model (LLM) building tools, along with dozens of vector databases, are leaking large amounts of sensitive information. The problem stems from companies rushing to fold AI into their workflows while neglecting the security of the supporting tools. By breaking into 438 exposed Flowise servers, he was able to read stored sensitive data, including GitHub access tokens, OpenAI API keys, Flowise passwords, and API keys. Additionally, he found...
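To illustrate the kind of exposure described here, the following is a minimal sketch, not the researcher's actual tooling, of how a publicly reachable Flowise instance could be probed for API responses that come back without any authentication. The endpoint paths (`/api/v1/chatflows`, `/api/v1/credentials`) and the target URL are assumptions used purely for illustration.

```python
# A minimal sketch (not the researcher's actual method) of checking whether
# a Flowise instance answers API requests without authentication.
# Endpoint paths below are assumptions for illustration only.
import requests


def check_flowise_exposure(base_url: str, timeout: float = 5.0) -> None:
    """Probe a Flowise host for API endpoints that respond without auth."""
    # Hypothetical endpoints that might list chatflows or stored credentials.
    for path in ("/api/v1/chatflows", "/api/v1/credentials"):
        url = base_url.rstrip("/") + path
        try:
            resp = requests.get(url, timeout=timeout)
        except requests.RequestException as exc:
            print(f"{url}: request failed ({exc})")
            continue
        if resp.status_code == 200:
            # A 200 response with no credentials supplied suggests the
            # instance is exposed to anyone who can reach it.
            print(f"{url}: responded 200 without authentication")
        else:
            print(f"{url}: HTTP {resp.status_code}")


if __name__ == "__main__":
    # Example host; only probe instances you are authorized to test.
    check_flowise_exposure("http://localhost:3000")
```

If an endpoint like this returns stored configuration or credential records to an unauthenticated caller, any secrets saved in the tool (GitHub tokens, OpenAI API keys, passwords) are effectively public, which is the failure mode the research highlights.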