IBM Research: AI Chatbots Easily Deceived into Generating Malicious Code
站长之家
Research from IBM indicates that large language models such as GPT-4 can be deceived into generating malicious code or giving false security advice. The researchers found that only basic English proficiency and some background knowledge of a model's training data are needed to trick AI chatbots. Models vary in their susceptibility to deception, with GPT-3.5 and GPT-4 proving more prone than others tested. The threat level of these newly discovered vulnerabilities is rated as moderate. However, if hackers were to deploy such deceived models on the internet, the chatbots could be used to dispense dangerous security advice or harvest users' personal information.
© AIbase 2024. Source: https://www.aibase.com/news/304