AI Researchers Reveal Key Vulnerability in Large Language Models That Can Be Replicated at Low Cost
站长之家
Recent studies have uncovered a critical vulnerability in large language models that could enable private-information leaks and targeted attacks. The attack method, dubbed "model parasitism," allows a model to be replicated at low cost, and it transfers successfully between closed-source and open-source machine learning models. Despite the immense potential of large language models, businesses should weigh these cybersecurity risks seriously.
© Copyright AIbase 2024. Source: https://www.aibase.com/news/2090