Recent studies have uncovered critical vulnerabilities in large language models that could lead to private-information leaks and targeted attacks. One such attack, dubbed "model parasitism," replicates a victim model at low cost and transfers successfully between closed-source and open-source machine learning models. For all the potential of large language models, businesses deploying them should weigh these cybersecurity risks seriously.
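
The low-cost replication described above can be sketched in miniature. The toy example below is purely illustrative and assumes nothing about the actual attack in the studies: the "teacher" is a stand-in black-box scorer, and the attacker simply collects query/response pairs and fits a cheap "student" to imitate it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "teacher" model: the attacker can only
# observe its outputs, not its weights. (A stand-in, not a real model.)
W_true = rng.normal(size=(4, 3))
def teacher(x):
    return x @ W_true  # raw scores for 3 classes

# Step 1: the attacker collects query/response pairs at low cost.
X = rng.normal(size=(500, 4))
Y = teacher(X)

# Step 2: the attacker fits a student by least squares on the responses.
W_student, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Step 3: the student now agrees with the teacher on fresh inputs.
X_new = rng.normal(size=(100, 4))
agreement = np.mean(
    teacher(X_new).argmax(axis=1) == (X_new @ W_student).argmax(axis=1)
)
print(f"label agreement on held-out queries: {agreement:.2f}")
```

Because the student never sees the teacher's internals, the same recipe works whether the victim is closed- or open-source, which is the transferability the article highlights.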