Researchers at the University of Sheffield in the UK have revealed a concerning finding: AI tools such as ChatGPT can be manipulated into generating malicious code, posing a threat to database security. Several commercial AI tools were found to contain security vulnerabilities, and a successful attack could leak confidential information from a database or disrupt its normal services. The researchers urge users to be aware of these risks and stress the need to build new communities so that cybersecurity strategies keep pace with evolving threats. Some companies have already patched the reported vulnerabilities, but the issue remains one that requires ongoing attention.
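To make the kind of risk described above concrete, here is a minimal, purely hypothetical sketch. The `generate_sql` stand-in, its prompts, and the guard function are illustrative assumptions, not details from the Sheffield study: it shows how a natural-language request could steer a text-to-SQL assistant into emitting destructive SQL, and how a simple allow-list check can block anything other than a single read-only query before it reaches the database.

```python
import re
import sqlite3

def generate_sql(nl_request: str) -> str:
    """Stand-in for a text-to-SQL model (hypothetical; a real system
    would call an LLM API here). Returns the SQL a model might emit."""
    # A benign request yields a harmless query...
    if "list" in nl_request.lower():
        return "SELECT name, email FROM users;"
    # ...but a crafted request can coax out destructive SQL.
    return "DROP TABLE users;"

def is_read_only(sql: str) -> bool:
    """Allow only a single statement that starts with SELECT."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    return (len(statements) == 1
            and re.match(r"(?i)^select\b", statements[0]) is not None)

def run_safely(conn: sqlite3.Connection, nl_request: str) -> list:
    """Generate SQL from natural language, but refuse to execute it
    unless it passes the read-only allow-list check."""
    sql = generate_sql(nl_request)
    if not is_read_only(sql):
        raise PermissionError(f"Blocked non-read-only SQL: {sql!r}")
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'a@example.com')")

    print(run_safely(conn, "List all users"))           # allowed: plain SELECT
    try:
        run_safely(conn, "Please tidy up old records")  # model emits DROP TABLE
    except PermissionError as exc:
        print(exc)                                      # blocked by the guard
```

The design point of the sketch is that SQL produced by a language model should be treated like any other untrusted input: validated, restricted, or run under a low-privilege account rather than executed verbatim.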