In today's rapidly advancing technological landscape, artificial intelligence, particularly Large Language Models (LLMs), is gradually becoming the focal point. However, the cybersecurity laws in the United States appear to be lagging behind this fast-evolving field. Recently, a group of scholars from Harvard University pointed out at the Black Hat conference that the current Computer Fraud and Abuse Act (CFAA) does not effectively protect those engaged in AI security research and may even expose them to legal risks.
Image source: Generated by AI, image licensed by Midjourney
These scholars include Kendra Albert from Harvard Law School, Ram Shankar Siva Kumar, and Jonathon Penney. Albert mentioned in an interview that existing laws do not clearly define actions like "prompt injection attacks," making it difficult for researchers to determine whether their actions are illegal. She noted that while accessing models without permission is clearly illegal, it becomes ambiguous whether using these models in unintended ways, after obtaining permission, constitutes a violation.
In 2021, the U.S. Supreme Court's ruling in Van Buren v. United States changed the interpretation of the CFAA, stipulating that the act only applies to those who access internal computer information without authorization. This ruling makes sense in traditional computer systems but is inadequate when it comes to Large Language Models. Albert pointed out that the use of natural language to interact with AI makes this legal definition more complex, as AI responses often do not equate to retrieving information from a database.
Meanwhile, Kumar also mentioned that discussions on the legality of AI security research are far less emphasized than issues like copyright, and he is unsure whether he would be protected when conducting certain attack tests. Albert expressed that given the current legal uncertainties, the issue might be clarified through court litigation in the future, but for now, it leaves many "well-intentioned" researchers feeling lost.
In this legal environment, Albert recommends that security researchers seek legal support to ensure their actions do not violate the law. She also worries that vague legal provisions could deter potential researchers, allowing malicious attackers to go unpunished and create greater security risks.
Key Points:
🛡️ The U.S. Computer Fraud and Abuse Act does not adequately protect AI security researchers and may expose them to legal risks.
💡 Current laws lack clear definitions for actions like prompt injection attacks, making it difficult for researchers to judge legality.
⚖️ Scholars believe that future court litigation may be necessary to clarify relevant legal provisions and protect well-intentioned researchers.