Researchers at North Carolina State University recently proposed a new method for extracting artificial intelligence (AI) models by capturing the electromagnetic signals emitted by the hardware running them, reporting an extraction accuracy of over 99%. The finding could pose challenges for commercial AI development, especially given the heavy investments companies like OpenAI, Anthropic, and Google have made in proprietary models. However, experts note that the real-world implications of the technique, and the defenses it calls for, remain unclear.
Lars Nyman, Chief Marketing Officer of CUDO Compute, said that AI theft is not just the loss of the model itself; it can set off a chain reaction: competitors piggybacking on years of R&D, regulators investigating the mishandling of intellectual property, and even customers suing once they discover that their AI's "uniqueness" is not unique at all. Such a scenario could push the industry toward standardized audits, similar to SOC 2 or ISO certifications, to distinguish responsible companies from careless ones.
The threat of hacking attacks on AI models has grown in recent years, and the business world's dependence on AI has made the problem more acute. Recent reports show that thousands of malicious files have been uploaded to Hugging Face, a key repository for AI models, seriously jeopardizing models used in industries such as retail, logistics, and finance. National security experts warn that weak security measures can expose proprietary systems to theft, as OpenAI's own security lapses have shown. Stolen AI models can be reverse-engineered or sold, undermining corporate investments and eroding trust while letting competitors catch up quickly.
The North Carolina State University team extracted key information about model structures by placing a probe near a Google Edge Tensor Processing Unit (TPU) and analyzing the electromagnetic signals it emitted. The attack does not require direct access to the system, which poses a significant risk to AI intellectual property. Aydin Aysu, co-author of the study and an associate professor of electrical and computer engineering, emphasized that building an AI model is expensive and demands enormous computational resources, which makes preventing model theft crucial.
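The researchers have not released their pipeline as code, but the general shape of an electromagnetic side-channel attack can be sketched: each layer configuration running on the accelerator is assumed to leave a characteristic trace, and an observed trace is matched against a library of known signatures. The Python below is a minimal, self-contained toy with synthetic traces; the layer names, signal model, and correlation-based matcher are all illustrative assumptions, not the study's actual method.

```python
import numpy as np

# Toy model of EM side-channel analysis: each layer kind is assumed to
# emit a trace with a distinct frequency signature. This is a sketch,
# not the NC State team's pipeline.
rng = np.random.default_rng(0)

def simulated_trace(layer_kind, n=256):
    """Synthetic EM trace: a layer-specific sine wave plus noise."""
    freq = {"conv3x3": 5, "conv1x1": 11, "dense": 23}[layer_kind]
    t = np.linspace(0.0, 1.0, n)
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(n)

# Reference library: average many traces per layer kind so the noise
# cancels and the characteristic signature remains.
library = {
    kind: np.mean([simulated_trace(kind) for _ in range(50)], axis=0)
    for kind in ("conv3x3", "conv1x1", "dense")
}

def classify(trace):
    """Match an observed trace to the library entry with the highest
    normalized cross-correlation."""
    return max(library, key=lambda kind: np.corrcoef(trace, library[kind])[0, 1])

# An attacker observing one trace per layer could recover the sequence
# of layer types, i.e. the architecture, layer by layer.
observed = [simulated_trace(kind) for kind in ("conv3x3", "dense", "conv1x1")]
print([classify(trace) for trace in observed])
# expected: ['conv3x3', 'dense', 'conv1x1']
```

The point of the sketch is the workflow, not the signal model: build a library of signatures from hardware you control, then classify emissions from a victim device one layer at a time.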
As AI technology becomes increasingly widespread, companies may need to reassess the hardware they use for AI processing. Technology consultant Suriel Arellano believes businesses may shift toward more centralized, secure computing, or consider alternative technologies that are harder to steal from. Despite the theft risk, AI is also strengthening cybersecurity: by automating threat detection and data analysis, it speeds up response, helps surface potential threats, and learns to counter new attacks, as the toy sketch below illustrates.
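As a concrete, deliberately simplified example of the automated detection the article alludes to, the sketch below flags traffic spikes with a robust z-score over request counts. The data, threshold, and feature choice are hypothetical; production systems apply learned models to far richer telemetry.

```python
import numpy as np

# Toy anomaly detector: flag minutes whose request volume deviates
# sharply from a robust baseline (median plus MAD-scaled z-score).
rng = np.random.default_rng(1)
requests_per_minute = rng.poisson(lam=120, size=60).astype(float)
requests_per_minute[45] = 900.0  # injected spike simulating an attack burst

baseline = np.median(requests_per_minute)
mad = np.median(np.abs(requests_per_minute - baseline))  # robust spread
z = (requests_per_minute - baseline) / (1.4826 * mad + 1e-9)

# Only the injected spike exceeds the threshold.
print("anomalous minutes:", np.flatnonzero(np.abs(z) > 6.0))  # -> [45]
```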
Key Points:
🔍 Researchers demonstrated a method for extracting AI models by capturing electromagnetic signals, achieving over 99% accuracy.
💼 Theft of AI models could allow competitors to exploit years of R&D efforts, impacting business security.
🔒 Companies need to strengthen the security of AI models to address the increasing threat of hacking attacks.