As the technology behind autonomous vehicles continues to mature, artificial intelligence plays a crucial role in decision-making, environmental perception, and predictive modeling. However, a recent study conducted at the University at Buffalo has raised concerns, revealing potential vulnerabilities in these AI systems that could be exploited by attackers.


Chunming Qiao, a SUNY Distinguished Professor in the Department of Computer Science and Engineering and the lead researcher of the study, noted that because the research was conducted in a controlled environment, it does not imply that existing autonomous vehicles are unsafe. Nevertheless, the findings could have profound implications for the automotive, technology, insurance, and regulatory policy sectors.

Over the past three years, the research team has conducted numerous tests on autonomous vehicles at the University at Buffalo, focusing primarily on the vulnerabilities of LiDAR, millimeter-wave radar, and cameras. Researcher Yi Zhu noted that millimeter-wave radar can detect objects more reliably than many cameras in poor conditions such as rain, fog, and low light, but it is also susceptible to hacking.

Using 3D printing and metal foil, the researchers fabricated a specific geometric object called a "tile mask," which, when placed on a vehicle, could effectively make it "disappear" from radar detection. This work provided strong evidence of the vulnerability of the AI models used in radar-based object detection.
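To make the general idea concrete, the sketch below is a purely illustrative, white-box "disappearance" attack on a detection model, not the researchers' actual method (their attack uses a physically fabricated passive reflector). In this simplified digital analogy, an attacker with access to the model optimizes a bounded perturbation of the radar input until the detector's confidence for the target object drops toward zero. The `ToyRadarDetector` class, the feature size, and the bound `epsilon` are hypothetical placeholders, not details from the study.

```python
# Conceptual sketch only: a toy, white-box "disappearance" attack against a
# stand-in detector. It illustrates the principle of optimizing an input
# perturbation until the detector no longer reports the target object.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyRadarDetector(nn.Module):
    """Hypothetical stand-in: maps a flattened radar feature vector to an
    objectness score in [0, 1]. Real radar detectors are far more complex."""
    def __init__(self, n_features: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

detector = ToyRadarDetector()
radar_features = torch.rand(1, 256)           # stand-in radar return for one vehicle

# The attacker optimizes a small, bounded perturbation (loosely analogous to
# the effect of a crafted physical object on radar returns) that suppresses
# the detector's confidence for the vehicle.
delta = torch.zeros_like(radar_features, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.05)
epsilon = 0.2                                  # bound on how much the returns may change

for step in range(200):
    optimizer.zero_grad()
    score = detector(radar_features + delta)   # confidence with the perturbation applied
    loss = score.mean()                        # "disappearance" objective: push score to 0
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)        # keep the perturbation within the bound

print(f"score before attack: {detector(radar_features).item():.3f}")
print(f"score after attack:  {detector(radar_features + delta).item():.3f}")
```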

Yi Zhu also pointed out that attackers could secretly attach malicious objects to a vehicle before a journey begins, or place them in a pedestrian's backpack, causing the radar's AI to fail to detect that person. The motives for such attacks could include insurance fraud, competition among autonomous vehicle companies, or a personal intent to harm others.

However, the researchers also emphasized that these simulated attacks assume the attacker has a thorough understanding of the vehicle's radar object detection system, knowledge that is not widely available. Therefore, while the safety of autonomous vehicles is a concern, public safety awareness and technical protective measures are equally important.

In the future, the research team hopes to further investigate the security of other sensors and develop effective defense measures to counter these potential attacks.

Key Points:

🔍 The study found that AI systems in autonomous vehicles are at risk of malicious attacks that could make vehicles disappear from radar detection.

🛡️ Attackers can interfere with an AI system's judgments by placing specially crafted objects on vehicles or in items carried by pedestrians.

🚗 The research team aims to delve deeper into the security of other sensors and devise effective protection strategies in the future.