On January 1, 2025, a shocking incident occurred in Las Vegas: a man detonated a Tesla Cybertruck outside the Trump International Hotel. Following an investigation, the Las Vegas Metropolitan Police Department revealed that the man had used the artificial intelligence chat tool ChatGPT to plan the explosion.

Image Note: Illustration of a fire explosion generated by AI, licensed from service provider Midjourney

During a press conference, police stated that the man, Matthew Livelsberger, asked ChatGPT more than 17 questions in the days leading up to the incident. These questions covered how to obtain materials for the explosion, related legal issues, and how firearms could be used to detonate the explosives he had chosen. Livelsberger interacted with ChatGPT for about an hour, discussing topics such as the legality of fireworks in Arizona, where to buy guns in Denver, and what type of firearm could effectively detonate explosives.

Assistant Sheriff Dori Koren confirmed that ChatGPT's responses played a crucial role in the execution of the plan. ChatGPT provided information about the firing speed of firearms, which helped Livelsberger carry out his plan. Although the final explosion was less powerful than he had anticipated, with some of the explosives failing to ignite, the incident still shocked law enforcement agencies.

Las Vegas Metropolitan Police Sheriff Kevin McMahill stated, "We have long known that artificial intelligence would change our lives at some point, but this is the first time I have seen someone use ChatGPT to construct such a dangerous plan." He noted that there is currently no government oversight mechanism that flags queries related to explosives and firearms.

While the Las Vegas police have not disclosed the full list of questions put to ChatGPT, the ones presented at the press conference were relatively simple and did not use traditional "jailbreak" phrasing. Notably, this manner of use clearly violates OpenAI's usage policies and terms, but it remains unclear whether OpenAI's safety measures took effect when Livelsberger used the tool.

In response, OpenAI stated that it is committed to ensuring its tools are used "responsibly" and that its AI tools are designed to refuse harmful instructions. OpenAI further explained, "In this incident, ChatGPT merely responded based on publicly available information on the internet and also provided warnings against harmful or illegal activities. We are continuously working to make AI smarter and more responsible." The company is cooperating with law enforcement to support the investigation.

Key Points:

🔍 The incident occurred on January 1, 2025, when a man detonated a Tesla Cybertruck outside the Trump International Hotel in Las Vegas.

💡 The man used ChatGPT for about an hour of planning before the explosion, asking questions about obtaining explosives and firearms.

⚠️ Authorities said this is the first known instance in the U.S. of an individual using ChatGPT to plan such a dangerous act, and no government oversight mechanism currently exists to flag such queries.