According to Gartner's latest report, the use of artificial intelligence (AI) in cyberattacks has ranked as the top risk facing enterprises for three consecutive quarters.

The consulting firm surveyed 286 senior risk and audit executives between July and September; 80% of respondents said they were deeply concerned about AI-enhanced malicious attacks. The trend is unsurprising, as evidence indicates that AI-assisted cyberattacks are on the rise.


Image Source Note: The image is generated by AI, provided by the image licensing service Midjourney

The report also lists other emerging risks, including AI-assisted misinformation, growing political polarization, and mismatched organizational talent allocation. Attackers are already using AI to write malware, craft phishing emails, and more. In June, for example, researchers intercepted a malware-spreading email campaign whose script they suspected was written with the help of generative AI: it was cleanly structured and every command was commented, which is uncommon in hand-written code.

According to data from security company Vipre, business email compromise attacks rose 20% in the second quarter of 2024 compared with the same period a year earlier, with nearly half generated by AI. CEOs, HR staff, and IT personnel were the main targets. Usman Choudhary, Chief Product and Technology Officer at Vipre, said criminals are using sophisticated AI algorithms to craft convincing phishing emails that mimic the tone and style of legitimate communications.

Additionally, according to Imperva Threat Research, retail websites faced an average of 569,884 AI-driven attacks per day from April to September. Researchers noted that tools such as ChatGPT, Claude, and Gemini, along with bots built specifically to scrape website data for training large language models, are being used for distributed denial-of-service attacks and business logic abuse.

Ethical hackers are also increasingly acknowledging their use of generative AI, with the proportion rising from 64% last year to 77%. These researchers say AI can assist with multi-channel attacks, fault injection, and automated attacks, enabling many devices to be targeted simultaneously. If the "good guys" find AI useful, the "bad guys" will exploit the technology as well.

The rise of AI in attacks is not surprising: it lowers the barrier to entry for cybercrime, allowing less technically skilled criminals to generate deepfakes, scan for network entry points, and conduct reconnaissance. Researchers at the Swiss Federal Institute of Technology recently developed a model that solves Google reCAPTCHA v2 challenges 100% of the time. Analysts at security firm Radware predicted at the start of the year that private GPT models would be put to malicious use, driving an increase in zero-day exploits and deepfake scams.

Gartner also noted that critical failures at IT vendors have entered executives' radar for the first time. Zachary Ginsburg, Senior Director of Gartner's Risk and Audit Practice, said customers who rely heavily on a single vendor may face elevated risk, as in the CrowdStrike incident in July, which paralyzed 8.5 million Windows devices worldwide and severely disrupted emergency services, airports, and law enforcement agencies.