As one of the fastest-growing forms of adversarial AI, deepfakes are expected to drive fraud losses from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%. Deloitte predicts a surge in deepfakes over the coming years, with banks and financial services firms as primary targets.

Deepfakes represent the cutting edge of adversarial AI attacks, growing 3,000% in the last year alone. Deepfake incidents are projected to rise 50% to 60% in 2024, reaching an estimated 140,000 to 150,000 incidents worldwide this year.

The latest generation of generative AI apps, tools, and platforms gives attackers everything they need to create deepfake videos, clone voices, and produce fraudulent documents quickly and at low cost. Pindrop's 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers is causing roughly $5 billion in losses annually. The report underscores how severe a threat deepfake technology poses to banks and financial services.

Bloomberg reported last year that "an entire cottage industry has emerged on the dark web, selling scam software at prices ranging from $20 to thousands of dollars." A recent infographic based on Sumsub's 2023 Identity Fraud Report offers a global view of the rapid growth of AI-driven fraud.


One in three companies has no strategy for addressing the risks of adversarial AI attacks, which are most likely to begin with deepfakes of their key executives. Ivanti's latest research finds that 30% of enterprises have no plans for identifying and defending against such attacks.

Ivanti's 2024 State of Cybersecurity Report found that 74% of companies surveyed are already seeing evidence of AI-powered threats, and the vast majority (89%) believe such threats are just getting started. Of the CISOs, CIOs, and IT leaders Ivanti interviewed, 60% fear their companies are unprepared to defend against AI-powered threats and attacks. Attackers are increasingly using deepfakes as part of sophisticated strategies that also span phishing, software exploits, ransomware, and API-related vulnerabilities. This aligns with security professionals' expectation that a new generation of AI will make threats more dangerous.
