In recent years, the rapid development of artificial intelligence (AI) has brought businesses numerous opportunities, but its potential threats are becoming increasingly apparent. According to the 2024 "Next-Generation Risk Report," a staggering 80% of surveyed companies have not yet developed specific plans to address generative AI risks, including security concerns such as AI-driven cyber scams.
The survey, conducted by risk management software company Riskonnect, polled 218 risk, compliance, and resilience professionals worldwide. The results show that 24% of respondents believe AI-driven cybersecurity threats (such as ransomware, phishing, and deepfakes) will have a significant impact on businesses within the next 12 months. Meanwhile, 72% of respondents said cybersecurity risks have had a significant or severe impact on their organizations, up from 47% a year earlier.
As concerns over AI ethics, privacy, and security intensify, the report notes that although companies are growing more worried about AI, their risk management strategies have not kept pace, leaving critical gaps. For example, 65% of companies have no policy governing the use of generative AI by partners and suppliers, even though third parties are a common entry point for cyber scammers.
Internal threats are also significant. Consider companies that use generative AI to create marketing content: marketing expert Anthony Miyazaki warns that while generative AI excels at writing text, the final copy still needs human editing to ensure it is persuasive and accurate. Relying on AI to generate website content carries further risks; Google, for example, has stated explicitly that using AI content to manipulate search results will lead to lower rankings, which would severely damage a company's search engine optimization (SEO).
To address these challenges, companies must put comprehensive internal policies in place to safeguard sensitive data and comply with regulations. John Scimone, Chief Security Officer at Dell Technologies, says the company established governance principles before the generative AI boom to ensure its AI applications are developed fairly, transparently, and responsibly.
At digital marketing agency Empathy First Media, Vice President Ryan Doser likewise points to the strict measures the company applies to employees' use of AI, including a ban on entering clients' sensitive data into generative AI tools and a requirement that humans review all AI-generated content. These measures are intended to increase transparency and build client trust.
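As a concrete illustration of the kind of control Doser describes, the sketch below shows one way a company might enforce a "no client-sensitive data in prompts" rule: a small pre-submission guardrail that scans outgoing prompts for sensitive patterns and blocks any that match. This is a minimal sketch under assumed requirements, not Empathy First Media's actual tooling; the pattern list, function names, and the placeholder for the AI API call are all hypothetical.

```python
import re

# Hypothetical patterns for sensitive data; a real policy would use an
# organization-specific list (client names, account formats, API keys, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def find_sensitive_data(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]


def guarded_submit(prompt: str) -> str:
    """Refuse to forward a prompt that appears to contain sensitive data."""
    hits = find_sensitive_data(prompt)
    if hits:
        raise ValueError(f"Prompt blocked, possible sensitive data: {', '.join(hits)}")
    # Placeholder for the real call to a generative AI service.
    return f"[submitted to AI tool] {prompt[:40]}..."


if __name__ == "__main__":
    try:
        guarded_submit("Draft a renewal email for jane.doe@client.example.com.")
    except ValueError as err:
        print(err)  # Prompt blocked, possible sensitive data: email address
```

A regex filter like this is only a first line of defense; it complements, rather than replaces, the human review step Doser mentions.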
Key Points:
🌐 80% of companies have no specific plan for generative AI risks, leaving them exposed to potential security vulnerabilities.
🔒 72% of companies say cybersecurity risks have had a significant or severe impact on their organizations, underscoring the need for stronger risk management.
📈 Companies should take proactive measures to ensure their AI applications are secure and compliant, guarding against both internal and external threats.