A recent study finds that 63% of global security leaders are considering banning the use of AI-generated code by their development teams. The finding has drawn wide attention in the tech industry: while AI adoption is expected to relieve developers of mundane tasks, security experts remain wary of its use.
According to a survey by Venafi, 92% of decision-makers are concerned about the use of AI-generated code within their organizations, primarily over output quality. They believe AI models may be trained on outdated open-source libraries, producing substandard code, and that developers could become overly reliant on these tools, leading to a decline in code quality.
Security leaders generally believe that AI-generated code is not quality-checked as rigorously as code written by humans. They worry that developers feel less accountable for a model's output and therefore review it less thoroughly. Tariq Shaukat, CEO of Sonar, notes that a growing number of companies are experiencing downtime and security incidents after deploying AI-generated code, largely due to inadequate code review.
The Venafi report shows that despite these concerns, 83% of organizations still use AI for code development, with over half incorporating it into regular practice. Even as many companies weigh a ban on AI-assisted coding, 72% of security leaders believe it must be allowed for their organizations to stay competitive. Gartner predicts that by 2028, 90% of enterprise software engineers will use AI code assistants, boosting productivity.
The report also indicates that about 66% of respondents cannot effectively monitor the secure use of AI within their organizations due to a lack of visibility. Security leaders worry about the vulnerabilities this creates: 59% say they feel anxious about it, and nearly 80% believe the proliferation of AI-generated code will lead to a surge in security issues, forcing companies to reevaluate their management strategies.
Key Points:
🔍 63% of global security leaders are considering banning AI-generated code, primarily over output quality and security concerns.
⚠️ 92% of decision-makers are concerned about the use of AI code, believing that developers do not review AI-generated code thoroughly enough.
📈 Despite significant security concerns, 83% of organizations continue to use AI for code development to stay competitive.