In the technology world, a "cat-and-mouse game" is unfolding between developers and management. At its center are AI coding tools that companies explicitly prohibit yet developers continue to use in secret.

According to a recent global study by cloud security company Checkmarx, although 15% of companies explicitly ban AI coding tools, nearly all development teams (99%) use them anyway. The finding highlights how difficult it is to control the use of generative AI.

Only 29% of companies have established any form of governance for generative AI tools. In 70% of cases there is no unified strategy, and purchasing decisions are made ad hoc by individual departments. This leaves management with little real control over how AI coding tools are used.


As AI coding tools become more prevalent, security concerns are growing alongside them. 80% of respondents worry about the potential threats posed by developers' use of AI, and 60% specifically cite AI "hallucinations", where a model confidently generates code that is wrong or references packages and APIs that do not exist.

Despite these concerns, interest in AI's potential remains strong: 47% of respondents are open to letting AI make unsupervised changes to code, and only 6% said they would not trust AI security measures in their software environments.

Tzruya of Checkmarx stated: "These responses from global CISOs reveal a reality in which developers are using AI for application development even though AI cannot be relied upon to create secure code, meaning security teams are having to contend with a flood of new, potentially vulnerable code."

Microsoft's recent Work Trend Index report found similar behavior: many employees use their own AI tools at work without the company providing them. They typically do not discuss this use openly, which hinders the systematic integration of generative AI into business processes.

To recap the key findings: despite explicit bans, 99% of development teams still use AI tools to generate code; only 29% of companies have established governance mechanisms for generative AI; in 70% of cases, decisions about AI tool use are made ad hoc by individual departments; and 47% of respondents are open to allowing AI to make unsupervised code changes. Meanwhile, security teams face a growing volume of potentially vulnerable AI-generated code.

This "cat-and-mouse game" between developers and management continues, and the future of AI coding tools remains to be seen.