Study Reveals Most Companies Deploy Insecure Software
A recent study titled Future of Application Security in the Era of AI, published by the Application Security Institute on 14 August 2025, has shed light on concerning trends in the security of AI-generated code. The survey, conducted among professionals in Europe, North America, and Asia-Pacific, reveals that the growing adoption of AI coding assistants is eroding developer ownership and expanding the attack surface.
According to the study, AI-generated code often contains known vulnerabilities by default. A staggering 81% of organisations knowingly ship vulnerable code, a finding that underscores the urgent need for improved security practices. Yet fewer than half of respondents deploy foundational application security tools such as dynamic application security testing (DAST) or infrastructure-as-code (IaC) scanning.
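The report does not include code samples, but a minimal, hypothetical sketch helps illustrate the sort of known-vulnerable pattern that can ship "by default" in generated code, and that application security tooling is designed to catch. The function names and schema below are invented for illustration; the vulnerable variant is a textbook SQL-injection pattern, shown alongside its parameterised fix.

```python
import sqlite3

# Hypothetical illustration (not taken from the report): string-formatted
# SQL is a pattern AI assistants can emit by default, and exactly the kind
# of known vulnerability application security scanners are meant to flag.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is interpolated into the SQL text.
    return conn.execute(
        f"SELECT id, username FROM users WHERE username = '{username}'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Fixed: a parameterised query keeps user data out of the SQL grammar.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")
    payload = "' OR '1'='1"                 # classic injection payload
    print(find_user_unsafe(conn, payload))  # returns every row
    print(find_user_safe(conn, payload))    # returns nothing
```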
The study also found that only about half of the organisations surveyed actively use core DevSecOps tools: in North America, just 51% report adopting DevSecOps, and adoption in Europe is lower still. The regions also differ in release practices, with 32% of European respondents saying their organisation often deploys code with known vulnerabilities, compared with 24% in North America.
Eran Kinsbruner, vice president of portfolio marketing at Checkmarx, said that, given the velocity of AI-assisted development, security needs to be embedded from code to cloud. He also emphasised the need to establish policies for AI usage, along with governance around AI coding tools, an area where organisations currently lag.
The report encourages organisations to operationalise security tooling that focuses on prevention. To this end, Checkmarx recently announced the general availability of its Developer Assist agent, with extensions for leading AI-native integrated development environments (IDEs) including Windsurf by Cognition, Cursor, and GitHub Copilot. Agentic AI can automatically analyse and fix issues in real time, offering a proactive approach to security.
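Checkmarx has not published the internals of Developer Assist, so the following is purely a hypothetical sketch of the general pattern such agents rely on: a lightweight analysis pass that walks freshly written code and flags risky constructs for an automated fix. The flag_fstring_sql helper and its single heuristic are assumptions for illustration, not the product's actual logic.

```python
import ast

# Hypothetical sketch of a real-time analysis pass: flag any f-string
# passed directly to a .execute() call, a common SQL-injection smell.
def flag_fstring_sql(source: str) -> list[int]:
    findings: list[int] = []
    for node in ast.walk(ast.parse(source)):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Attribute)
            and node.func.attr == "execute"
            and node.args
            and isinstance(node.args[0], ast.JoinedStr)  # f-string literal
        ):
            findings.append(node.lineno)
    return findings

if __name__ == "__main__":
    sample = "cur.execute(f\"SELECT * FROM users WHERE name = '{name}'\")"
    print(flag_fstring_sql(sample))  # -> [1]
```

A production agent would pair findings like these with an automated rewrite, such as converting the f-string to a parameterised query, but the detection step shown here is the core of a scan-as-you-type loop.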
The growing adoption of AI coding assistants underlies both concerns: eroding developer ownership and an expanding attack surface. Thirty-four percent of respondents admit that more than 60% of their code is AI-generated, a share expected to rise in the coming years. In addition, 32% of respondents expect application programming interface (API) breaches via shadow APIs or business-logic attacks within the next 12 to 18 months.
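The report does not define the term, but a shadow API is generally an endpoint running in production that is missing from the organisation's API inventory, so it is never scanned, documented, or access-controlled. The hypothetical Flask sketch below (routes and names invented for illustration; requires the flask package) shows how easily one arises.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Documented, inventoried endpoint that security tooling knows about.
@app.route("/api/v1/orders/<int:order_id>")
def get_order(order_id: int):
    return jsonify({"order_id": order_id, "status": "shipped"})

# A "shadow API": a debug route left in production but absent from the
# API inventory, so it escapes scanning, rate limiting, and auth checks.
@app.route("/internal/debug/orders")
def dump_orders():
    return jsonify({"orders": "every order, with no auth check"})

if __name__ == "__main__":
    app.run()
```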
Kinsbruner predicts that secure software will be the competitive differentiator in the coming years. As organisations continue to adopt AI-assisted development, it is crucial that they prioritise security and take proactive measures to protect their code from known vulnerabilities. The study serves as a call to action for organisations to establish policies and governance around AI coding tools, and to operationalise security tooling that focuses on prevention.