The patching treadmill: Why traditional application security is no longer enough
Written by
David Gewirtz, Senior Contributing Editor, May 11, 2026 at 8:29 a.m. PT
ZDNET's key takeaways
- Continuous deployment makes old security models feel obsolete.
- Vulnerability backlogs are overwhelming development teams.
- Application security needs to move toward code creation.
For all the time I've spent exercising on treadmills, I've always found them faintly demoralizing. You thump-thump-thump over and over again, but get nowhere. It's a lot of effort. You always work up a bit of a sweat, but ultimately feel unfulfilled. This feeling is reinforced the next day, when you have to do it all over again.
In many ways, application security is like that treadmill. Once the coding is done, security teams (or customers) find flaws. Scanning tools also find flaws, often resulting in reports that seem never-ending. Coders are constantly yanked away from new development to re-learn what they wrote, locate bugs, patch them, and release fixes.
But then, like on the treadmill, the cycle repeats when new code, new dependencies, and new vulnerabilities appear. Because, of course, they will.
This frustrating process is often called the find-and-fix cycle. Security and QA teams use vulnerability scanners and penetration tests. When problems are found, as they will be, developers work from the bug reports, set up triage queues, and sometimes dedicate blocks of time to remediation sprints.
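The triage step in that cycle is, at its heart, a sorting problem. Here's a minimal sketch in Python; the findings, field names, and severity tiers are invented for illustration and don't reflect any particular scanner's report format:

```python
# Hypothetical scanner findings, ordered the way a triage queue typically is:
# highest severity first, and within a severity tier, oldest finding first.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    """Return findings in triage order: severity, then age (oldest first)."""
    return sorted(findings, key=lambda f: (SEVERITY_RANK[f["severity"]], -f["age_days"]))

findings = [
    {"severity": "medium", "age_days": 40, "title": "verbose error pages leak stack traces"},
    {"severity": "critical", "age_days": 3, "title": "SQL injection in login form"},
    {"severity": "high", "age_days": 120, "title": "outdated TLS library"},
]
for f in triage(findings):
    print(f["severity"], f["title"])
```

Real triage also weighs exploitability and business impact, but even this toy version shows why the queue never empties: sorting decides what gets fixed first, not how much gets fixed.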
Find-and-fix isn't so much a development strategy as it is a reactive response to shipping code. The hope is that security flaws (all flaws, really) can be identified and fixed after release, but before they create serious harm or before your customers show up at your door with pitchforks and torches, demanding reliable code.
Some security flaws are found so deep in older code that fixing them isn't practical. Code change after code change has been layered on an already shaky, compromised foundation. Getting to the root cause would require tearing everything apart, which would undoubtedly break even more.
That's where another time-honored but suboptimal practice, defend-and-defer, comes into play. Rather than fix deeply entrenched, vulnerable code, programmers and security teams add protective walls around it. Firewalls, runtime protections, monitoring, compensating controls, segmentation, access restrictions, and emergency mitigations all somewhat reduce exposure while the underlying application weakness remains unresolved.
But at least there's some defense in place, right? Right?
Here's the thing. Find-and-fix and defend-and-defer practices will never completely go away. No matter how good our best practices get, life will find a way. There will always be unexpected behavior. Given the non-deterministic nature of large language models, that risk is even greater in the age of AI.
Find-and-fix and defend-and-defer practices are no longer sufficient. Software development moves way too fast, especially as developers use more AI assistance to crank out new versions and new capabilities at machine speed.
Faster releases, slower fixes
It used to be the case that software delivered updates and new versions periodically. Big releases came out once a year. Updates, maybe, once a quarter. But now, with CI/CD (continuous integration/continuous deployment), the operative word is "continuous."
Every tweak, every sprint, every bug fix, every dependency update, every cloud configuration change, every new API integration, and every AI-assisted coding session can break things and introduce new security problems faster than traditional security teams can review them.
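One common way teams try to keep up with that pace is a CI gate: each scan is compared against an accepted baseline, and the build fails only when new high-severity findings appear. A hedged sketch, with made-up finding IDs and a made-up report shape (real pipelines wire this to an actual scanner's output):

```python
# Toy CI security gate: block a merge only when a scan introduces findings
# that were not in the last accepted baseline.

def new_findings(baseline_ids, current):
    """Return findings whose IDs are not in the accepted baseline."""
    return [f for f in current if f["id"] not in baseline_ids]

def gate(baseline_ids, current, blocking=frozenset({"critical", "high"})):
    """True if the build may proceed (no new blocking-severity findings)."""
    return not any(f["severity"] in blocking for f in new_findings(baseline_ids, current))

baseline = {"CVE-2023-0001", "CVE-2024-1111"}
scan = [
    {"id": "CVE-2023-0001", "severity": "high"},      # known, already triaged
    {"id": "CVE-2026-9999", "severity": "critical"},  # new: should block
]
print(gate(baseline, scan))  # a new critical finding blocks the merge
```

The baseline keeps old, accepted debt from blocking every build, which is exactly how backlogs quietly become permanent.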
And that's before we even consider mitigation. When security teams review code, whether AI-assisted or not, they often turn up hundreds or thousands of problems that need fixing. Problems are being found faster than developers can realistically fix them.
Worse, most fixes take developers away from innovation and new code development, resulting in a painful and productivity-killing context switch. That's why most software has a queue of unresolved problems and vulnerabilities that regularly need to be prioritized, re-prioritized, accepted, deferred, or ignored.
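The backlog dynamic is simple arithmetic: when findings arrive faster than a team can fix them, the queue grows without bound. A toy model with purely illustrative rates:

```python
def backlog_over_time(weeks, found_per_week, fixed_per_week, start=0):
    """Track an unresolved-vulnerability queue week by week."""
    backlog, history = start, []
    for _ in range(weeks):
        backlog = max(0, backlog + found_per_week - fixed_per_week)
        history.append(backlog)
    return history

# Illustrative rates only: 25 new findings a week against capacity to fix 15.
print(backlog_over_time(4, found_per_week=25, fixed_per_week=15))
# → [10, 20, 30, 40]: the queue grows by the 10-finding shortfall every week
```

No amount of re-prioritization changes the slope of that line; only raising fix capacity or lowering the inflow of defects does.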
According to security platform provider Edgescan, network issues take an average of 54 days to fix. Web apps take almost 75 days to fix. The problem is worse at big companies. According to Edgescan's analysis, 45% of large-company vulnerabilities remain unfixed after a full year.
This situation is not good. The software might create issues for users. The vulnerabilities could be exploited by attackers, bots, and criminal groups. Known but unpatched vulnerabilities are so popular that information about them is sold to others wishing to break into systems.
When it comes to breaches, Verizon's 2025 Data Breach Investigations Report determined that 20% of threat actors gained initial access to systems through code vulnerabilities, a 34% increase over the previous year's report. The other two primary access methods were credential abuse (22%) and phishing attacks (16%).
In other words, patching vulnerabilities might have blocked 20% of all breach attempts, but patching at that scale is not that simple.
Here's another stat that reinforces the problem. Security analytics company VulnCheck reported that, "32.1% of KEVs had exploitation evidence on or before the day the CVE was issued, an increase from 23.6% in 2024."
To unpack the jargon: KEV stands for known exploited vulnerability, and a CVE (common vulnerabilities and exposures entry) is the mechanism typically used to announce a vulnerability and track its resolution.
In short, nearly a third of these vulnerabilities were already in bad actors' hands and being actively exploited before the developers who could fix them even found out they existed.
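Measuring that gap comes down to comparing two dates per vulnerability: when exploitation was first observed versus when the CVE was published. A sketch with made-up records (VulnCheck's real dataset is, of course, far larger):

```python
from datetime import date

# Made-up records for illustration only.
records = [
    {"cve_published": date(2025, 3, 1), "first_exploited": date(2025, 2, 20)},
    {"cve_published": date(2025, 5, 10), "first_exploited": date(2025, 5, 10)},
    {"cve_published": date(2025, 7, 4),  "first_exploited": date(2025, 8, 1)},
]

def exploited_by_disclosure(records):
    """Share of vulnerabilities with exploitation evidence on or before
    the day the CVE was published."""
    hits = sum(r["first_exploited"] <= r["cve_published"] for r in records)
    return hits / len(records)

print(round(exploited_by_disclosure(records), 2))  # → 0.67
```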
We can't just patch faster
Unfortunately, we can't just demand that developers patch code faster or more productively. Beyond the physical limits of human coders, and even the impressive but still bounded performance of our AI overlords, there are structural concerns.
Enterprise systems have dependencies, uptime requirements, change-control boards, regulatory constraints, customer commitments, fragile integrations, and teams that may not own the vulnerable code.
Smaller systems may depend on components or elements out of their control. For example, I woke up one morning this week to find that five of my legacy websites were no longer functioning. Those sites had been working perfectly. They had been unmodified for at least seven years.
The hosting operator changed a version of a critical software system without warning, and some of my custom code stopped functioning. It took me a few days to get back up to speed on what my code did, then track down and fix it. And that was with the help of OpenAI Codex.
Then there's the issue of prioritization fatigue. When every vulnerability comes in as critical, it's as if nothing is. Did you ever have a day where you prioritized your to-do list, only to realize you had 30 top-priority tasks? I see you nodding your head. At that point, it's just overwhelming, and no issue stands out.
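One antidote to prioritization fatigue is a composite risk score that uses exploitation evidence and reachability to thin the "everything is critical" pile. The weights below are illustrative assumptions, not an industry standard:

```python
def risk_score(finding):
    """Adjust a raw CVSS score by real-world signals. CVSS alone makes
    everything look urgent; context separates the queue."""
    score = finding["cvss"]
    if finding["known_exploited"]:
        score *= 2            # active exploitation in the wild dominates
    if not finding["reachable"]:
        score *= 0.2          # the vulnerable code path never runs in production
    return score

findings = [
    {"name": "A", "cvss": 9.8, "known_exploited": False, "reachable": False},
    {"name": "B", "cvss": 7.5, "known_exploited": True,  "reachable": True},
    {"name": "C", "cvss": 9.1, "known_exploited": False, "reachable": True},
]
top = sorted(findings, key=risk_score, reverse=True)[:2]
print([f["name"] for f in top])  # → ['B', 'C']: the 9.8 drops out of the top tier
```

Note that the nominally scariest finding (A, a 9.8) falls behind a 7.5 that attackers are actually exploiting, which is the whole point of scoring in context.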
Even AI-driven vulnerability scans won't help you deal with the challenge. Super tools, like Anthropic Mythos, or even more accessible tools, such as Claude Security or Codex Security, can't really solve the problem. A dashboard full of findings can create the appearance of control, while the underlying engineering practices continue to produce the same defect categories.
It's at this point that IT operators often try the defend-and-defer approach using tools like network or application firewalls, intrusion detection and prevention systems, endpoint detection and response, network segmentation, rate limiting, logging and monitoring, runtime application self-protection, or even virtual patching.
These "compensating controls" are sometimes essential, but they can become a permanent substitute for fixing root causes. This practice is dangerous because surrounding weak software with a scaffold of security tooling doesn't solve the underlying problem: weak code.
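To make the virtual-patching idea concrete, here's a toy filter that sits in front of vulnerable code and blocks a known exploit pattern without touching the code itself. The pattern and request shape are simplifications; real web application firewalls are far more thorough:

```python
import re

# Signatures for a few classic attacks: path traversal, script injection,
# and SQL injection. Illustrative only.
BLOCKED = re.compile(r"(\.\./|<script|union\s+select)", re.IGNORECASE)

def virtual_patch(handler):
    """Wrap a vulnerable handler with a request filter."""
    def guarded(request):
        if BLOCKED.search(request.get("query", "")):
            return {"status": 403, "body": "blocked by virtual patch"}
        return handler(request)
    return guarded

@virtual_patch
def legacy_endpoint(request):
    # The unfixed root cause still lives here, untouched.
    return {"status": 200, "body": f"echo: {request['query']}"}

print(legacy_endpoint({"query": "hello"})["status"])                    # → 200
print(legacy_endpoint({"query": "1 UNION SELECT password"})["status"])  # → 403
```

The filter buys time, but any exploit variant the signature doesn't anticipate sails straight through to the weak code, which is why this is a stopgap and not a fix.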
Patching after the fact isn't just insecure, it's really expensive. Yes, it's sometimes necessary (like when, a decade after I wrote a line of code using the standards at the time, a much later OS release broke it). But coding defensively, and making fixes while the original code is being developed, is far less time-consuming and painful than identifying, triaging, patching, validating, deploying, and monitoring fixes way after release.
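Here's what secure-by-construction looks like in miniature: a parameterized database query, where injection is impossible by design rather than patched after the fact. This uses Python's standard sqlite3 module with a throwaway in-memory database:

```python
import sqlite3

# Defensive coding at creation time: the driver binds user input as data,
# so it can never become SQL syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    """Look up a user with a parameterized query (the ? placeholder)."""
    cur = conn.execute("SELECT role FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user("alice"))             # → [('admin',)]
print(find_user("alice' OR '1'='1"))  # → []: the injection attempt is inert
```

Compare that to string-concatenated SQL, which would need to be found, triaged, patched, and redeployed once someone noticed the hole.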
Modern development changed the risk equation
It's hard to pin down exactly when "modern development" practices started, because everyone has a different perspective. But it's fair to say that development lifecycles changed when we went from shipping updates on disk to building cloud-centric services. Then the practice changed again in the past few years when AI-assisted development became a transformative force.
The fact is, our approach to software development is different from the time when find-and-fix was the way of the world. Application risk now pervades the whole software lifecycle: design choices, coding practices, dependency selection, secrets handling, identity controls, build pipelines, deployment configurations, and runtime exposure.
As I've been discussing for the past year, AI has radically accelerated release schedules and collapsed development timelines. Unfortunately, that speed can widen the gap between code creation and security review. If nothing else, the volume of code produced has grown as the time needed to create it has collapsed.
Testing time, on the other hand, has not shrunk to match. I've been working on a Mac app in Claude Code for about four months. The actual code-writing takes about 20 minutes each session. But because my code uses on-device AI for sophisticated document parsing, testing takes hours each session.
My coding time has collapsed to a mere rounding error, but the testing time now takes the bulk of my development time. Still, without having AI for the initial code-writing process, I probably wouldn't have time to finish this project, whenever that happens.
The key problem is that AI-generated code is not necessarily secure code. Developer security company Snyk reported that 56.4% of developers frequently encountered security issues in AI-generated code, while 80% ignored or bypassed organizational AI code-security policies.
Changing where application security begins
In this article, we've looked at what happens as software production accelerates, but security remains a downstream problem: the treadmill speeds up. More code means more problems, which are found faster than developers can go back and make fixes.
To be clear, we will never be able to abandon find-and-fix or defend-and-defer practices. Stuff happens. We'll always need to employ scanning, patching, monitoring, and runtime defense to some degree. But these practices should be relegated to a second-tier safety net rather than serving as the primary line of defense.
Do vulnerability backlogs feel like a manageable process or an endless queue in your organization? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.