Beyond the cleanup job: Redefining application security for the modern enterprise

Part of a ZDNET Special Feature: Software Strategies to Transform Cybersecurity Outcomes

Secure-by-design is no longer just a developer concern. Enterprise leaders must treat application security as a board-level responsibility, with accountability, incentives, and customer risk reduction built in.

Written by David Gewirtz, Senior Contributing Editor
May 11, 2026 at 8:24 a.m. PT

ZDNET's key takeaways

  • App security needs board-level accountability.
  • Culture can make or break secure-by-design work.
  • An operating model turns prevention into practice.

Businesses are focusing on software strategies that transform cybersecurity outcomes. The challenge is to bake security in early in the development cycle and to build the tools and techniques that catch bugs and vulnerabilities before they become monsters. In this article, we consider the transition from reactive to preventive as a cultural mandate, and how leadership must elevate security from a post-launch fix-it approach to a pre-launch design-in strategy.

Traditional application security finds and patches flaws, usually post-release. Secure-at-the-source is a strategic approach that tries to prevent issues from ever existing. But there's more to the approach than that, especially at the enterprise level. To make this strategy a mandate across the organization, prevention needs to be a funded, managed, repeatable operating model.

Software security as a leadership responsibility

This is where software security moves from a line-management responsibility to a board-level imperative. When the code your development teams produce manages customer experience, operations, identity, payments, analytics, and AI workflows, secure design becomes a bet-the-company risk mitigation priority for senior leadership.

Developers develop. It's in our DNA. We have tools, now augmented by AI, that we can use as scanners and dashboards to identify and track problems. But our software tools, and even our flesh-and-blood human engineering teams, can't determine global priorities, allocate enterprise-wide engineering capacity, change incentives, resolve departmental ownership conflicts, or make risk prevention a key component of every department and division's core operating principles.

When a company produces a quarterly or annual report, one of the key metrics that investors, leaders, and regulators examine is debt; the more debt that weighs down the company, the more concerned stakeholders become.

But while debt on a balance sheet highlights the company's obligation to future payments, technical debt and security debt aren't as easy to measure. Even so, both reflect the organization's obligation for future maintenance and repair.

This requirement represents opportunity cost, reputation cost, customer satisfaction cost, and real dollar cost, sometimes well in excess of the numbers visible on the balance sheet. Feature scope, deadlines, staffing, outsourcing, platform decisions, and vendor selection all affect the level of security debt the enterprise creates.

Unlike balance-sheet debt, technical and security debt is often underrepresented to senior leadership. Sure, vulnerability metrics and ticket closure rates demonstrate some activity, but they only spotlight and then reward cleanup activity. Those measurements don't show whether critical flaws, repeat-defect categories, and risky defaults are declining or increasing.
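One way to make that distinction concrete is to track, for each reporting period, how many critical flaws were newly introduced per defect category, rather than how many tickets were closed. The sketch below assumes a simple findings record with `severity`, `category`, and `introduced_in` fields; the schema and numbers are illustrative, not any real scanner's output.

```python
from collections import Counter

def prevention_trend(findings_by_quarter):
    """For each quarter, count newly introduced critical flaws per
    defect category. Declining counts suggest prevention is working;
    flat or rising counts mean cleanup activity is masking the trend."""
    trend = {}
    for quarter, findings in findings_by_quarter.items():
        criticals = [f["category"] for f in findings
                     if f["severity"] == "critical"
                     and f["introduced_in"] == quarter]
        trend[quarter] = Counter(criticals)
    return trend

# Illustrative data: injection criticals fall from 2 to 1 across quarters.
findings = {
    "2026-Q1": [
        {"category": "injection", "severity": "critical", "introduced_in": "2026-Q1"},
        {"category": "injection", "severity": "critical", "introduced_in": "2026-Q1"},
    ],
    "2026-Q2": [
        {"category": "injection", "severity": "critical", "introduced_in": "2026-Q2"},
    ],
}
print(prevention_trend(findings))
```

A board report built on this kind of trend answers "are repeat-defect categories declining?" in a way that a ticket-closure rate never can.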

The Secure by Design initiative from CISA (the Cybersecurity and Infrastructure Security Agency) recommends that organizations:

  • Choose an executive to be chief security-by-design officer: Give one leader authority over customer security outcomes.
  • Empower the secure-by-design executive: Let leadership influence product investment and risk reduction.
  • Include secure-by-design details in financial reports: Treat customer security as a business performance issue.
  • Provide regular product-security reports to the board: Make customer risk visible at the governance level.
  • Create meaningful internal incentives: Reward teams for improving customer security outcomes.
  • Create a secure-by-design council: Coordinate prevention goals across business and technical teams.
  • Create and evolve customer councils: Use customer feedback to improve product security.

CISA's focus is on secure-by-design as it pertains to delivering products to customers. However, you'll need to take the approach further, making it an overall priority not only for products delivered to users but for all your internal operations.

Making application security part of the corporate culture

Corporate culture is an odd, amorphous thing. On the one hand, there are policy manuals and management directives. On the other hand, there's culture, which is shaped by spoken and unspoken signals reflected across all levels of the company.

We've talked about how to integrate application security into management directives. But it's equally (or even more) important to integrate application security into the corporate culture.

Security can't just be the team that says, "No." Security consciousness needs to become a shared practice, where product managers ask about abuse cases, architects define trust boundaries, developers use safer patterns, and security teams provide practical guidance.

For those of you who don't think corporate culture can change quickly, I have a war story that proves the opposite. Back in the day, I was a very young chief executive. My company's size had doubled in just four months. To make everything a bit more manageable, I decided to divide our groups into departments: sales, engineering, manufacturing, and more.

One week, we had a structure where just about everyone reported to me and chipped in to help with anything the company needed to get done; the next week, we had turf wars. People in one department refused to help out with another department's priorities. The same people who had, days earlier, happily worked side-by-side suddenly refused to help unless directed specifically by their management.

I found this shift completely shocking. I just wanted a way for the company to respond a bit faster and not be entirely dependent on my personal guidance for every decision and directive. What I got instead was a set of newly erected barriers to productivity and the birth of mini-fiefdoms. This change happened over one weekend. On Friday, we all worked together. On Monday, it was the "D" word (for department), and everyone's behavior changed.

While my example is a negative example of corporate culture changing rapidly, I'm sharing it as an object lesson of how organizations can change overnight. I was too green at the time to understand that it's important to be intentional about how you engineer culture changes, but you can learn from my mistake. Quick rest-of-the-story: I outlawed the "D" word, and everyone went back to working together. People, eh?

Anyway, if you want to make prevention a core part of your corporate culture, you'll need to address two potential trouble areas at the outset: developer friction and ownership.

As you integrate application security into your culture, also focus on communication quality. If problem reports land on developers as blame, vague descriptions, or unmanageable demands, made without regard to overall development load, coders will resist the change, dig in their heels, and elevate unhelpfulness to an art form. But if security findings reach developers as clear requirements, reusable components, fast feedback, and useful directions, developers will be far more inclined to support the pre-release optimization and mitigation process.
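What a "clear requirement" looks like in practice can be sketched as a structured finding rather than a free-form complaint. The field names below are illustrative assumptions for this sketch, not any particular issue tracker's schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SecurityFinding:
    """A problem report phrased as an actionable requirement, not blame.
    Every field answers a question a developer would otherwise have to ask."""
    title: str
    component: str
    severity: str
    reproduction: List[str]   # concrete steps, not a vague description
    required_fix: str         # a testable requirement
    safe_pattern: str         # the reusable component or pattern to reach for
    deadline_negotiated: bool = False  # acknowledges current development load

# Hypothetical example of a well-formed finding.
finding = SecurityFinding(
    title="Session token logged in plaintext",
    component="checkout-service",
    severity="high",
    reproduction=["POST /checkout with a valid session", "inspect app.log"],
    required_fix="Redact the Authorization header before logging",
    safe_pattern="shared logging middleware with built-in redaction",
)
print(finding.required_fix)
```

The point is not the code itself but the contract: a finding that names the component, the reproduction, the fix, and the safer pattern removes most of the friction the paragraph above describes.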

Decisions always need to be made. Developers need help determining business imperatives. For example, should developers prioritize a design issue or a new feature release? Pressure from sales and customer support can feel like a tug-of-war.

Solid pre-release quality management requires clear responsibility for design decisions, dependency choices, secret handling, build pipelines, deployment approvals, and vulnerability responses. There must be a management structure in place that protects developers and testers from conflicting messaging (and conflicting or fighting departmental managers).

Keep in mind that ambiguity and ownership conflicts can overwhelm the desire for quality and security. As my "D" word experience shows, culture should never create an environment where making it (whatever "it" might be) someone else's responsibility is considered acceptable.

Turning application security into an operating model

A radical oversimplification of the idea of a business model is that it's about how the company makes money. Likewise, a radical oversimplification of the idea of an operating model is that it's how the company does what it does to make money.

An operating model describes how a company delivers value to its customers and how it runs itself. Many companies and organizations have uncodified operating models. In short, they do stuff, and other stuff happens. But once activities are consciously turned into deliberate, repeatable, predictable, tunable systems, that's when an operating model becomes a true force multiplier.

Consulting firm McKinsey defines an operating model as "the backbone of any organization. It outlines how the company delivers value to its customers, operates on a day-to-day basis, and achieves its strategic objectives."

The firm says: "A robust operating model serves as a guiding framework for decision-making, resource allocation, innovation, and many other critical activities and practices in the business -- all in the service of improving efficiency and generating sustainable growth."

Since today's software development infrastructure underlies almost all other value creation in almost all organizations, an enterprise-wide operating model for software reliability and security makes total sense.

Once senior organizational leadership buys into the critical requirement to move preventative security and code development earlier in the lifecycle, the organization needs defined roles, decision points, workflows, incentives, metrics, and escalation paths that make the early-stage application security process part of normal organizational operations.

When defining an operating model for preventative security, answer questions like:

  • Who owns secure design decisions?
  • When does threat modeling happen?
  • Which features require a security review?
  • What secure templates or approved components should teams use?
  • Who can approve exceptions?
  • How are dependency risks handled?
  • What metrics show whether prevention is working?
  • How can the board or executive team measure progress?
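Answers to questions like these can be made operational by encoding them as machine-checkable gates. The sketch below is a hypothetical release gate with invented field names and rules; it shows how a feature that skipped threat modeling might be blocked before shipping.

```python
def release_gate(feature):
    """Check a feature against illustrative operating-model rules before
    release. The fields and rules are assumptions for this sketch,
    not a standard."""
    blockers = []
    if feature["handles_user_data"] and not feature["threat_model_done"]:
        blockers.append("threat modeling required before design sign-off")
    if feature["new_dependencies"] and not feature["deps_reviewed"]:
        blockers.append("dependency risk review required")
    if not feature["security_owner"]:
        blockers.append("no named owner for secure design decisions")
    return blockers

# Hypothetical feature: has an owner, no new dependencies,
# but handles user data without a threat model.
feature = {
    "handles_user_data": True,
    "threat_model_done": False,
    "new_dependencies": [],
    "deps_reviewed": False,
    "security_owner": "alice",
}
print(release_gate(feature))  # the missing threat model blocks release
```

A gate like this, run in the build pipeline, is one way the operating model's decision points stop being policy-manual text and start being normal operations.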

This is the systemization of security-by-design. By baking in an operating model practiced at all levels in all domains, early-lifecycle security and code reliability can be built into every stage: architecture, coding, release, and long-term maintenance.

Improve enterprise resilience

With all this discussion of early-stage security and reliability, I need to stress something really, really important: not everything will work. Don't assume that if you follow the best advice, you will no longer have late-stage security problems or vulnerability issues that need fixing in a panic.

Don't assume that all code will leave your network perfectly solid and bug-free. Do not assume bad guys won't be able to breach your network, or the networks of your customers, because you practice security-by-design.

Life is alive. The confounding factors in software development and in life are virtually infinite. Stuff, as they say, happens. That's why we're advocating these best practices.

With these practices, you can reduce the number of emergencies. You can reduce your overall security and technical debt (which may, by the by, reduce your actual debt). This approach can help you reduce preventable defects, quickly derive understanding from incidents, define safer defaults, use better engineering judgment, and build internal systems that actively reduce the possibility of making dangerous choices.

Taken together, these practices help to improve enterprise resilience. Resilience has a bunch of different definitions:

  • Webster's describes it as, "An ability to recover from or adjust easily to misfortune or change."
  • The United Nations Office for Disaster Risk Reduction defines resilience as, "The ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner."
  • McKinsey defines it as, "The ability to not only recover quickly from a crisis but to bounce back better -- and even thrive."
  • The US Department of State describes resilience as "The ability to bounce back from difficult experiences."

It's that bounce-backability that is at the core of my advice. By re-specing your organization to build in resilience in the form of code security and reliability early on and through the entire lifecycle, you can increase your ability to bounce back in the face of adversity, as well as reduce the number of times you have to face adversity of your own making.

What single change would most improve your organization's ability to bounce back from software security problems? Let us know in the comments below.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
