ROOT ACCESS PROTECTION

The Validation Gap

January 25, 2026

Security leaders live with a question that sounds simple but is operationally brutal:

If we were targeted this week, what would actually work?

For a CISO or VP of Security, the question shows up in board meetings, regulatory discussions, and post-incident retrospectives.

For a B2B SaaS founder, it shows up in every enterprise security questionnaire and every “we need a pen test before we sign” procurement delay.

Most organizations answer the question with belief: a mix of tool coverage maps, audit artifacts, and last quarter’s penetration test report. But attackers don’t interact with your beliefs — they interact with your current environment. And the gap between “we think we’re secure” and “we can prove what would happen” keeps widening.

This is the validation gap: the distance between what you assume your controls do and what you can demonstrate they do, against the way real adversaries operate, on the systems you run today.

What: The Validation Gap Is a Timing Problem and a Fidelity Problem

Most security programs treat validation like an event. The attacker treats your environment like a stream.

1) Timing: Your Environment Changes Faster Than Your Validation Cycle

Even disciplined teams live with continuous drift:

  • A new SaaS integration creates a new OAuth trust path.
  • A “temporary” firewall rule becomes permanent.
  • A Terraform change quietly widens an IAM policy.
  • An EDR update changes detections (or breaks them).
  • A new engineer ships a service with a debugging endpoint exposed.

Meanwhile, vulnerability exploitation is not hypothetical. Verizon’s 2024 Data Breach Investigations Report (DBIR) notes that exploitation of vulnerabilities as an initial access vector rose 180% and accounted for 14% of breaches, alongside continued dominance of human and credential paths. [1] Verizon also highlights a harsh operational reality: organizations average 55 days to remediate 50% of critical vulnerabilities after patches are available. [2] And even when you focus only on vulnerabilities known to be exploited in the wild, the scope keeps expanding — CISA’s Known Exploited Vulnerabilities (KEV) catalog grew roughly 20% in 2025 to 1,484 entries. [6]

That delta — attackers exploiting what’s newly exposed while defenders validate quarterly or annually — is the first half of the validation gap.

2) Fidelity: “Technique Coverage” Is Not “Stopping What Attackers Do”

The second half is fidelity. Many validation approaches focus on techniques (“did we detect PowerShell?”) rather than methodology (“did we stop the chain of decisions that got an operator from initial access to impact?”).

In the real world, attackers don’t execute a fixed playbook. They run a decision loop:

  1. Observe what’s in front of them (identity, endpoint telemetry, network controls, cloud posture).
  2. Orient to what’s safe and what’s loud.
  3. Decide the next move that balances speed, stealth, and reliability.
  4. Act, then repeat.

This is why “we simulated T1059 and got an alert” is not the same as “we can stop an operator.” A capable adversary doesn’t just run a command; they branch when it fails, slow down when detections appear, switch to an alternate credential path, and exploit trust relationships when hard exploitation is blocked.
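The branching behavior above can be made concrete as a toy decision loop. Everything here is a hypothetical illustration — the move names, signals, and scoring weights are invented for the sketch, not drawn from any real operator tooling:

```python
# A minimal sketch of the observe/orient/decide/act loop described above.
# All names, signals, and scoring weights are hypothetical illustrations.

def choose_next_move(observations, candidates):
    """Decide: pick the move that balances speed, stealth, and reliability."""
    def score(move):
        # Orient: a technique that is already being detected is "loud".
        noise_penalty = 2.0 if move["technique"] in observations["detections"] else 0.0
        return move["speed"] + move["reliability"] - move["noise"] - noise_penalty
    return max(candidates, key=score)

def decision_loop(observe, act, moves, max_steps=10):
    """Run the loop until an action succeeds or the options run out.

    observe() returns the current view of the environment; act(move)
    returns True when the move achieves the operator's objective.
    """
    path = []
    for _ in range(max_steps):
        observations = observe()                            # 1. observe
        candidates = [m for m in moves
                      if m["precondition"](observations)]   # 2. orient
        if not candidates:
            break
        move = choose_next_move(observations, candidates)   # 3. decide
        path.append(move["name"])
        if act(move):                                       # 4. act, then repeat
            break
    return path
```

The point of the sketch: a single alert on one technique never tests this loop, because the operator simply scores that technique as "loud" and picks a quieter branch.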

The validation gap widens when we confuse:

  • Control presence (MFA is enabled) with control efficacy (MFA actually prevents account takeover in your workflows).
  • Telemetry (events are logged) with detection (someone will notice in time).
  • Alerting (the EDR fires) with outcome (the attack chain is disrupted before impact).

So the “gap” isn’t just time. It’s epistemic: we measure what’s easy to count instead of what matters to survive.

Anti-Confusion: What Continuous Validation Is Not

If you’re a buyer trying to orient, a useful litmus test is what continuous validation refuses to pretend it is:

  • Not a vulnerability scanner: scanners enumerate issues; validation proves exploitability and impact paths.
  • Not a one-time penetration test: episodic engagements find value, but they do not keep pace with drift.
  • Not “MITRE coverage” as an end state: technique mapping helps organize work; it is not evidence of disruption.
  • Not a honeypot strategy: deception can add friction, but forcing attacker beliefs and decisions matters more than cinematic “realism.”

So What: False Confidence Is the Most Expensive Risk You Can Buy

The validation gap matters because it creates a specific failure mode: organizations invest heavily, pass audits, and still get surprised.

For CISOs: The Board Buys Assurance, But You’re Often Delivering Artifacts

Security leadership is judged on outcomes — business continuity, avoided incidents, reduced risk — but is often forced to communicate using proxies: tool counts, compliance checklists, maturity models, and “coverage” charts.

The problem is that proxies decay quickly. An annual pentest is a photograph; a modern environment is a video.

And the economic downside is non-trivial. IBM’s 2024 Cost of a Data Breach report puts the average cost of a breach at $4.88M, with 70% of organizations reporting significant or very significant disruption from the incident. [3]

Even if you run a strong program, the validation gap converts uncertainty into executive risk: the difference between “we’re probably fine” and “we have evidence our controls stop real attack paths” becomes the difference between a defensible narrative and a painful post-mortem.

For B2B SaaS Founders: “Security” Becomes a Revenue and Survival Constraint

Enterprise buyers increasingly use security posture as a gating function. The uncomfortable truth is that many early-stage companies try to satisfy that gate with a bundle of point-in-time proofs:

  • a SOC 2 report,
  • a penetration test PDF,
  • a vendor questionnaire packet,
  • a list of security tools.

Those artifacts may close the deal, but they don’t necessarily reduce the probability of an outage, extortion event, or major incident.

Verizon reports that ransomware/extortion remains present in 32% of breaches, and the median loss associated with ransomware in the DBIR dataset was $46,000 (not counting downstream costs like downtime, churn, and engineering diversion). [1]

For a SaaS business, the most painful part often isn’t the ransom. It’s the second-order effects: missed product deadlines, paused launches, lost pipeline, and customers asking (fairly) whether you can be trusted with their data.

The Real Punchline: Attackers Operate on “Mission Tempo”

Attackers don’t wait for your next audit cycle.

Mandiant’s M-Trends 2024 report cites a global median dwell time of 10 days: the time an attacker remains in an environment before detection. [4] In other words, many organizations have a window measured in days, not quarters, to notice and disrupt a motivated intrusion.

Even in controlled environments, the tempo is stark. Mandiant notes that its red teams often achieve an objective in five to seven days. [5] Your defensive posture has to be something you can rehearse and validate at that pace — not something you reassess annually.

This is where “compliance is not security” stops being a slogan and becomes a math problem.

Now What: Continuous Validation That Produces Evidence (Not Theater)

Closing the validation gap does not mean “run more scanners” or “buy another dashboard.” It means building a validation loop that continuously answers the only question that matters: what would work, against us, right now?

Here’s a practical, technical way to do that without turning security into chaos.

1) Start With Crown Jewels and Likely Access Paths

Most teams start with vulnerabilities because they’re enumerable. Adversaries start with access because it’s actionable.

Define:

  • Your crown jewels (data, systems, and functions whose compromise creates existential impact).
  • The access paths that realistically lead there (identity, endpoints, cloud control plane, third parties).
  • The adversary types that matter for your business (crimeware extortion, competitor IP theft, nation-state targeting).

If you can’t articulate crown jewels and access paths, your validation program will devolve into noise: you’ll validate a hundred low-impact controls and still fail the one chain that matters.

2) Validate Methodology, Not Just Techniques

Technique-level simulation can be useful, but it is insufficient as a primary assurance model.

What you want are methodology-driven scenarios: repeatable “attack flows” that include the branching decisions a real operator makes. Examples (without prescribing a specific toolset):

  • Identity-first intrusion: token theft, session hijack, conditional access bypass attempts, and privilege escalation through mis-scoped roles.
  • Endpoint foothold: LOLBins, credential access attempts, lateral movement through trust relationships, and egress validation.
  • Cloud-to-on-prem pivot: exploiting identity federation and hybrid trust.

The key is not to make them “realistic” in a cinematic sense. The key is to make them behaviorally sufficient: enough interaction and feedback to force the same attacker decisions you see in the wild.

Attackers don’t need realism — they need belief.

That’s how you validate the thing that actually beats defenders: decision logic.
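One way to encode a methodology-driven scenario is as a branching flow rather than a flat technique list. The step names and branches below are hypothetical examples, not a real attack catalog:

```python
# Illustrative sketch: a scenario as a branching flow. When a step is
# blocked, the flow falls back the way an operator would, instead of ending.
# All step names are hypothetical.

SCENARIO = {
    "initial_access":    {"on_success": "credential_access", "on_blocked": None},
    "credential_access": {"on_success": "lateral_movement",  "on_blocked": "token_theft"},
    "token_theft":       {"on_success": "lateral_movement",  "on_blocked": None},
    "lateral_movement":  {"on_success": "impact",            "on_blocked": None},
    "impact":            {"on_success": None,                "on_blocked": None},
}

def run_scenario(scenario, attempt, start="initial_access"):
    """Walk the flow, branching when a step is blocked.

    attempt(step) returns True if the step succeeds against the environment.
    Returns the trace of (step, outcome) pairs for evidence.
    """
    trace, step = [], start
    while step is not None:
        outcome = "success" if attempt(step) else "blocked"
        trace.append((step, outcome))
        step = scenario[step][f"on_{outcome}"]
    return trace
```

The trace is the useful artifact: it shows not just that a control fired, but whether the operator still had a path around it.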

3) Measure Outcomes: Exploitability, Blast Radius, and Time

If you want evidence, define success criteria that map to business risk:

  • Exploitability: can an attacker reliably achieve the next step, or is it blocked?
  • Blast radius: if they land here, what can they reach next?
  • Time: how long until you detect, understand, and disrupt the chain?

This is where many programs fail: they measure alerts, not disruption. Evidence looks like:

  • a blocked privilege escalation with proof of the control that stopped it,
  • a detected lateral movement attempt that triggers a response playbook,
  • an exfil path that is technically possible but operationally observable and containable.

Your executive narrative becomes stronger when you can say, “This path fails here, consistently, and we retest it after every material change.”
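A sketch of what an outcome-focused record could look like. The field names here are assumptions about what such evidence might capture, not a real schema:

```python
# Hypothetical sketch of an outcome-focused validation record: each step
# records exploitability, blast radius, and time-to-detection.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StepResult:
    step: str
    exploitable: bool                  # could the attacker take this step?
    reachable_next: List[str]          # blast radius: what this step exposes
    detected_after_s: Optional[float]  # time to detection; None = never seen

@dataclass
class ValidationRun:
    scenario: str
    steps: List[StepResult] = field(default_factory=list)

    def disrupted_before(self, impact_step: str) -> bool:
        """The evidence that matters: does the chain stop before impact?"""
        for result in self.steps:
            if result.step == impact_step:
                return False           # the operator reached impact
            if not result.exploitable:
                return True            # a control broke the chain first
        return True                    # the chain never reached impact
```

Note what this measures: not alert counts, but whether the chain was disrupted, where, and how fast it was seen.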

4) Run Continuously, Triggered by Change

A validation program that runs “every quarter” is still episodic.

A better model:

  • Scheduled validation (rolling scenarios weekly/monthly based on risk).
  • Change-triggered validation (run relevant scenarios when identity policies, network boundaries, or critical services change).
  • Regression validation (re-run the same scenarios after patches, agent updates, configuration changes, or incident learnings).

This is how you close the timing gap. You stop treating security assurance like a report, and you treat it like a system you can test.
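The change-triggered model above amounts to a mapping from change events to scenarios worth re-running. A minimal sketch, with hypothetical change types and scenario names:

```python
# Illustrative sketch: map change events to the scenarios to re-run.
# Change types and scenario names are hypothetical examples.

TRIGGERS = {
    "identity_policy":  ["identity_first_intrusion"],
    "network_boundary": ["endpoint_foothold", "cloud_to_onprem_pivot"],
    "edr_update":       ["endpoint_foothold"],
    "patch":            "regression",   # re-run whatever ran before
}

def scenarios_for_change(change_type, history):
    """Return the scenarios to queue for a given change event.

    history is the set of scenarios previously run, used for
    regression re-runs after patches and updates.
    """
    rule = TRIGGERS.get(change_type)
    if rule == "regression":
        return sorted(history)
    return rule or []
```

Wiring something like this into a CI pipeline or change-management hook is what turns "quarterly validation" into change-triggered validation.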

5) Package the Evidence for Two Audiences: Operators and Executives

The same validation run should generate two outputs:

  1. Operator-grade artifacts: command sequences, telemetry references, control breakpoints, and remediation steps engineers can act on.
  2. Executive-grade evidence: what changed, what risk moved, what was validated, and what remains uncertain.

This is where continuous validation becomes a force multiplier:

  • CISOs get board-ready narratives grounded in evidence.
  • Founders get buyer-friendly proof that is harder to dismiss than tool logos and PDFs.
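The two-audience split can be sketched as two views over the same run record. The record shape and field names below are hypothetical assumptions, not a real report format:

```python
# Sketch: one validation run, two views over the same evidence.
# The record shape and field names are hypothetical.

def operator_view(run):
    """Engineer-facing: per-step commands, control breakpoints, and fixes."""
    return [
        {"step": s["step"],
         "commands": s["commands"],
         "control_breakpoint": s.get("blocked_by"),
         "remediation": s.get("remediation")}
        for s in run["steps"]
    ]

def executive_view(run):
    """Board-facing: what was validated, and what remains uncertain."""
    validated = [s["step"] for s in run["steps"] if s.get("blocked_by")]
    unresolved = [s["step"] for s in run["steps"] if not s.get("blocked_by")]
    return {
        "scenario": run["scenario"],
        "validated_controls": validated,   # paths a control demonstrably stopped
        "unresolved_paths": unresolved,    # paths that still need work
    }
```

Because both views derive from one record, the board narrative and the engineering backlog can never drift apart.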

6) Make It Safe: Authorization, Controls, and “Prove Impact Without Causing It”

Continuous validation must be authorized and controlled. It needs rules of engagement, clear stop conditions, and an approach that demonstrates effects without inflicting damage (e.g., ransomware simulation without encryption, exfil simulation without data loss).

This is not red-team theater. It’s operational discipline applied to defensive assurance — where automation handles repetition, and humans keep decision authority.

And it’s the logic behind how Root Access Protection approaches the problem: continuously validate defenses against observed attacker methodology, with a methodology-first model shaped by real-world offensive experience.

Neal Bridges, the founder of Root Access Protection, has spent his career in environments where “trust” is not a control — validation is. His public bios describe experience as a former NSA hacker and work building offensive training programs and leading security programs across government and industry. [7][8][9] That background shows up in the product philosophy: evidence over belief, methodology over checklists, and a loop that runs at attacker tempo.

The Bottom Line

If you want to know whether you’re safe, stop asking “are we compliant?” and stop settling for “did we get alerts?”

Start asking — and continuously answering — three questions:

  1. What would actually work against us right now?
  2. Would we know in time (days, not quarters)?
  3. Could we disrupt the chain before impact, repeatedly, after every meaningful change?

Close the validation gap, and “security” stops being a collection of purchases and becomes something you can prove.

Sources

  1. Verizon, 2024 Data Breach Investigations Report (DBIR) — Report Page
  2. Verizon APAC, 2024 Data Breach Investigations Report Released (GlobeNewswire)
  3. IBM, Cost of a Data Breach Report 2024
  4. Mandiant, M-Trends 2024
  5. Mandiant, Security Validation: The Key to Reducing Breach Risk
  6. heise online, CISA vulnerability catalog: 20 percent more exploited vulnerabilities in 2025
  7. Purple Hats, Neal Bridges — Speaker Bio
  8. Query.ai, Neal Bridges joins Query.ai CISO Advisory Board
  9. Cyber Insecurity, About — Neal Bridges