Tradecraft is Decision Logic
January 25, 2026
Security teams often talk about “attacker tradecraft” like it’s a shopping list:
- They used Cobalt Strike.
- They exploited Log4Shell.
- They ran Mimikatz.
That framing isn’t just incomplete — it’s one of the main reasons organizations buy the right security tools and still get surprised.
Tools are the surface. Tradecraft is the decision logic underneath. It’s how a real adversary chooses actions, sequences them, and adapts when the environment pushes back.
For CISOs and security leaders, this matters because “we have coverage” is not the same as “we can stop an operator.” For B2B SaaS founders, it matters because your buyers don’t just want a penetration test PDF — they want evidence you can withstand the kind of intrusion that turns into downtime, extortion, or a customer-facing incident.
This Insight follows a simple flow: What (what tradecraft really is), So What (what breaks when you misunderstand it), and Now What (how to validate it continuously).
What: Tradecraft Is the Decision Loop, Not the Toolset
MITRE ATT&CK Helps — But It’s Not the End State
MITRE ATT&CK is an invaluable map of adversary behavior, but even MITRE explicitly warns against treating ATT&CK like a checklist or a “100% coverage” scoreboard. In their own “how not to use ATT&CK” guidance: don’t aim for perfect coverage, don’t declare victory because you detected a single technique, and don’t limit your thinking to only what’s in the matrix. [1]
Why? Because tradecraft lives in the space between techniques — in how attackers chain, branch, and adjust based on what they observe.
Tools Are Ephemeral. Constraints Are Enduring.
Attackers select tools the way a good engineer selects libraries: they’ll use what ships with the system if it’s good enough, and they’ll change approaches when something gets blocked.
That’s not speculation — it shows up in real-world data:
- CrowdStrike reports that 75% of attacks used to gain initial access were malware-free and that interactive intrusions increased 60% — humans driving operations instead of “fire-and-forget” payloads. [2]
- Bitdefender found 84% of high-severity attacks involved “living-off-the-land” techniques (abusing legitimate system tools), with tools like netsh appearing in roughly one-third of the major incidents they analyzed. [3]
- Sophos observed a 51% increase in the use of trusted applications for “living off the land,” and noted that RDP was abused in 89% of the cases they studied in 2024 — not as a “hacker tool,” but as a legitimate admin protocol turned into an access and control channel. [4]
This is the shift most security programs still underweight: modern intrusions often look less like “malware detonated” and more like “someone with access is doing work.”
Tradecraft Is OODA at Adversary Tempo
At its core, tradecraft is a decision loop. A useful mental model is Boyd’s OODA loop: Observe → Orient → Decide → Act. [5] Skilled adversaries run that loop continuously:
- Observe: Where am I? What telemetry is present? What identity context do I have?
- Orient: What’s safe here? What’s loud? What’s reliable?
- Decide: What’s the next move that advances the objective with acceptable risk?
- Act: Execute, validate result, and repeat.
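The loop above can be sketched as a decision function. This is an illustrative model, not a real attacker framework: every name (`Observation`, `choose_action`, the branch labels) is hypothetical, chosen only to show that the next action is a function of what the operator observes.

```python
# Hypothetical sketch: the OODA loop as a decision function, not a tool list.
# All names and branches are illustrative, not drawn from any real framework.
from dataclasses import dataclass

@dataclass
class Observation:
    edr_present: bool   # telemetry the operator can see or infer
    has_admin: bool     # current identity context
    egress_open: bool   # is an outbound channel available?

def choose_action(obs: Observation) -> str:
    """Orient + Decide: pick the next move with acceptable risk."""
    if obs.edr_present and not obs.has_admin:
        return "low_signal_discovery"   # stay quiet, learn more first
    if obs.has_admin and obs.egress_open:
        return "stage_data"             # objective is in reach
    if obs.has_admin:
        return "find_egress_path"       # privilege without a channel
    return "credential_access"          # escalate before moving

# One turn of the loop: observe, decide, act, then re-observe.
obs = Observation(edr_present=True, has_admin=False, egress_open=True)
print(choose_action(obs))  # low_signal_discovery
```

The point of the sketch is the shape, not the branches: change what the environment lets the operator observe, and the same logic produces a different action.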
This is why “tradecraft” isn’t a tool. It’s the logic that determines:
- Do I use an exploit, or do I buy/steal credentials?
- Do I move laterally now, or build persistence first?
- Do I run a noisy discovery command, or infer the environment from low-signal observations?
- Do I touch the domain controller, or pivot through cloud identity and trust paths?
And the tempo is fast. CrowdStrike reports an average breakout time of 62 minutes and a fastest observed breakout time of 2 minutes 7 seconds — meaning the transition from initial access to lateral movement can be measured in minutes. [6] A “we’ll see it eventually” mindset isn’t a control.
Decision Points Matter More Than “Techniques”
If you want to understand tradecraft, look for decision points — moments where an attacker chooses between branches.
Examples of high-leverage decision points (defensively) include:
- Identity friction: conditional access, MFA enforcement, token/session protections, device trust.
- Privilege boundaries: local admin rights, role assignments, delegation, service principals.
- Egress constraints: outbound controls, proxy enforcement, DNS/HTTPS monitoring, cloud egress policies.
- Lateral movement chokepoints: segmentation, remote admin protocol restrictions, tiering models.
- Data access controls: least privilege on storage, secrets management, logging on sensitive reads.
Attackers “solve” these decision points with methodology. Defenders win when they force the attacker into bad options: noisier moves, slower paths, higher cost, or dead ends.
That leads to the core idea behind high-fidelity validation: you don’t prove security by “detecting a technique.” You prove it by disrupting the attacker’s decision loop.
So What: Most Validation Programs Test Scripts, Not Adversaries
The industry has no shortage of security assessment models. The problem is that many of them validate the wrong unit of reality.
The Checkbox Fallacy: Presence vs. Efficacy
It’s easy to validate that a control exists. It’s harder to validate that it works under adversary pressure.
Verizon’s 2024 DBIR continues to show how often breaches hinge on people and identity: the report notes that 68% of breaches involved a non-malicious human element (errors, social engineering, misuse). [7] That’s not a “security awareness training” problem — it’s an identity and workflow reality problem. Attackers choose the path that works.
When tradecraft is decision logic, the real question becomes:
Do our controls force the attacker into a losing branch?
If your validation program can’t answer that, it’s producing artifacts, not assurance.
Technique Simulation Can Produce a Dangerous Kind of Confidence
Technique-level simulation is useful, but it’s easy to overinterpret:
- You run a known test.
- The EDR generates an alert.
- The dashboard turns green.
Then a real operator comes in with different sequencing, different timing, and different tradeoffs. They avoid the test’s exact signature, and they still accomplish the objective — because you validated a script, not the decision logic.
This is why MITRE warns you not to treat ATT&CK like a coverage bingo card. [1] The “green” state should mean disruption of methodology, not “we have one alert for that.”
The Business Impact: Drift Turns Assurance Into Fiction
Even if you validated last month, the environment has already moved:
- a new SaaS app gets OAuth permissions,
- a new cloud role is added for “just this one project,”
- a segmentation rule changes to unblock an incident,
- a detection pipeline breaks quietly after an update.
The validation gap opens when decision points shift faster than you re-test them.
And the cost of getting surprised is not theoretical. IBM’s 2024 Cost of a Data Breach report puts the average cost at $4.88M, and notes that the majority of organizations experience significant business disruption from breaches. [8]
For a SaaS business, the real loss often isn’t just incident spend — it’s executive time, engineering diversion, churn risk, and enterprise pipeline delays caused by trust erosion.
Now What: Validate Decision Logic Continuously (Evidence, Not Belief)
Closing this gap doesn’t require you to “do everything” or to build an internal red team overnight. It requires you to change what you consider “validated.”
1) Define Your Crown Jewels and the Likely Paths to Them
Tradecraft-driven validation starts from impact, not enumeration.
Write down:
- your crown jewels (data, systems, workflows that define business survival),
- the access paths that realistically lead there (identity, endpoints, cloud control plane, third parties),
- the adversary types that matter (extortion crews, insider risk, targeted theft).
This forces your scenarios to reflect your business, not generic technique lists.
2) Convert “Tradecraft” Into a Small Set of Methodology Scenarios
Instead of testing 200 atomic techniques, test 5–10 methodology scenarios that contain the branching logic attackers actually use.
Examples (conceptual, not tool-specific):
- Identity-led intrusion: stolen credentials → session persistence → privilege escalation via mis-scoped roles → data access attempt.
- Endpoint foothold: initial execution → low-signal discovery → credential access attempt → lateral move decision under EDR pressure.
- Hybrid pivot: cloud access → trust boundary traversal → on-prem reachability and privilege decisioning.
The goal is not “realism theater.” The goal is behavioral sufficiency: enough interaction and feedback to force the same choices a real operator would make.
A scenario doesn’t need to be photorealistic — it needs to be believable enough that the decision logic responds the way it would in a live intrusion.
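A methodology scenario like the identity-led intrusion above can be written down as an ordered set of decision points, each paired with the control expected to add friction there. All step and control names in this sketch are hypothetical; the useful output is *where* progression stopped, not a pass/fail.

```python
# Sketch: a methodology scenario as an ordered list of decision points.
# Each step names the control expected to add friction; names are illustrative.
scenario = [
    ("stolen_credentials",   "mfa_enforcement"),
    ("session_persistence",  "token_protection"),
    ("privilege_escalation", "role_scoping"),
    ("data_access",          "least_privilege_storage"),
]

def run_scenario(scenario, controls):
    """Return the first step where a control stopped progression, else None."""
    for step, control in scenario:
        if controls.get(control, False):
            return step   # attacker forced into a losing branch here
    return None           # objective reached: validation failed

controls = {"mfa_enforcement": False, "role_scoping": True}
print(run_scenario(scenario, controls))  # privilege_escalation
```

A `None` result is the interesting one: it means every decision point resolved in the attacker's favor, which is evidence, not opinion.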
3) Instrument the Decision Points (Not Just the Endpoints)
If tradecraft is decision logic, your telemetry and controls need to illuminate decision points:
- identity events that show anomalous session behavior,
- privilege boundary crossings,
- remote admin protocol use,
- egress attempts and DNS/HTTPS anomalies,
- sensitive data reads and mass access patterns.
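Instrumenting a decision point can be as simple as counting boundary-relevant events per identity. The toy telemetry and threshold below are made up; the sketch shows the shape of a "mass access" detection over sensitive reads, one of the signals listed above.

```python
from collections import Counter

# Toy telemetry stream: (timestamp, identity, event_type). All values illustrative.
events = [
    (100, "svc-backup", "sensitive_read"),
    (101, "svc-backup", "sensitive_read"),
    (102, "svc-backup", "sensitive_read"),
    (150, "alice",      "rdp_session"),
]

def mass_read_alerts(events, threshold=3):
    """Flag identities whose sensitive-read count reaches the threshold."""
    counts = Counter(identity for _, identity, etype in events
                     if etype == "sensitive_read")
    return [identity for identity, n in counts.items() if n >= threshold]

print(mass_read_alerts(events))  # ['svc-backup']
```

Real pipelines add time windows, baselines, and identity context, but the unit of analysis is the same: a sequence crossing a boundary, not a single event.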
This is where threat hunting and validation converge: both are about understanding sequences, context, and branching.
4) Measure the Right Outcomes
Replace “we got an alert” with evidence-grade outcomes:
- Exploitability: could the attacker reliably progress to the next step?
- Time-to-detect: how long until you notice and understand the chain?
- Time-to-disrupt: how long until the attacker’s objective is denied?
- Blast radius: if they land here, what can they reach next?
These metrics turn security into something executives can reason about: not “coverage,” but “risk trajectory.”
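Two of these metrics fall straight out of an intrusion timeline. The timestamps below are invented for illustration; the comparison against CrowdStrike's reported 62-minute average breakout time [6] shows why time-to-detect only means something relative to adversary tempo.

```python
from datetime import datetime, timedelta

# Illustrative intrusion timeline (timestamps are made up).
t_access  = datetime(2026, 1, 10, 9, 0)    # initial access
t_detect  = datetime(2026, 1, 10, 10, 15)  # chain noticed and understood
t_disrupt = datetime(2026, 1, 10, 11, 5)   # objective denied

time_to_detect  = t_detect - t_access
time_to_disrupt = t_disrupt - t_access

# CrowdStrike's reported average breakout time, for comparison. [6]
breakout = timedelta(minutes=62)

print(time_to_detect > breakout)  # True: lateral movement likely preceded detection
```

In this example detection took 75 minutes, so by the reported average the operator had already broken out before anyone was looking, which is the risk-trajectory framing executives can actually act on.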
5) Run It Continuously and Re-Test on Change
Decision logic changes when your environment changes.
Continuous validation means:
- a rolling schedule aligned to risk,
- regression tests after meaningful changes (identity, network, cloud roles, critical services),
- tight rules of engagement so testing is controlled and authorized,
- outputs packaged for both engineers (actionable fixes) and executives (evidence of control efficacy).
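The regression-on-change idea can be sketched as a simple mapping from change categories to the methodology scenarios they invalidate. The change categories and scenario names below are hypothetical, echoing the example scenarios earlier in this piece.

```python
# Sketch: mapping environment changes to the scenarios that must be re-run.
# Change categories and scenario names are illustrative.
retest_map = {
    "identity":   ["identity_led_intrusion"],
    "network":    ["endpoint_foothold", "hybrid_pivot"],
    "cloud_role": ["identity_led_intrusion", "hybrid_pivot"],
}

def scenarios_to_rerun(changes):
    """Given a batch of changes, return the deduplicated regression set in order."""
    due = []
    for change in changes:
        for scenario in retest_map.get(change, []):
            if scenario not in due:
                due.append(scenario)
    return due

print(scenarios_to_rerun(["cloud_role", "network"]))
# ['identity_led_intrusion', 'hybrid_pivot', 'endpoint_foothold']
```

The value is in the discipline, not the code: every meaningful change has a named owner in the validation schedule, so drift can't silently open a gap.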
This is the operating philosophy behind Root Access Protection: treat adversary methodology as the unit of reality, generate evidence instead of belief, and run the loop at the same tempo attackers do — with human oversight and strict authorization.
Sources
1. MITRE ATT&CK, “How should I not use ATT&CK?” https://attack.mitre.org/resources/
2. CrowdStrike, 2024 Global Threat Report Highlights (blog). https://www.crowdstrike.com/en-us/blog/crowdstrike-2024-global-threat-report-highlights/
3. Bitdefender, “The Most Prevalent Living-Off-The-Land Tools Used in Major Cyberattacks.” https://www.bitdefender.com/en-us/blog/businessinsights/the-most-prevalent-living-off-the-land-tools-used-in-major-cyberattacks/
4. Sophos, Active Adversary Report 2025 (covering 2024 observations). https://news.sophos.com/en-us/2025/09/04/sophos-active-adversary-report-2025/
5. Wikipedia, “OODA loop.” https://en.wikipedia.org/wiki/OODA_loop
6. CrowdStrike, 2024 Global Threat Report (press release). https://www.crowdstrike.com/en-us/press-releases/crowdstrike-2024-global-threat-report/
7. Verizon, 2024 Data Breach Investigations Report (DBIR). https://www.verizon.com/business/resources/reports/dbir/
8. IBM, Cost of a Data Breach Report 2024. https://www.ibm.com/think/insights/cost-of-a-data-breach
The OODA Loop
Adversary operations are a constant OODA loop (Observe, Orient, Decide, Act). When a script kiddie runs a scanner, they are not deciding; they are just acting.
A sophisticated adversary lands on a host and asks:
- "Where am I?"
- "What is the defensive posture here?"
- "If I run net user, will I trigger an alert?"
- "Is it safer to live off the land, or to bring my own toolkit?"
Validating Decision Logic
To truly validate defenses, you cannot just replay a static script. You must emulate this decision logic. You must test if your defenses can confuse, delay, or disrupt the adversary's OODA loop.
- Deception: Can you make them orient on a false target?
- Friction: Can you force them to make noisy decisions?
- Detection: Can you catch them while they are still observing?
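Deception, in particular, works by corrupting the Observe step: the operator's decision function is unchanged, but a decoy changes what they believe about the environment. The target names and values below are invented purely to illustrate that dynamic.

```python
# Hypothetical sketch: deception poisons the Observe step of the OODA loop.
# The operator's decision logic is unchanged; only their beliefs shift.

def decide(targets):
    """Pick the highest-value target the operator believes is real."""
    return max(targets, key=lambda t: t["value"])

real  = {"name": "prod-db",        "value": 8}
decoy = {"name": "finance-backup", "value": 9, "decoy": True}  # honeypot

chosen = decide([real, decoy])
print(chosen["name"])              # finance-backup: oriented on a false target
print(chosen.get("decoy", False))  # True: any touch here is a high-fidelity alert
```

The decoy wins precisely because the adversary is running rational decision logic, which is the same property continuous validation should be testing.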
This is why we focus on observed methodology—the study of how attackers think and decide, not just the tools they used yesterday.