Human in the Loop means something different in 2025.
AI isn’t some far-off concept anymore – it’s part of standard operations, it’s everywhere, and it’s not going away. That’s the new reality we have to embrace on the blue team. And while speed in incident response has always mattered, what we need now is far more than speed alone.
We need orchestration: tactical, thoughtful, and layered. Orchestration isn’t just for the SOC Tier 1 teams anymore; it’s something we all need to embrace. Even the Tier 1 teams should re-examine their playbooks and integrations.
We know that attackers are already using AI. They're moving fast and getting quieter, evading security tooling more often. They're testing the edge of our controls with intelligent agents, establishing footholds with new malware and techniques, and moving laterally in ways that blur the line between noise and signal.
Some of these techniques are new; others are older ones now accessible to less experienced adversaries.
The IR playbook is evolving, and it's happening fast. So yes, automation is necessary, but the real question is how much, where, and when?
In 2025, “Human in the Loop” means more than what it meant in 2024. It is no longer a motto about resisting automation. It’s about being deliberate in how we integrate AI and automation into our human-driven processes.
What follows is a practical look at where agent-driven automation fits in Incident Response (IR) – and where it doesn’t. It’s not hype. This isn’t theory. It’s about building systems that let humans do what they do best: interpret nuance, challenge assumptions, and make judgment calls under pressure.
Autonomous systems have a place – but they work best when they’re backing up human insight, not replacing it. That’s how we build not just faster response, but resilient response.
This is about control, clarity, and context – because those are the things that keep real-world teams sharp when the clock is ticking.
Where Automation Works and Where It Shouldn’t
The rise of intelligent agents, whether scripted playbooks or AI-augmented responders, has changed the IR game. And let's be clear: AI agents are not just helpful, they're essential.
These systems shine in a few key areas:
- Fast, contextual triage: They cut through noise, process massive log volumes, correlate IOCs, and surface what actually matters, before an analyst even opens the console.
- Immediate containment: They can suspend accounts, isolate endpoints, or stop lateral movement in seconds. These aren't nice-to-haves – they're critical when you're trying to keep an incident from spiraling.
- Coordinated response workflows: From updating tickets to notifying stakeholders and preserving artifacts for forensic review, automation keeps the IR engine moving while the humans stay focused on higher-level decision-making.
But this isn’t just about efficiency. This is about survivability and extending the reach of your team in moments where attackers are moving faster than any human can click.
The data backs it up. Automating early-stage actions such as alert handling, evidence collection, and basic containment can dramatically reduce mean time to response. And every second counts.
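To make that concrete, here's a minimal sketch of what automated early-stage triage can look like: enrich an alert's indicators against threat intelligence and assign a severity before anyone opens a console. Everything in it (the Alert fields, the ti_lookup stub, the scoring thresholds) is illustrative, not tied to any particular SIEM or SOAR product.

```python
# Illustrative only: field names, the TI lookup, and thresholds are assumptions,
# not any specific vendor's API.
from dataclasses import dataclass, field


@dataclass
class Alert:
    alert_id: str
    source_ip: str
    indicators: list[str] = field(default_factory=list)
    enrichment: dict = field(default_factory=dict)
    severity: str = "unknown"


def ti_lookup(indicator: str) -> dict:
    """Placeholder threat-intel lookup; swap in your real TI platform client."""
    known_bad = {"evil.example.com": {"score": 95, "family": "ransomware-loader"}}
    return known_bad.get(indicator, {"score": 0})


def triage(alert: Alert) -> Alert:
    """Enrich every indicator and assign a severity before an analyst opens the console."""
    scores = []
    for ioc in alert.indicators:
        intel = ti_lookup(ioc)
        alert.enrichment[ioc] = intel
        scores.append(intel["score"])
    worst = max(scores, default=0)
    alert.severity = "high" if worst >= 80 else "medium" if worst >= 40 else "low"
    return alert


if __name__ == "__main__":
    noisy = Alert(alert_id="A-1042", source_ip="10.0.4.17",
                  indicators=["evil.example.com", "203.0.113.9"])
    print(triage(noisy).severity)  # surfaced for a human; containment stays gated
```

This kind of enrichment and scoring is safe to run unattended. The actions that come after it are where the harder questions start.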
Still, all this progress raises a necessary question:
Just because we can automate something, should we?
That’s where the conversation gets real. Because while automation buys you time, it doesn’t replace judgment. And there are places in the IR lifecycle where you still need a sharp analyst making a call, not a script executing one.
Lessons from the Field: The Complex Reality of Automation
Let’s start with a real one.
A transportation agency suffered an attack that began with a phishing email, led to credential compromise, and quickly escalated to ransomware deployment across their most critical systems (PLEASE require MFA on your VPN connections). The agency had a strong foundation, with well-tuned SIEM and SOAR platforms and automation playbooks that:
- Quarantined endpoints showing suspicious traffic
- Disabled compromised accounts
- Triggered forensic memory and disk captures within minutes
The first wave of the response was impressive. Automation worked fast; it cut off lateral movement, enriched alerts, and gave the SOC immediate visibility. But then reality hit.
Two major issues emerged:
- Collateral damage from false positives: One playbook isolated a system tied to their dispatch and tracking operations. That single misfire caused operational impact that rivaled the threat itself.
- Custom adversary behavior flew under the radar: The malware was memory-resident, bespoke, and didn't match known signatures. Automation didn't catch it. Human analysts had to step in to connect the dots, test hypotheses, and guide the investigation forward.
What separated a prolonged outage from a measured recovery wasn't more automation. It was human judgment: knowing when to override, when to escalate, and how to communicate risk to leadership clearly and quickly.
In other words, what saved the day was the distinctly human ability to leverage AI and automation to communicate clearly and make effective decisions.
Advancing IR: Strategies for Balanced Automation
Let’s break this down:
What worked:
Automation crushed the noisy, repeatable stuff: alert triage, basic containment, initial forensics. It gave analysts the room to think, prioritize, and drive real investigative depth.
What didn’t:
Playbooks failed to account for mission-critical systems. Custom threats evaded automated detection. And worst of all, automation executed some decisions too confidently, without context.
Best practice now:
Smart incident response means embracing Human & Computer Symbiosis. Let automation handle the predictable. Let humans steer the complex.
What that looks like in practice:
- Use agent-based automation for triage, enrichment, and known containment, but require human sign-off before executing anything that could impact operations.
- Leverage generative AI to support analysts, not replace them, especially in hypothesis testing and initial malware analysis.
- Add pause points into playbooks. If the decision affects uptime or people, slow down and get a human on it (a minimal sketch follows this list).
- Build clear override protocols so that when automation misfires, people can quickly take control, with full visibility and without red tape.
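Here's a hedged sketch of what a pause point and sign-off gate can look like. The playbook step, the isolate_endpoint action, and the approval queue are hypothetical stand-ins, not any specific SOAR product's API; the point is simply that operations-impacting steps queue for a human instead of firing automatically, and every approved action leaves an audit trail.

```python
# Sketch only: the action, queue, and sign-off flow are assumptions,
# not a specific SOAR product's API.
from enum import Enum


class Impact(Enum):
    LOW = 1          # e.g. enrich an alert, snapshot memory
    OPERATIONAL = 2  # e.g. isolate a host that may be running dispatch or tracking


PENDING_APPROVALS: list[dict] = []  # in practice, a ticket or chat-ops approval queue


def isolate_endpoint(host: str) -> None:
    print(f"[containment] isolating {host}")


def execute_step(action, host: str, impact: Impact, approved_by: str | None = None) -> None:
    """Run low-impact steps immediately; pause operations-impacting ones for a human."""
    if impact is Impact.OPERATIONAL and approved_by is None:
        PENDING_APPROVALS.append({"action": action.__name__, "host": host})
        print(f"[paused] {action.__name__}({host}) awaiting analyst sign-off")
        return
    action(host)
    if approved_by:
        print(f"[audit] {action.__name__}({host}) approved by {approved_by}")


# Automation proposes; a human approves anything that could touch uptime.
execute_step(isolate_endpoint, "dispatch-srv-01", Impact.OPERATIONAL)
execute_step(isolate_endpoint, "dispatch-srv-01", Impact.OPERATIONAL, approved_by="analyst.kim")
```

The design choice worth copying isn't the code, it's the default: anything that can affect uptime waits for a person, and the override path is visible and logged.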
Practical Imperatives for IR Leadership
To make human-agent collaboration real (and not just a buzzword), leaders need to lock in a few things:
- Define clear boundaries. What gets fully automated? What requires a checkpoint? Review these regularly; what worked last quarter might not hold this one. (One way to make those boundaries explicit is sketched after this list.)
- Focus on the repeatable. Start with workflows you can trust: alert triage, IOC correlation, initial evidence capture. Let your humans focus on what machines still can't do: contextual analysis, threat modeling, communicating risk.
- Mandate human validation on business-critical incidents. If a customer, regulator, or exec is going to hear about it, an analyst should sign off first, full stop.
- Continuously test and tune automation. This isn't a set-it-and-forget-it system. Regular scenario-based testing is critical to avoid the kind of self-inflicted damage we saw in the public sector example.
- Upskill your people. IR teams need to understand automation just like they understand threats. They need to question it, tweak it, and override it when necessary. That's the only way this works.
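One lightweight way to honor the first imperative is to keep your automation boundaries as a small, version-controlled policy table the team actually reviews each quarter, rather than as tribal knowledge. The categories and levels below are examples only, not a recommended standard.

```python
# Example boundaries only; the action names and levels are assumptions to illustrate the idea.
AUTOMATION_POLICY = {
    "alert_triage":           "full_auto",
    "ioc_enrichment":         "full_auto",
    "evidence_capture":       "full_auto",
    "disable_user_account":   "human_checkpoint",
    "isolate_endpoint":       "human_checkpoint",
    "block_business_service": "human_only",
}


def requires_human(action: str) -> bool:
    """Anything not explicitly classified defaults to human review."""
    return AUTOMATION_POLICY.get(action, "human_only") != "full_auto"


assert requires_human("isolate_endpoint")
assert not requires_human("alert_triage")
```

Because the boundary lives in code, it can be diffed, reviewed, and tested, which is exactly the kind of regular tuning the imperatives above call for.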
Final Thoughts: Building Enduring Resilience in an AI Era
Speed matters. But effectiveness wins.
Automation will help you scale. Agents will help you move faster. But judgment, adaptability, and communication are still what hold effective execution together. Those aren't optional. They're non-negotiable.
If you lead an IR function, here’s what I’d ask:
Where is automation helping you? Where is it hurting you? And who has the final say when the stakes are high?
Audit your response process. Define your guardrails. Make it clear where agentic systems take the lead, and be intentional about where humans step in to lead the agents.
Because the next breach might not give you a second chance.