AI has quietly (and sometimes not-so-quietly) embedded itself across enterprise infrastructure. It now runs automations, drives decision engines, powers security products, and increasingly acts as an autonomous agent inside corporate systems. That same expansion has blown open the attack surface: AI models, agent frameworks, data pipelines, and LLM-powered applications now operate at speeds—and with complexities—that many teams barely understand.
Hack The Box (HTB) believes the cybersecurity world isn’t training for that reality. Today, the company launched HTB AI Range, which it calls the world’s first live-fire training ground where humans and AI agents are tested, stressed, and evaluated side by side in realistic hybrid scenarios. Think of it as the cyber equivalent of a joint human–machine combat range—one built for defenders scrambling to keep up with adversaries already deploying automated attack chains at scale.
CEO and Founder Haris Pylarinos frames it bluntly:
“AI is now part of the cyber battle… With HTB AI Range, we’re not reacting to AI’s rise in cyber; we’re defining how defense evolves alongside it.”
The concept feels overdue. For two years, HTB has been experimenting with AI-driven learning paths, labs, and research environments. But HTB AI Range formalizes the next stage: continuous evaluation of AI models under real pressure and the co-evolution of human and AI defense tactics.
The New Battlefield: AI Isn’t Coming to Cybersecurity—It’s Already Here
HTB’s move comes against the backdrop of a rapidly changing threat landscape. AI-enabled attackers already blast out thousands of automated requests per second targeting major tech, financial, manufacturing, and government institutions. They reverse engineer APIs, chain vulnerabilities, and run reconnaissance faster than any human red team.
And while defenders have AI tools, most organizations still rely on human analysts to stitch together meaning from alerts, dashboards, and logs produced at machine tempo.
HTB’s view is: that gap is unsustainable.
The AI vs. Human CTF the company ran in April offers a glimpse of the future. Autonomous AI teams solved 19 of 20 easy-tier challenges—a 95% success rate, matching the performance of over 400 human red teams in basic tasks. Once complexity increased, AI stumbled and humans surged ahead, a pattern consistent with current LLM research.
The implication is stark:
AI will dominate low-level tasks, but humans remain superior in multi-step reasoning. Cyber defense teams will need to blend both, with AI acting as a force multiplier—not a novelty widget.
Gerasimos Marketos, HTB’s Chief Product Officer, sums it up:
“We’re validating AI in realistic operational contexts where stakes are high and human oversight remains vital.”
Inside HTB AI Range: A Training Ground for Hybrid Defense
The HTB AI Range mirrors large-scale enterprise complexity with thousands of offensive and defensive targets—all continuously updated to reflect emerging exploits, misconfigs, and threat behaviors. Participants can:
- Stress-test AI agents
- Validate model safety
- Benchmark AI and human teams against industry frameworks
- Observe how hybrid teams behave under time pressure
- Identify failure conditions, hallucinations, and edge-case vulnerabilities
Supported frameworks include:
- MITRE ATT&CK
- NIST/NICE
- OWASP Top 10
That’s unusually comprehensive for a cyber range. Traditional ranges simulate systems. HTB’s version is effectively simulating organizations, with the explicit goal of preparing defenders to work in environments where AI plays an active role—not just in detection and response, but in decision-making.
For managed security service providers (MSSPs), government agencies, and enterprises experimenting with AI-powered SOCs, this type of environment may become mandatory training infrastructure.
Industry Perspective: AI Is Rewriting the Rules of Recon and Exploitation
HTB isn’t alone in warning that AI has already shifted offensive capabilities. Early research across academia and commercial labs is showing LLMs can:
- Automate reconnaissance
- Chain exploit paths
- Identify misconfigurations
- Generate malicious payloads
- Create adaptive phishing campaigns
- Map system relationships at scale
Tasks that once required human creativity and intuition are increasingly achievable by models trained on enough interaction data.
Dawn-Marie Vaughan, Global Offering Lead for Cybersecurity at DXC, echoed this urgency:
“As AI matures, defenders must train under more dynamic, real-world conditions. That’s why I’m energized by the work Hack The Box is doing.”
The takeaway: cyber tooling is shifting from human-driven to human plus machine driven. Organizations that don’t train for hybrid defense will fall behind attackers that already do.
Training the Workforce: AI Red Teaming Goes Formal
The skills gap in AI security is enormous—and quantifiable.
In a 10-day AI Red Teaming CTF hosted by HTB and HackerOne, only 43% of registrants completed even one challenge. That’s not a marginal statistic; it’s a warning shot. As enterprises rush to integrate AI everywhere, the number of professionals capable of identifying AI-specific vulnerabilities is not growing at anywhere near the same pace.
To address this, HTB and Google partnered to build the AI Red Teamer Path, a job-role learning journey aligned with Google’s Secure AI Framework (SAIF). It teaches practitioners how to:
- Evaluate AI systems
- Probe and exploit model weaknesses
- Harden LLM-based applications and agent workflows
- Understand end-to-end AI attack chains
It’s the first curriculum of its type anchored directly to a Big Tech security framework.
And today, HTB formally announced the next step: the HTB AI Red Teamer Certification, coming Q1 2026. It will serve as the capstone credential for the job-role path and, if HTB’s track record is any indication, will likely become a widely recognized benchmark in AI security expertise.
The Bigger Picture: Cybersecurity Training Is Entering a Hybrid Era
AI won’t replace cybersecurity professionals anytime soon. But it will reshape the work:
- Simple tasks: automated
- Routine analysis: automated
- Recon: automated
- Exploit chaining: increasingly automated
- Judgment, creativity, strategy: human-driven
HTB is betting that the future defender will operate more like a pilot in a modern fighter jet—working alongside autonomous systems, delegating microtasks, making high-level calls, and keeping guardrails in place.
That model requires training environments that don’t just simulate attacks but simulate the behavior of AI agents under operational stress.
Cyber ranges built for humans alone won’t cut it.
HTB AI Range represents the next phase:
a proving ground where AI agents can fail safely, where humans can learn the limits of AI, and where organizations can test defensive strategies in a world where automated threats are table stakes.
It’s not a panic response to AI—it’s preparation.
And as Pylarinos puts it:
“This is how cybersecurity advances: not through fear, but through mastery.”












