AI is revolutionizing enterprise cybersecurity, but it’s also opening dangerous new doors. According to new global research released by Trend Micro, the widespread adoption of AI tools is creating a paradox: while businesses lean on artificial intelligence to bolster their digital defenses, the same tech is expanding their attack surface in ways that few fully understand.
The study, which surveyed 2,250 cybersecurity decision-makers across 21 countries, paints a picture of rising tension. On one hand, 81% of organizations already deploy AI-driven tools in their cybersecurity stacks, using them for tasks like anomaly detection, asset discovery, and risk prioritization. Another 16% are exploring AI integration. Optimism abounds: 42% of respondents cite AI and automation as top priorities for the coming year.
But optimism is giving way to unease.
Risk Rises With Adoption
A staggering 94% of organizations believe AI will increase their overall cyber risk exposure in the next 3–5 years. And this isn’t hypothetical worry—it’s grounded in a growing list of attack vectors, from shadow IT and insecure APIs to rogue models and unmonitored endpoints.
According to Trend Micro, businesses are particularly concerned about:
- Sensitive data leaks via LLMs and poorly governed models
- Opaque AI decision-making, making incident forensics harder
- Untrusted AI tools ingesting proprietary data without clear boundaries
- New compliance pressures around AI monitoring and explainability
The fears are warranted. AI systems that aren’t built with security-by-design may introduce more vulnerabilities than they resolve—especially when threat actors begin exploiting the same tools for reconnaissance, automated attacks, and large-scale phishing.
“AI holds enormous promise for strengthening cyber defenses,” said Rachel Jin, Chief Enterprise Platform Officer at Trend Micro. “But attackers are just as eager to leverage AI for their own purposes… Security must be built into AI systems from the outset. There is simply too much at stake to treat this as an afterthought.”
The Pwn2Own Reality Check
For a sobering look at how far AI security still has to go, Trend’s latest Pwn2Own hacking contest in Berlin delivered clarity. The 2024 competition introduced an AI-specific category for the first time—and the results were jarring.
Twelve teams targeted four popular AI platforms, including NVIDIA Triton, Chroma, Redis, and NVIDIA Container Toolkit. The outcome? Seven zero-day vulnerabilities uncovered, with some tools fully compromised using just a single bug. Vendors now have 90 days to patch the holes before the flaws go public.
These are the real-world stress tests that enterprise CISOs can’t afford to ignore.
What This Means for the Enterprise
The message from Trend Micro is clear: every AI initiative must now be viewed as a security initiative. From model selection and fine-tuning to API exposure and data storage, security must be part of the design—not an add-on.
Enterprise leaders should:
- Evaluate AI frameworks with the same scrutiny applied to any critical infrastructure
- Demand transparency from vendors regarding data handling and model provenance
- Embed AI into incident response plans
- Harden all endpoints, APIs, and container environments used in AI workflows
- Adopt continuous testing, including red-teaming against deployed models
This is especially urgent as LLMs and agentic tools become fixtures in SaaS platforms, development pipelines, and customer support operations. AI will not remain a bolt-on tool—it’s becoming infrastructure. And like all infrastructure, it must be secured at every layer.
Power Tomorrow’s Intelligence — Build It with TechEdgeAI.