As AI adoption accelerates across enterprises, organizations are grappling with a new security challenge: shadow AI. According to new research from Harness, the AI DevOps Platform™ company, 75% of security practitioners say shadow AI risks are poised to surpass those once caused by shadow IT, as AI components proliferate faster than teams can monitor them.
The State of AI-Native Application Security 2025 survey, based on responses from 500 security practitioners across the U.S., U.K., France, and Germany, highlights how rapid AI integration has outpaced traditional security measures:
- Limited visibility: 62% of respondents report they have no insight into where large language models (LLMs) are in use.
- AI sprawl outpaces control: 74% say AI sprawl will exceed API sprawl in terms of risk exposure.
- Growing threat landscape: 82% view AI-native apps as the next frontier for cybercriminals, with 63% considering them more vulnerable than traditional IT applications.
- Real incidents: Enterprises have faced LLM prompt injection (76%), vulnerable LLM code (66%), and LLM jailbreaking attacks (65%).
- Developer gaps: 62% say developers are not taking responsibility for AI security, and only 43% build with security in mind from the start.
“Shadow AI has become the new enterprise blind spot,” said Adam Arellano, Field CTO at Harness. “Traditional security tools were built for static code and predictable systems — not for adaptive, learning models that evolve daily. Security has to live across the entire software lifecycle — before, during, and after code — so teams can move fast without losing visibility or control.”
The AI Security Divide
AI-native applications are now embedded in 61% of new enterprise software projects, yet security practices lag far behind adoption. Key challenges include:
- Developer constraints: 62% lack time or expertise to secure AI-native apps effectively.
- Mismatch of speed and security: 75% report that AI applications evolve faster than security teams can keep pace.
- Collaboration breakdowns: Only 34% of developers notify security before starting AI projects, and just 53% before going live.
- Perception barriers: 74% of security leaders say developers see security as a blocker to AI innovation.
“AI has redrawn the enterprise attack surface overnight,” Arellano added. “Where teams once monitored code and APIs, they now must secure model behavior, training data, and AI-generated connections. The only way forward is for security and development to operate as one — embedding governance directly into the software delivery process.”
Building AI-Native Security Resilience
Harness recommends enterprises take immediate steps to regain control over AI components:
- Embed security from the start through shared governance between development and security teams.
- Discover and monitor all AI components as they are deployed.
- Gain real-time visibility into models, APIs, and outputs to detect anomalies.
- Test AI-native applications against AI-specific threats before production.
- Protect production applications to reduce the risk of sensitive data exposure.
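The "discover and monitor all AI components" step can begin with something as simple as scanning dependency manifests for known LLM SDKs. The sketch below is an illustrative example, not part of the Harness report; the package list and helper name are hypothetical, and a real inventory would also cover API traffic, container images, and other ecosystems.

```python
# Minimal sketch: flag AI/LLM-related dependencies in a Python
# requirements file so unsanctioned ("shadow") AI components surface
# during review. The package list is illustrative, not exhaustive.
AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers",
               "llama-index", "litellm"}

def find_ai_dependencies(requirements_text: str) -> list[str]:
    """Return AI-related package names found in a requirements file."""
    found = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Strip environment markers, then cut the name at the first
        # version or extras specifier.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
            name = name.split(sep)[0]
        if name.strip().lower() in AI_PACKAGES:
            found.append(name.strip())
    return found

reqs = """\
flask==3.0.0
openai>=1.0
# internal tooling
langchain[all]~=0.2
"""
print(find_ai_dependencies(reqs))  # ['openai', 'langchain']
```

Running a check like this in CI gives security teams a signal the moment an AI library enters a codebase, rather than after it reaches production.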
With AI-native applications multiplying rapidly, Harness warns that enterprises that ignore shadow AI face mounting security incidents and compliance risks.