Enterprise adoption of generative AI is accelerating fast—maybe too fast.
In its latest Cloud and Threat Report, Netskope reveals a 50% jump in the use of generative AI (genAI) platforms by enterprise end-users in the three months ending May 2025. The kicker? More than half of that usage falls into the “shadow AI” category—unsanctioned tools that slip past IT and security controls.
The trend highlights a growing duality in enterprise AI adoption: companies are eager to embrace the productivity and innovation that genAI offers, but many are struggling to maintain visibility and control as employees race ahead, spinning up apps and AI agents on their own.
GenAI Platforms: A New Favorite Toy—With Teeth
The fastest-growing class of shadow AI tools? GenAI platforms like Microsoft Azure OpenAI, Amazon Bedrock, and Google Vertex AI. Their popularity is no surprise: they offer a plug-and-play foundation for building custom AI apps and agents with minimal friction.
Netskope’s data shows that by May 2025, 41% of organizations were already using at least one genAI platform. Usage of these tools spiked 50%, while related network traffic surged 73%. Microsoft leads the pack (29% adoption), followed by Amazon (22%) and Google (7.2%).
These platforms often connect directly to enterprise data stores—a boon for productivity, but a nightmare for data protection if left unchecked. That’s why data loss prevention (DLP) strategies and real-time monitoring are rapidly becoming non-negotiable.
“Security teams don’t want to hamper innovation,” says Ray Canzanese, Director of Netskope Threat Labs. “But the rise of genAI agents means organizations must overhaul app controls and rethink their DLP policies to include real-time user coaching.”
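What would a pattern-based DLP check on outbound genAI prompts look like in practice? A minimal sketch follows; the rule names, regexes, and `scan_prompt` function are illustrative assumptions for this article, not Netskope's implementation or any vendor's rule set.

```python
import re

# Illustrative patterns a DLP rule set might flag in outbound genAI prompts.
# These regexes are simplified examples, not a production rule set.
DLP_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of DLP rules a prompt would trip."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(prompt)]

# A prompt pasting a credential gets flagged before it leaves the network;
# real-time coaching could then warn the user instead of silently blocking.
hits = scan_prompt("Debug this: AKIAIOSFODNN7EXAMPLE fails to auth")
print(hits)  # ['aws_access_key']
```

Flagging rather than hard-blocking is what enables the "user coaching" approach Canzanese describes: the user sees why the prompt was stopped and can redact and retry.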
Local AI Gets Real: On-Prem Agents and LLM Interfaces
While SaaS-based AI use grows, on-premises AI development is quietly gaining ground. Netskope reports that 34% of enterprises are now using local large language model (LLM) interfaces, with Ollama leading the way. Others like LM Studio and Ramalama are just starting to gain traction.
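Part of what makes these local interfaces so easy to adopt is that they expose a plain HTTP API on the developer's own machine. A minimal sketch of querying Ollama's default local endpoint, assuming Ollama is running and a model has been pulled (the model name here is an assumption; substitute whatever is installed locally):

```python
import json
import urllib.request

# Ollama's local server listens on port 11434 by default and exposes
# /api/generate for one-shot completions. The model name is an
# assumption -- use whatever model you have pulled locally.
payload = {
    "model": "llama3",
    "prompt": "Summarize our Q2 roadmap in one sentence.",
    "stream": False,
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment to run against a live Ollama instance:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["response"])
```

Because this traffic stays on localhost, it is invisible to network-level controls, which is exactly why on-prem AI use is harder to inventory than SaaS use.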
Meanwhile, Hugging Face—often used for downloading pre-trained models and datasets—has been accessed in 67% of organizations, showing how DIY AI is becoming more accessible than ever.
And it’s not just dabbling. Nearly 6% of organizations now run homegrown AI agents using frameworks deployed on-prem, while 39% are actively using GitHub Copilot. These agents aren’t just passively sitting in code editors—they’re pulling in SaaS data via APIs. In fact, two-thirds of enterprises are calling api.openai.com, and 13% are hitting api.anthropic.com, mostly outside the browser.
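The "outside the browser" distinction matters: a homegrown agent typically calls api.openai.com straight from a script or service, so the traffic shows up in network telemetry rather than web-app logs. A hedged sketch of what that request looks like, following OpenAI's chat completions endpoint (the model name and prompt are illustrative):

```python
import json
import os
import urllib.request

# Script-to-API traffic like this never touches a browser -- it is the
# pattern Netskope observes when agents pull SaaS data directly.
# Endpoint and payload shape follow OpenAI's chat completions API;
# the model name is illustrative.
req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Triage this support ticket."}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    },
)

# Uncomment with a valid API key to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```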
The implication? Local AI development isn’t just alive—it’s accelerating, and it’s plugging itself straight into the cloud, often under the radar.
SaaS AI Use Matures—and Consolidates
Netskope now tracks more than 1,550 genAI SaaS apps, up from just 317 in February. Organizations are using about 15 different genAI apps on average, up from 13 earlier this year, with data uploads nudging from 7.7GB to 8.2GB per month.
There’s growing preference for integrated productivity tools like Microsoft Copilot and Google Gemini, as companies shift from general-purpose chatbots to context-aware solutions tied into their existing workflows.
ChatGPT, once the default enterprise genAI app, just saw its first dip in adoption since 2023. Other apps—Anthropic Claude, Perplexity AI, Grammarly, Gamma—are climbing in use, suggesting that companies are getting savvier about selecting tools that are specialized, secure, or both.
Elon Musk’s Grok also made its way into the top 10 most-used apps for the first time—despite still appearing on the most-blocked list. Interestingly, its block rate is dropping as security teams fine-tune access controls rather than issue blanket bans.
Shadow AI: Still a CISO's Worst Headache
With employees building apps, deploying agents, and experimenting across sanctioned and unsanctioned environments, the definition of “shadow IT” is evolving fast—and CISOs are playing catch-up.
Netskope recommends a four-pronged response for enterprises trying to keep pace:
- Inventory the AI landscape: Understand who’s using what, where, and how.
- Control the chaos: Enforce approved app lists and enable real-time user feedback, not just blocklists.
- Secure on-prem deployments: Apply LLM-specific security frameworks like OWASP Top 10 for AI apps.
- Monitor AI agents: Pay special attention to agentic AI, especially those pulling data from external APIs or SaaS tools.
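The inventory and monitoring steps above can start from something as simple as counting egress requests to known AI API domains. A minimal sketch, using the two endpoints the report calls out; the log format, domain set, and `inventory_ai_traffic` function are illustrative assumptions, and a real deployment would extend the domain list from proxy intelligence:

```python
from collections import Counter
from urllib.parse import urlparse

# Domains the report calls out as common agent traffic; a security team
# would extend this set from its own proxy and DNS intelligence.
AI_API_DOMAINS = {"api.openai.com", "api.anthropic.com"}

def inventory_ai_traffic(log_urls: list[str]) -> Counter:
    """Count egress requests per known AI API domain (illustrative log format)."""
    hits = Counter()
    for url in log_urls:
        host = urlparse(url).hostname
        if host in AI_API_DOMAINS:
            hits[host] += 1
    return hits

sample_log = [
    "https://api.openai.com/v1/chat/completions",
    "https://example.com/index.html",
    "https://api.anthropic.com/v1/messages",
    "https://api.openai.com/v1/embeddings",
]
print(inventory_ai_traffic(sample_log))
# Counter({'api.openai.com': 2, 'api.anthropic.com': 1})
```

Even this crude tally answers the first question in Netskope's list: who is using what, and how much.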
The future of AI in the enterprise isn’t just about adoption—it’s about governance. With AI agents and shadow platforms on the rise, security teams must become partners in innovation, not roadblocks.
Those who fail to adapt may find that their AI strategy is being written by the very tools they thought they were controlling.