Netskope, a global leader in modern secure access service edge (SASE) and AI-native security solutions, today announced major enhancements to its Netskope One platform, extending its capabilities to address critical and emerging AI security use cases. These new features provide granular controls, visibility, and governance to protect private applications and sensitive data used within or fed into AI models.

With the exponential rise in AI adoption, spanning public generative AI (genAI), embedded AI tools, private LLMs, and autonomous agents, enterprises face a rapidly expanding attack surface. Shadow AI use has surged, often involving personal accounts on platforms like ChatGPT and Google Gemini. According to Netskope Threat Labs' 2025 Generative AI Cloud and Threat Report, data sent to genAI tools by internal users has increased 30-fold year-over-year, with 72% of interactions occurring via unmanaged personal accounts.
“Organizations need to know that the data feeding into any part of their AI ecosystem is safe throughout every phase,” said Sanjay Beri, CEO of Netskope. “Netskope One removes the mystery from AI use, enabling secure deployment at scale with full visibility and context.”
Enhancements to Netskope One Include:
Data Security Posture Management (DSPM) Upgrades
- AI Training Protection: Prevents sensitive or regulated data from being used in LLM training or RAG pipelines across SaaS, IaaS, PaaS, and on-prem environments.
- Contextual AI Risk Assessment: Combines DSPM insights with Netskope’s DLP engine to evaluate the risks of AI-driven activity by data type, origin, and sensitivity.
- Automated AI Governance: Enforces real-time policies to control how data is used in AI prompts, inference, or training—across both public and private AI tools.
Full-Stack AI Ecosystem Visibility
- Unified view of AI usage across employees, tools, applications, and agents—including both managed and unmanaged environments.
- Risk intelligence powered by Netskope’s Cloud Confidence Index (CCI), covering 370+ genAI tools and 82,000+ SaaS apps.
- Visibility into third-party integrations, model training behaviors, and AI-generated output to prevent sensitive data leakage.
Adaptive Controls to Combat Shadow AI
- Fine-grained policies based on user behavior, intent, and data classification.
- Guidance that steers users toward enterprise-approved AI tools such as Microsoft Copilot or ChatGPT Enterprise.
- Controls that block high-risk actions, including uploading, copying, printing, and pasting, within AI interfaces.
- Real-time DLP scanning of prompts and responses to block inadvertent leaks of regulated data.
Why It Matters: AI at the Crossroads of Innovation and Risk
As AI becomes foundational to digital transformation, data misuse, IP exposure, and compliance violations are becoming more common, and more costly. Enterprises must manage the dual challenge of accelerating AI innovation and maintaining zero trust security standards.
“You can’t just block AI. You need to enable it—safely, smartly, and at scale,” said Beri. “Netskope One gives organizations a secure, end-to-end foundation for AI readiness.”
Meet Netskope at RSA Conference 2025
Netskope will be demonstrating Netskope One's full capabilities, including the new AI security features, at the RSA Conference in San Francisco this week. Visit Booth #1135, Moscone South, to see live demos of:
- AI Training Data Protection
- DSPM-Driven Risk Scoring
- Real-Time Prompt & Response DLP
- Shadow AI Governance in Action