Enterprises are racing to deploy AI, but most are doing it with their data wide open. Confident Security wants to change that. Today, the company announced OpenPCC, which it describes as the first open-source privacy standard that lets organizations use large language models (LLMs) without leaking sensitive information, a milestone that could redefine how businesses adopt AI safely.
Built by engineers from Databricks and Apple, OpenPCC (short for Open Private Cloud Compute) acts as a secure middleware layer between enterprise systems and AI models. Whether models run in the cloud or on-premises, the protocol ensures prompts, outputs, and logs remain fully encrypted and inaccessible, even to the AI providers themselves.
The AI Privacy Crisis
The timing couldn’t be sharper. AI adoption has exploded across sectors, but privacy protections lag far behind. Studies show:
- 98% of companies rely on third-party vendors that have suffered data breaches.
- 78% of employees have pasted internal data into AI tools.
- 1 in 5 of those cases involves personally identifiable or regulated data (PII, PHI, or PCI).
The result? Enterprise leaders caught between innovation and compliance. “Companies are being pushed to adopt AI faster than they can secure it,” said Jonathan Mortensen, founder and CEO of Confident Security. “Most tools ask you to trust that data is safe. OpenPCC proves it.”
Inside OpenPCC: Encryption Without Friction
At its core, OpenPCC standardizes secure AI communication, much as SSL/TLS did for the web. It enables encrypted streaming between clients and models, ensuring sensitive data never leaves enterprise boundaries in readable form.
The release includes:
- OpenPCC Specification and SDKs (Apache 2.0): a universal framework for secure AI usage across providers.
- OpenPCC-Compliant Inference Server (FSL): a production-ready reference implementation showing how to deploy privacy-verified AI.
- Core Privacy Libraries:
  - Two-Way, for end-to-end encrypted client–AI streaming (illustrated conceptually in the sketch after this list),
  - go-nvtrust, for GPU-level attestation,
  - Go implementations of Binary HTTP (BHTTP) and Oblivious HTTP (OHTTP) for fully private client–model communication.
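To make the encryption claim concrete, here is a minimal conceptual sketch of sealing a streamed prompt chunk on the client so that only ciphertext ever crosses the network. This is not the Two-Way API; it uses Go's standard-library AES-GCM, and the session key, which OpenPCC would tie to an attested key exchange with the inference server, is simply generated locally for illustration.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealChunk encrypts one streaming chunk with AES-256-GCM. In an
// OpenPCC-style channel the key would come from a key exchange with an
// attested inference server; here it is a local stand-in.
func sealChunk(key, plaintext []byte) (nonce, ciphertext []byte, err error) {
	block, err := aes.NewCipher(key) // a 32-byte key selects AES-256
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce = make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// Seal authenticates as well as encrypts, so tampering in transit is detectable.
	return nonce, gcm.Seal(nil, nonce, plaintext, nil), nil
}

func main() {
	key := make([]byte, 32) // stand-in for an attested session key
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}
	nonce, ct, err := sealChunk(key, []byte("summarize this internal memo"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("nonce=%x\nciphertext=%x\n", nonce, ct) // only these bytes leave the client
}
```

The design point is that encryption happens before any network boundary: neither a cloud relay nor the model host ever holds the plaintext and the key together.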
This means companies can plug OpenPCC into existing LLM workflows with minimal code changes, an unusually practical approach in a field often dominated by theoretical security promises.
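As a hedged illustration of what "minimal code changes" could mean, the sketch below shows one plausible integration shape, not the SDK's actual interface: a drop-in http.RoundTripper (the hypothetical privateTransport type) that encapsulates each outbound request, for example as an Oblivious HTTP message, while application code keeps using an ordinary *http.Client.

```go
package main

import (
	"log"
	"net/http"
)

// privateTransport is a hypothetical stand-in for an OpenPCC-style
// drop-in transport: application code keeps its plain *http.Client,
// and the transport layer handles encapsulation underneath.
type privateTransport struct {
	next http.RoundTripper
}

func (t *privateTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// A real implementation would encrypt and relay the request here,
	// e.g. wrapping the body as an Oblivious HTTP (OHTTP) message so the
	// model host never sees client identity and plaintext together.
	log.Printf("encapsulating request to %s", req.URL.Host)
	return t.next.RoundTrip(req)
}

func main() {
	// The one-line change to an existing LLM integration: swap the transport.
	client := &http.Client{Transport: &privateTransport{next: http.DefaultTransport}}

	resp, err := client.Get("https://example.com/v1/models") // placeholder endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```

If the real SDK follows this pattern, existing request and streaming code would not need to change at all; only client construction would.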
Open Source, Open Governance
OpenPCC’s open-source release under the Apache 2.0 and FSL licenses signals an intentional break from vendor lock-in. Confident Security is also creating an independent foundation to steward the standard—ensuring it remains neutral, transparent, and free from future license changes.
“What makes OpenPCC different is that it was built by engineers who understand both innovation and security,” said Aditya Agarwal, General Partner at South Park Commons and former CTO at Dropbox. “By open-sourcing the framework and committing to independent governance, Confident Security is giving enterprises a standard they can finally trust to run AI safely.”
Beyond Compliance: Toward a Private AI Ecosystem
The ambition behind OpenPCC echoes an earlier era of web transformation. Just as SSL made secure online transactions universal, OpenPCC aims to make AI privacy infrastructure foundational—a default layer for every enterprise model deployment.
Funded by a $5 million seed round from Decibel, Ex/Ante, South Park Commons, Halcyon, and SAIF, Confident Security’s roadmap points toward a future where organizations no longer have to choose between innovation and data safety.
As AI’s next wave brings autonomous agents, multi-modal prompts, and cross-cloud orchestration, frameworks like OpenPCC could be the backbone that makes enterprise-scale AI both powerful and private.
In Mortensen’s words: “Privacy will define which companies earn trust and lead the market. OpenPCC is our blueprint for that future.”