AI may be the hottest productivity tool in the office, but according to CalypsoAI’s new Insider AI Threat Report, it’s also fueling risky behavior—and shifting workplace loyalties from humans to algorithms.
The nationally representative survey of more than 1,000 U.S. office workers found that 45% trust AI more than their coworkers, while 38% would rather report to an AI manager than a human one. Over a third (34%) say they'd quit if their employer banned AI entirely.
Rules? What Rules?
Even though 87% of respondents say their employer has an AI policy, more than half (52%) admit they’d break it if AI made their job easier, and one in four have already used AI without checking if it’s allowed.
The risks are not hypothetical:
- 28% used AI to access sensitive data.
- 28% uploaded proprietary company information into AI tools.
Executives are hardly immune. Half (50%) of C-suite leaders said they’d prefer AI managers over human ones, 35% admitted to giving proprietary data to AI, and 38% confessed they don’t even know what an AI agent is.
Industry Hot Zones
The temptation to bend, or outright break, AI rules runs even higher in certain sectors:
- Finance: 60% have violated AI rules; one-third have accessed restricted data.
- Security: 42% knowingly break policy; 58% trust AI over coworkers.
- Healthcare: Only 55% follow AI policy; 27% would prefer an AI boss.
Entry-level workers are another weak point: 37% say they wouldn't feel guilty breaking AI policy, and 21% say unclear rules leave them to "just do what works."
A People Problem as Much as a Tech Problem
“These numbers should be a wake-up call,” said CalypsoAI CEO Donnchadh Casey. “We’re seeing executives racing to implement AI without fully understanding the risks, frontline employees using it unsupervised, and even trusted security professionals breaking their own rules.”
The report’s conclusion is blunt: AI security isn’t just about protecting systems—it’s about managing human behavior and the erosion of trust inside organizations.