A Harris Poll commissioned by Collibra surveyed more than 300 U.S. technology decision‑makers from February 17 to 24, 2026. The results paint a clear picture: enterprises are moving beyond pilot projects and seeking concrete standards for AI transparency, data verification, and regulatory compliance.
Key Findings
- Spending on AI is set to rise. Eight in ten respondents say their organizations must boost AI budgets over the next twelve months to stay competitive with the likes of Big Tech.
- AI potential remains under‑leveraged. A striking 88% believe most firms are not yet extracting the full value from AI technologies.
- Transparency is non‑negotiable. An overwhelming 93% agree that companies should be required to disclose their use of AI tools and agents, a step the poll’s authors argue will improve monitoring and responsible deployment.
- Data verification is a prerequisite for trust. 90% of executives say they cannot fully trust AI‑driven insights until the underlying data has passed a formal governance review.
- Federal oversight enjoys broad support. Nine in ten respondents favor legislation that forces companies to document high‑risk AI systems, protects media publishers from unlicensed AI content use, and establishes a national cybersecurity strategy that advances “American AI.”
From Experimentation to Accountability
Felix Van de Maele, Collibra’s co‑founder and CEO, summed up the shift:
“Our latest survey shows a decisive shift from experimentation to a demand for rigorous, transparent standards. The results send a clear message that the industry is expecting a level of accountability that matches the speed of AI.”
The data suggest that enterprises are no longer comfortable treating AI as a sandbox activity. Instead, they are demanding verifiable processes that can withstand scrutiny from regulators, partners, and customers.
Governance Gaps and the “Trust Gap”
While most respondents recognize AI’s strategic importance, 40% cite data privacy, security, or regulatory compliance as the primary barriers to broader adoption. Moreover, 55% admit they sometimes have to intervene manually to correct AI outputs—a symptom of what the poll describes as a “trust gap.” The lack of confidence in AI results underscores the need for robust data governance frameworks that can certify data quality before it feeds models.
Talent Shortages Complicate the Picture
The survey also highlights a growing skills bottleneck. Sixty‑four percent of decision‑makers view it as a “red flag” when a job candidate cannot demonstrate familiarity with AI tools during an interview. Adding to the challenge, 30% say finding professionals who blend AI expertise with domain knowledge and curiosity is difficult.
“Data and AI literacy are no longer optional but rather core capabilities for any organization that wants to turn AI ambition into real impact. Without it, AI leaders and AI producers struggle to become truly risk aware and to understand the value and risks associated with AI systems,” Van de Maele added.
Regulatory Landscape: A Near‑Consensus
When asked about a national framework to govern AI, 90% of participants expressed agreement, with more than 70% labeling such a framework as “very important” or “absolutely essential.” Human oversight also emerged as a critical safeguard: 91% believe it is essential, especially in high‑impact sectors like healthcare and finance, where 84% deem it absolutely necessary.
These findings align with recent legislative efforts in the United States and Europe aimed at curbing opaque AI deployments and protecting intellectual property. The poll’s strong support for federal requirements suggests that industry pressure could accelerate policy formulation.
Methodology at a Glance
- Sample size: 313 full‑time U.S. professionals aged 21+ serving as data‑management, privacy, or AI decision‑makers (director level or higher).
- Fielding period: February 17‑24, 2026.
- Precision: ±5.7 percentage points at a 95% confidence level, based on a Bayesian credible interval.
What This Means for Enterprises
The Collibra‑sponsored poll underscores a pivotal moment for AI adoption in the corporate world. Companies that already operate mature data governance programs are likely to capitalize on the upcoming wave of AI investment, while those lagging behind may face heightened scrutiny from regulators and talent markets.
Enterprises should consider:
- Formalizing AI disclosure policies to meet emerging expectations for transparency.
- Embedding data verification steps into AI pipelines to close the trust gap.
- Investing in workforce upskilling to ensure that staff can navigate AI tools confidently.
- Monitoring legislative developments to stay ahead of potential compliance requirements.
As AI continues to mature, the pressure to balance rapid innovation with responsible stewardship will only intensify. Organizations that proactively address governance, transparency, and talent gaps will be better positioned to reap AI’s promised benefits while mitigating risk.
Media publishers and other stakeholders are urging clearer standards as the conversation moves from AI transparency to actionable policy frameworks.