With 95% of generative AI pilots at companies failing, the rush to integrate powerful new models is stalling out. According to a recent MIT report, the vast majority of these projects deliver little to no measurable impact on P&L, revealing a “learning gap”: generic tools fail to adapt to complex enterprise workflows.
The pressure to adopt AI in the enterprise space has created a frantic “rat race” mentality, where speed is often prioritized over strategy.
By rushing to implement generic tools without understanding the necessary structural changes, companies are setting themselves up for “graceful failures” rather than sustainable success.
Why do so many organizations mistake AI adoption speed for strategic advantage, and how does this result in missed opportunities?
Adoption speed does matter. In AI, execution and velocity are real differentiators. But the trap is reducing “strategy” to “adopt fast.” Speed is a great first imperative, not a long-term plan. The biggest winners in transformational waves aren’t the companies that merely adopt the new technology; they’re the ones who use it to create a business that couldn’t otherwise exist.
So yes: adopt AI and do it fast. Then ask the harder question: how does AI let me extend and reinvent my business, not just optimize the current one, through new services, new bundles, and new positions in the value chain?
We’re all going to use AI. The missed opportunity is using it to do what you already did yesterday, just slightly faster. Don’t confuse motion with progress.
Why does treating AI as a plug-and-play technology lead to wasted budgets and failed outcomes in complex enterprises?
Plug-and-play sounds attractive because it promises participation without pain, as if you could capture the upside of AI without getting your hands dirty. And I get why leaders want that. Nobody wants to be “bad at innovation.”
But we’re still early, and AI is moving fast. In this phase, the scarcest asset isn’t access to tools; it’s organizational learning: developing a deep, situated understanding of what AI can do in your workflows, with your data, under your constraints. If you cleanly outsource that through a generic bolt-on, you may ship something, but you’re also choosing sameness.
Wasted budgets and failed outcomes are often downstream of that: attention and resources get spent on AI peacocking (“look, we did AI”) instead of building capability and differentiation. The better approach is to build safe spaces where experimentation doesn’t need to be “safe” in outcomes, because it’s safe in scope: clear guardrails, tight data boundaries, and explicit learning goals.
Right now, it’s not about putting AI everywhere. It’s about building the judgment to know where it matters and how to outlearn the competition.
Why does true AI resilience require re-engineering operating models instead of layering intelligence onto outdated ones?
I believe the last two decades taught leaders that you can buy your way into transformation: new tools, new platforms, accelerated rollouts. Yet AI squarely breaks that pattern.
AI isn’t just another tool that helps humans do the same work faster. It’s the first technology that can do parts of the work itself. We’re moving from humans executing tasks to humans supervising systems that execute tasks. That shift changes the unit of work, the accountability model, the control points, and the feedback loops.
So “AI resilience” really is organizational robustness: whether your operating model can absorb and exploit this new kind of capability without becoming brittle. That’s why layering intelligence onto outdated workflows caps upside. The upside you get from AI is tightly coupled to your willingness to re-engineer how value gets created.
How can companies establish AI governance that enables innovation without slowing decision-making or defaulting to risk avoidance?
My biggest pet peeve is when governance is treated like a veto machine. If governance exists only to say “no,” it will reliably produce the one outcome nobody wants: risk avoidance disguised as responsibility.
Good governance should reduce fear and friction so that teams can create value without constantly worrying about the sharp edges. That means governance has to be enabling, not merely restrictive.
Practically, you need an ambidextrous company that can run at different speeds in different lanes. Like the Autobahn: some lanes are built for heavy loads and caution, others for speed. Low-risk use cases should move fast with lightweight controls. Higher-risk use cases should get deeper review, stronger evaluation, and clearer accountability. One-size-fits-all governance guarantees either stagnation or accidents.
And risk teams can be vastly more valuable when they’re consulted early as partners in the upside, not only guardians against the downside. Not taking risks is, in the end, the biggest risk of all.
In what ways does SAP Signavio help leaders move from isolated AI experiments to enterprise-wide transformation that achieves desired business outcomes?
Enterprise transformation shouldn’t be a game of chance. It should be closer to a science: observable, measurable, and repeatable.
To make that possible, you need shared truth across all parts of the business, business and IT alike. And with it, the ability to change how the business actually runs. Culture matters, but the operational core is the process layer. Do you have transparency into what’s really happening? Can you identify where value leaks and why? Can you redesign workflows and make the change stick across systems and teams?
Without that foundation, every AI experiment has to reinvent alignment, measurement, and translation from a showcase into production reality. That’s why pilots feel exciting and scaling feels impossible.
Signavio focuses on the “unsexy” but decisive work: understanding process reality, connecting it to systems, creating a shared language, and making improvement governable and measurable. It sharpens the axe so AI innovation can actually cut.
Lukas N.P. Egger is VP of Product Strategy & Innovation at SAP Signavio, where he helps global enterprises move from AI pilots to scalable transformation with measurable business impact. He specializes in de-risking ambitious AI initiatives by testing desirability, feasibility, viability, and organizational alignment, and by connecting strategy to execution through process intelligence. Lukas works closely with product, engineering, and customer teams to design AI experiences that are trusted, adoptable, and grounded in real workflows. He also hosts the Process Transformers podcast, featuring leaders at the intersection of processes and AI.