If you’ve spent any time in executive conversations about AI, you’ll recognise the moment. Someone leans back, pauses, and asks a question that’s equal parts curiosity and unease: Why does this feel harder than it should?
Many executives sense a growing gap between the enormous promises coming from AI labs and the competitive advantage and tangible return on investment their organisations expect to realise from AI initiatives. Look closer, though, and the issue is neither a lack of intelligence in the models nor a shortfall in AI’s potential. Two assumptions simply turned out to be wrong:
- First, plug-and-play AI tools do not create a meaningful competitive advantage. Subscription-based AI can boost individual productivity, but tools that are available to everyone cannot, by definition, produce the differentiated business value that constitutes a competitive advantage.
- Second, most organisations overestimated their readiness. As AI initiatives move beyond experimentation, they reveal gaps in evaluation, integration, governance, and operational maturity that only surface in real-world use.
Closing these gaps requires organisational muscles: the capabilities that separate organisations that succeed with AI from those that stall. They are what it takes to build reliable, production-ready AI systems that achieve user adoption, scale with confidence, and deliver tangible business value. These capabilities are built through a discipline in which pilots, experiments, and even unshipped prototypes are treated as learning mechanisms rather than failures.
So, how are these capabilities built in practice? And which capabilities truly differentiate organisations that succeed with AI?
These are the questions we set out to answer in this blog post.
Where Tangible AI Value Actually Comes From
Tangible AI value does not come from adopting generic tools faster than competitors. It comes from building AI systems that are specialised, embedded, and operationally grounded.
Real value emerges when AI is shaped around proprietary data, integrated directly into existing workflows, and aligned with real decision-making responsibilities. In these conditions, AI stops being a separate initiative and becomes part of how the organisation operates.
This explains why many AI pilots look promising but fail to compound. Generic tools are designed to work reasonably well across many contexts. Specialised systems are designed to work extremely well within one organisational reality.
The hard part is not proving that a model can generate plausible outputs. It is building systems that deliver outputs that are timely, reliable, auditable, and actionable under real-world constraints. That transition is where most initiatives stall and where most value is created.
Sustained AI advantage rarely lives in the model itself. It emerges from how data, systems, and human decision-making are woven together over time. And that requires organisational capability, not just better technology.
AI as a Capability-Building Discipline
Organisations that succeed with AI treat it as a capability-building discipline rather than a sequence of delivery projects. They understand that readiness is not something you assess once and then tick off. It is something you build incrementally, through repeated exposure to real-world constraints.
This is why pilots and experiments play such a central role in organisations that progress. Not because every pilot is expected to ship, but because each one strengthens the organisation’s ability to evaluate, integrate, govern, and operate AI systems in practice.
In applied AI, friction is not a sign of failure. It is a signal. Architecture that breaks under load, data that turns out to be unusable, models that behave differently in production than in demos, or teams that struggle with ownership and accountability all reveal where organisational muscles are still underdeveloped. These insights are not side effects of experimentation. They are the primary output.
Organisations that stall often interpret this friction as evidence that AI is not ready. Organisations that win interpret it as a signal of where to invest next. Over time, this difference in interpretation leads to a widening gap in capability, confidence, and outcomes.
Treating AI as a capability-building discipline allows learning to compound. Each initiative increases the organisation’s ability to deliver the next one more reliably. That is how experimentation turns into execution, and how promise turns into sustained value.
The Organisational Muscles Required to Win With AI
While every organisation’s context is different, a consistent set of organisational muscles appears across those that succeed with AI. These capabilities determine whether AI initiatives stall at the pilot stage or mature into production-ready systems that deliver real business impact.
In a subsequent essay, I will explore what that capability-building journey looks like in practice. Winning with Generative AI, it turns out, depends on developing a small set of critical muscles:
- Multi-layer AI fluency, so teams can reason clearly at the narrative, systems, and model-reality levels and see through hype to real constraints and trade-offs.
- Business-grounded use cases, anchored in concrete outcomes rather than speculative potential.
- Robust data and systems foundations, capable of surviving integration, scale, and operational reality.
- Rigorous evaluation practices, moving beyond demos and intuition to systematic assessment of quality, reliability, risk, and impact—before and after deployment.
- Governance and internal talent development, embedding responsibility, trust, and continuous learning into how AI is built and used.
Taken together, these capabilities don’t eliminate failure. They change its role. They allow organisations to learn faster, decide more confidently, scale more reliably—and turn experimentation into sustained advantage.
Conclusion
Winning with AI is not primarily a technology challenge. It is an organisational one.
The gap between AI’s promise and its realised value persists because many organisations approached AI as something to implement rather than something to learn how to do well. In practice, applied AI behaves like a discipline: one that rewards depth, compounds learning, and exposes superficial understanding quickly.
This is not a theoretical claim. It is something we have seen first-hand. In our work with large organisations such as Argenta, Baloise, and Tomra, we have had the opportunity to embed our engineers, methods, and evaluation frameworks directly into their day-to-day operations, not as an external delivery engine, but as a capability transfer mechanism. What followed was not a sudden absence of friction or failure but a visible shift in how teams reasoned about AI, how they evaluated trade-offs, and how quickly they were able to turn ideas into systems that actually held up in production. Roadmaps accelerated not because the technology improved overnight, but because organisational understanding did.
Tangible AI value does not come from better tools alone. It comes from building the organisational muscles required to design, evaluate, integrate, and operate AI systems under real-world conditions.
Organisations that invest in these capabilities do not avoid failure. They absorb it, learn from it, and move forward with greater confidence. Over time, that ability becomes a competitive advantage in its own right.
That is what it means to build the right organisational muscles to win with AI.