I understand the unease. For decades, enterprises have invested heavily in technology stacks engineered for predictable data flows, reliable integrations, and sharp, well-defined boundaries between systems. Technology was deterministic, systematic, and comfortably controllable. Those boundaries are now dissolving.
We are entering an era in which every layer of the stack is quietly, steadily absorbing intelligence—not as a bolted-on feature or an afterthought, but as a native, inseparable part of how modern technology functions. The result is a new kind of infrastructure that can reason, adapt, and solve abstract problems on its own or in collaboration with people. What felt certain yesterday is becoming fluid today. This change is already underway, whether we are ready or not.
What I am seeing is a new mindset emerging. Leaders are no longer asking whether they should use AI. They are asking how to reorganise their technology foundations to support it.
When I talk with CIOs or CTOs, I often sketch a simple picture. We used to think about technology in layers: operations at the bottom, cloud and integration above it, then data, analytics, and the business applications on top. Most companies still work with some version of this model.
But the reality has changed. Each of these layers now has its own AI evolution curve.
There is useful data behind this transition. One recent industry survey found that 86% of enterprises believe they must modernise parts of their tech stack to deploy AI agents properly. Another study shows that most companies experimenting with AI struggle not because of models, but because their underlying architecture was not built for continuous, interconnected intelligence.
This matches what I see every week. Companies run successful pilots, but the moment they try to scale, the stack starts to resist. Latency issues appear. Costs outpace the value delivered. Data pipelines cannot support the workload. Integrations break because they were never designed for bidirectional, real-time logic. Business systems cannot keep up with the demand for context-dependent actions.
It is not a model problem. It is a problem of architectural balance.
A good starting point for leadership teams is not to ask “Where should we use AI?” but “How should our technology stack behave when AI becomes a built-in part of everything?”
This shift in thinking is important. It forces organisations to look at balance.
The companies that progress fastest are the ones that view their stack as a living system rather than a fixed structure. They accept that AI will not arrive evenly. Some layers will mature earlier. Some will need to be protected longer. And that balance will keep evolving.
Based on the conversations I am having with customers, and the work our teams are doing across industries, I believe a few convictions will shape the next phase.
As a CEO, I feel a responsibility to help leaders bring clarity to something that can easily feel overwhelming. My goal is not to sell complexity, but to simplify what matters. I often say that AI does not replace your architecture, but it changes the meaning of it. It forces us to think about flow, context, trust and outcomes in a new way.
And it asks each company to define how much intelligence it wants embedded in each layer, and how much it is comfortable delegating to systems, models and agents, both now and as AI matures.
No organisation should navigate this transformation alone. But the work cannot start with tools or trends. It has to start with understanding your stack, your people, your data and your ambition. When those are aligned, AI becomes an accelerator rather than a stress factor.
This is the conversation I look forward to having with more leaders in the months ahead. It is clear that the future architecture will be more intelligent, more connected and more dynamic than anything we have built before. The opportunity is enormous. The challenge is to approach it with clarity and trust.
And that is where I believe we can make a real difference.