How twenty years of cloud adoption already showed us what is about to happen with AI
AI feels unprecedented. Organisationally, it is surprisingly familiar.
In March 2006, Amazon launched a service called S3.
Storage in the cloud. Pay-as-you-go. No hardware to provision, no data centre to manage, no upfront capital cost. Five months later, EC2 followed – virtual machines on demand, billed by the hour.
The reception was sceptical. Real businesses do not run on someone else’s servers. Compliance teams will never allow it. The latency will kill performance. It is a fad for startups and hobbyists.
Twenty years later, the cloud is invisible infrastructure. Banks run on it. Healthcare runs on it. Defence runs on it. The question is no longer “should we move to cloud” but “which workloads remain on-prem, and why?” Whole categories of business – Netflix, Airbnb, Stripe, Uber, Spotify – are global household names because the cloud was there to be built upon.
That arc has lessons. We are now in the early years of an AI arc that is repeating it, but compressed. The patterns are familiar. The timeline is shorter. And the same governance gaps that hurt organisations during cloud adoption are already opening up around AI.
How the Cloud Arc Actually Unfolded
It started with the tech giants pushing the technology and offering the simplest possible services. AWS led with storage and compute. Microsoft and Google followed. The early services were primitive by today’s standards, but they removed friction at exactly the right level.
Then a generation of cloud-native companies adopted it early and built business models that would have been impossible without it. Netflix became Netflix because cloud existed. Airbnb did not need to own data centres. Stripe could scale globally from day one. These companies did not just use cloud as cheaper hosting. They built operating models that assumed cloud existed and could not have existed in the on-prem world.
Then the platform layer emerged. PaaS and SaaS unbundled functions that organisations had previously built bespoke. Salesforce, Workday, Snowflake, Twilio, Stripe again. Capabilities you used to engineer in-house could now be consumed as services. The gap between cloud-native companies and traditional enterprises grew, because cloud-native companies could compose new capabilities faster than traditional enterprises could build them.
And then ubiquity arrived without anyone deciding. The most regulated industries, the ones that held out longest, are now cloud-by-default. Twenty years on, “should we use cloud” is a question asked only in narrow edge cases.
The Governance Story That Ran Underneath It
But that is only the surface arc. Underneath it ran a different story, and it is the one that matters most for the AI parallel.
Every wave of cloud adoption surfaced governance gaps that organisations had to retrofit painfully.
Cost was the first surprise. Engineers spinning up resources without finance oversight. The infamous unexpected bills. Companies discovering they had hundreds of unmanaged accounts, idle instances running for years, storage buckets nobody owned. FinOps as a discipline emerged years after cloud adoption began, because finance teams genuinely could not see what was being spent until it had already been spent.
Data leakage came next. Public storage buckets exposing customer data. Shadow IT moving sensitive data into SaaS tools without information security knowing. Compliance teams scrambling to retrofit governance frameworks designed for on-prem systems onto multi-cloud sprawl.
Identity and access at scale broke traditional models. Hundreds of thousands of resources, each with permissions, each potentially mis-set. The realisation that perimeter security was fundamentally broken in cloud environments. Zero trust as a discipline emerged because the old models no longer applied.
And then lock-in. Organisations realising mid-journey that they were so deeply embedded in one cloud’s services that exit cost would be measured in years rather than months. Multi-cloud strategies emerging not for technical reasons but for negotiating leverage and risk management.
In every one of these cases, the pattern was identical. Technology adoption raced ahead. Governance frameworks lagged badly. Organisations got hurt before disciplines emerged to prevent the harm. The disciplines themselves became permanent fixtures of how cloud is operated, but they had to be built mid-flight, often after expensive incidents.
Where Methodologies Came From
Alongside the governance retrofits, methodologies emerged. The Cloud Adoption Framework. The Well-Architected pillars. The Twelve-Factor App. The DevOps movement and then DevSecOps. SRE practices. Platform engineering as a discipline.
Many of these disciplines either emerged or became strategically essential during the cloud era. Organisations needed structured ways to capture what worked, codify the disciplines that prevented the worst failures, and accelerate adoption beyond the early-adopter phase.
The same thing is happening with AI right now. Methodologies are starting to emerge for how to use these technologies well – for evaluation, prompt management, agent design, intent specification, model governance, autonomy boundaries. My own Intent-Driven Development work sits within this wave. It exists because the patterns that prevent the worst failures of agentic AI need to be codified now, before organisations spend the next decade discovering them through expensive incidents.
These methodologies will compound. The organisations that adopt them early will spend less time on remediation later. The organisations that wait will find themselves in the same position that on-prem-first enterprises found themselves in around 2015, discovering that the disciplines they now need were developed by competitors years earlier.
The AI Parallel, Compressed
The AI arc is following the cloud script, but on a shorter timeline.
The sceptical reception is already familiar. Real businesses will not run on hallucinating models. Compliance teams will never allow it. Accuracy will never be sufficient. It is a fad for technologists and demo videos.
The cloud-native generation has its AI equivalent: Anthropic, OpenAI, Cursor, Perplexity, and every AI-native startup founded in the last three years. These companies are building operating models that assume agentic AI exists and could not have existed in the pre-AI world.
The platform layer is emerging. Bedrock, Azure AI Foundry, agentic platforms. The unbundling of AI capabilities into composable services has already begun. Within a few years, capabilities that organisations are currently building bespoke will be consumed as services.
And ubiquity will arrive. Probably faster than cloud’s twenty-year arc, because the global infrastructure is already in place. Cloud took twenty years partly because we had to build the underlying network, the data centres, the hyperscaler footprint. AI gets to skip that step. Anyone with an internet connection has access to capabilities that two years ago required a research lab.
AI is not just another infrastructure shift. It is the first infrastructure shift that changes the structure of human work itself.
The governance gaps are already opening. Organisations are discovering enormous AI bills from teams using API keys without controls. Token consumption spiralling on agentic workflows. The first “FinOps for AI” companies have already emerged, two years in. Employees pasting confidential information into hosted models. Proprietary code shared with external systems. Audit trails breaking when reasoning is opaque. Compliance frameworks designed for human decision-making struggling to accommodate machine reasoning. Lock-in to specific model behaviours after teams have tuned prompts and evals around them.
Same pattern. Faster timeline. Same governance lag. Same compounding cost for organisations that wait.
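What a FinOps-for-AI control might look like in practice can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical in-house wrapper around a model API; the class name and limits are illustrative, not taken from any real product.

```python
# A minimal sketch of a per-team token budget guard - the AI analogue of
# the cloud cost controls that FinOps eventually standardised.
# All names and numbers here are illustrative assumptions.

class TokenBudget:
    """Tracks token spend for a team and refuses calls past a hard cap."""

    def __init__(self, monthly_cap: int):
        self.monthly_cap = monthly_cap
        self.spent = 0

    def authorise(self, estimated_tokens: int) -> bool:
        """Return True only if the call fits within the remaining budget."""
        if self.spent + estimated_tokens > self.monthly_cap:
            return False  # surface to finance instead of silently spending
        self.spent += estimated_tokens
        return True


budget = TokenBudget(monthly_cap=1_000_000)
print(budget.authorise(400_000))   # within budget: True
print(budget.authorise(500_000))   # still within budget: True
print(budget.authorise(200_000))   # would exceed the cap: False
```

The point is not the code itself but where it sits: spend is checked before the call is made, not discovered on the invoice afterwards – exactly the shift FinOps forced on cloud spending.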
Why Human Intent Sits Above All of This
This is also where my Human Intent series comes in.
Cloud’s deeper lesson was not just technical. It was organisational. The companies that thrived through the cloud era did not just adopt the technology earlier. They restructured around it. Cloud-native companies built operating models that assumed cloud existed. Retrofit enterprises spent the next decade catching up, often spending vastly more on remediation than they would have spent on prevention.
The same is true for AI. The organisations that will thrive are the ones building structures, governance, and disciplines that assume agentic AI exists. Not retrofitting AI into existing structures, but redesigning structures around it. That is what the Human Intent series has been arguing for. The shift from managing work to governing intent. The redesign of organisations around the flow of intent rather than the coordination of human work.
And specifically, Human Intent is trying to prevent the governance failures that defined cloud’s first decade. The cost surprises, the data leakage, the lineage gaps, the audit trail breaks, the regulatory blind spots – these all came from one underlying problem. Organisations adopted technology faster than they developed the discipline to govern it.
The whole point of Intent-Driven Development at the practitioner level, and Human Intent at the organisational level, is to develop that discipline before the failures arrive. Intent specifications. Intent fidelity. Human gates. Autonomy boundaries. These are not theoretical constructs. They are the AI equivalent of FinOps, of zero trust, of cloud security postures, of data residency frameworks. Disciplines that organisations will eventually need, that the responsible early adopters are developing now, and that retrofitting later will be vastly more expensive than building in.
What This Means
The cloud arc tells us three things.
The first is that scepticism does not stop technology adoption. Whatever you think about AI today, twenty years from now it will be invisible infrastructure underpinning industries that do not yet exist.
The second is that early adopters who restructure around the technology pull ahead in ways that compound. The cloud-native companies of 2010 became the household names of 2020. The AI-native companies of 2026 will be the household names of 2030 – sooner, because the infrastructure is already there.
The third, and the one we most often forget, is that governance lag is where most organisations get hurt. The technology arrives faster than the disciplines to use it well. Organisations either invest in those disciplines early, or they pay vastly more to retrofit them after the failures.
Cloud taught organisations that infrastructure changes faster than governance.
The organisations that treated cloud as hosting caught up slowly. The organisations that treated it as a new operating model reshaped entire industries.
AI will follow the same pattern.
Frequently Asked Questions
How is AI adoption similar to cloud computing adoption?
The structural pattern is almost identical. Both started with sceptical reception from enterprise leaders. Both saw early adopters build native operating models that retrofit organisations could not easily copy. Both grew a platform layer that unbundled previously bespoke capabilities. Both exposed governance gaps – cost surprises, data leakage, lock-in, audit trail breaks – that organisations had to retrofit painfully after harm had already occurred. The key difference is timeline. Cloud took twenty years to become invisible infrastructure. AI is following the same arc on a compressed timeline because the global infrastructure is already in place.
What governance failures defined cloud computing’s first decade?
Five major governance gaps emerged painfully through the cloud era. Cost surprises drove the emergence of FinOps as a discipline, often a decade after cloud adoption began. Data leakage from misconfigured permissions and shadow IT exposed customer data and led to major breaches. Sovereignty issues caught enterprises off guard when GDPR and Schrems II required visibility into data residency that organisations had never tracked. Identity and access at scale broke perimeter security models, leading to zero trust as a discipline. And lock-in to specific cloud providers created exit costs measured in years. In every case, technology adoption raced ahead of governance, organisations got hurt before disciplines emerged, and the disciplines themselves had to be retrofitted at great cost.
What governance gaps are already opening up around AI?
The same categories are appearing in AI, but compressed in time. Cost surprises are emerging from teams using API keys without controls, leading to the first FinOps-for-AI companies appearing only two years into the agentic AI era. Data leakage is happening through employees pasting confidential information into hosted models and proprietary code being shared with external systems. Audit trails break when reasoning is opaque and decisions cannot be reconstructed. Compliance frameworks designed for human decision-making struggle to accommodate machine reasoning. And lock-in to specific model behaviours after teams have tuned prompts and evals around them is already a real risk.
How does Intent-Driven Development relate to the cloud parallel?
Intent-Driven Development sits in the same wave as the methodologies that emerged during cloud’s middle years – Cloud Adoption Frameworks, Well-Architected pillars, Twelve-Factor Apps, DevOps, SRE, platform engineering. None of these existed when AWS launched S3. They emerged because organisations needed structured ways to capture what worked, codify the disciplines that prevented the worst failures, and accelerate adoption. IDD does the same job for agentic AI, providing a framework for intent specifications, intent fidelity measurement, human gates, and autonomy boundaries. The organisations adopting these disciplines now will spend less time on remediation later, exactly as cloud-native companies did with their early adoption of FinOps, security-as-code, and platform engineering.
What is Human Intent in the context of cloud parallels?
Human Intent is the broader organisational response to AI, equivalent to how cloud-native organisations restructured around cloud rather than retrofitting it. The Human Intent series argues that organisations must shift from managing work to governing intent – redesigning structures, governance, and operating models around the reality that agentic AI exists. This directly addresses the deepest lesson of the cloud era: that the technology eventually becomes invisible infrastructure, but the organisations that thrive are the ones who restructure around it early rather than retrofit later. Human Intent codifies the disciplines that prevent the governance failures that defined cloud’s first decade, applied at the organisational level.
Why is the AI timeline shorter than the cloud timeline?
Three reasons. First, the global infrastructure already exists. Cloud took twenty years partly because the underlying networks, data centres, and hyperscaler footprints had to be built. AI gets to skip that step because anyone with an internet connection has access to capabilities that two years ago required a research lab. Second, the cloud-native generation taught us how to build natively around emerging technology. The patterns of native adoption are now well understood. Third, AI capabilities themselves are improving faster than cloud capabilities did at equivalent maturity. The compounding effect of these three factors means the AI arc that took cloud twenty years is likely to play out in five to ten.
What should organisations do now to avoid retrofitting AI governance later?
The same things that cloud-native organisations did early in the cloud era, but for AI. Develop intent specifications and clear success criteria for AI use. Build cost controls and visibility into AI spending from the start. Establish data handling policies that account for hosted models and external services. Create audit trails for autonomous decisions. Define autonomy boundaries that specify which decisions stay human and which can be delegated. Codify these disciplines as methodologies so they scale beyond individual practitioners. Most importantly, redesign organisational structures around the reality that agentic AI exists, rather than retrofitting AI into structures designed for human-only execution. The organisations that build these disciplines now will pull ahead in ways that compound over the next decade.
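The disciplines listed above can be made concrete. Below is a minimal sketch of an intent specification with an autonomy boundary and a human gate, written in the spirit of those practices; the field names and the example actions are illustrative assumptions, not a published Intent-Driven Development schema.

```python
# A minimal sketch of an intent specification with an autonomy boundary.
# Field names and actions are hypothetical, chosen for illustration only.

from dataclasses import dataclass, field


@dataclass
class IntentSpec:
    goal: str                    # what the agent is asked to achieve
    success_criteria: list[str]  # how fidelity to the intent is judged
    autonomy_boundary: set[str]  # actions the agent may take on its own
    human_gates: set[str] = field(default_factory=set)  # need approval

    def requires_human(self, action: str) -> bool:
        """Any action outside the autonomy boundary escalates to a human."""
        return action in self.human_gates or action not in self.autonomy_boundary


spec = IntentSpec(
    goal="Triage inbound support tickets",
    success_criteria=["correct routing", "no customer data leaves the tenant"],
    autonomy_boundary={"classify_ticket", "draft_reply"},
    human_gates={"issue_refund"},
)

print(spec.requires_human("classify_ticket"))  # inside the boundary: False
print(spec.requires_human("issue_refund"))     # explicit human gate: True
print(spec.requires_human("delete_account"))   # unknown action escalates: True
```

The design choice worth noting is the default: anything not explicitly inside the autonomy boundary escalates to a human, mirroring the deny-by-default posture that zero trust brought to cloud access.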




