- Monday Momentum
Renting the Future
Inside the $300 billion race for AI infrastructure that's rewriting the rules of tech strategy
Happy Monday!
Last week, rumors surfaced that Google, one of the world's largest infrastructure owners, might be seeking to rent NVIDIA's latest Blackwell GPUs from cloud provider CoreWeave. It's a potential signal that even the mightiest tech giants are rethinking their approach to AI infrastructure.
I spent the weekend analyzing this trend so you don't have to. Here's what you need to know.
Despite planning to spend hundreds of billions on their own data centers in 2025, tech giants like Google, Microsoft, and Meta are simultaneously renting AI infrastructure from specialized providers to meet explosive demand. This hybrid "build AND lease" strategy mirrors the early cloud computing transition but at unprecedented scale and speed.
The Meta Trend: From Building to Borrowing
For decades, the playbook for tech giants was clear: build and own your data centers. It provided control, cost advantages at scale, and competitive moats. But the AI revolution is rewriting these rules in real-time.
Even the companies with the deepest pockets – Google, Microsoft, Meta, and Amazon – are supplementing their massive build strategies with a surprising new approach: leasing and renting AI infrastructure from specialized providers. This isn’t entirely surprising, as tech companies face two massive hurdles with regard to scaling their AI infrastructure: not only is it prohibitively expensive (which is the easier problem to solve), but supply is still constrained and difficult to come by.
Shifting to a leasing model and leveraging the infrastructure of specialized providers like CoreWeave allows these companies to rapidly scale deployment without worrying about data center construction and maintenance. Investing in the “picks and shovels” of the AI race is still smart money, but those picks and shovels look slightly different when tech companies spend less time building their own infrastructure.
Pattern Recognition: Signs of the Shift
Three key patterns highlight this sea change in AI infrastructure:
Microsoft Leases from Oracle: In a move that shocked the industry, Microsoft signed a multi-year deal to rent thousands of NVIDIA GPUs from longtime rival Oracle to power Bing's AI services. Microsoft chose to pay Oracle rather than wait for its own data centers to catch up with demand.
Google Explores Renting from CoreWeave: Despite planning to spend a staggering $75 billion on data centers in 2025, Google is reportedly in talks to rent NVIDIA's next-generation Blackwell GPUs from specialized AI cloud provider CoreWeave – a company that didn't even exist in its current form a few years ago.
OpenAI Weighs Alternatives to Azure: After relying exclusively on Microsoft's infrastructure, OpenAI is now reportedly exploring "Project Stargate" – a potential $500 billion initiative with SoftBank and Oracle to create dedicated AI infrastructure outside of Azure.
The Contrarian Take: Why Build vs. Lease is the Wrong Question
The conventional wisdom suggests tech giants would always prefer to own infrastructure when possible. But what we're seeing isn't an abandonment of building; it's the emergence of a more sophisticated hybrid strategy.

The real insight is that in the AI era, infrastructure is no longer just a cost center to be optimized; it's a critical constraint on innovation speed. When faced with the choice between waiting for your own infrastructure or renting someone else's, these companies are increasingly choosing "both."
Far from weakness, this is strategic pragmatism in a market where being six months late to deploy a capability could mean ceding significant ground to competitors.
A Historic Parallel
This shift mirrors the early cloud computing era (2006-2010), when companies faced exploding web traffic and had to decide whether to build more servers or use emerging cloud services. Many chose both as a transitional strategy.
The difference today is scale and speed – the capital expenditures are measured in tens of billions per company per year, and the innovation cycles in AI are compressed to months rather than years. These increased cost and speed demands are pushing companies to embrace this hybrid approach sooner rather than later.
We’ve already seen what happens in the news cycle when tech companies with high expectations make slow progress or, worse, release half-baked AI tools that behave in surprising ways. Sinking time and effort into scaling infrastructure instead of building meaningful products only increases the likelihood of these problems.
Practical Implications
For organizations and individuals navigating this evolving landscape, several actionable insights emerge:
For Investors:
The emergence of specialized AI infrastructure providers (CoreWeave, GlobalAI) represents a genuine opportunity, not just a temporary phenomenon
Expect to see massive capital flowing into AI data center construction beyond just the major tech players – global data center investments surged 51% year-over-year to $455 billion in 2024
Watch for creative financing models like Microsoft's AI Infrastructure Partnership with BlackRock that blur the line between building and leasing
For Enterprises:
The hybrid approach taken by tech giants validates a similar strategy for enterprises of all sizes
Don't lock into a single infrastructure strategy – maintain flexibility to tap different sources of AI compute as needed
Consider the full equation: building gives control but requires expertise and upfront capital; renting offers speed and flexibility but with premium pricing
For Startups:
There's a new and growing ecosystem of AI infrastructure providers beyond the traditional cloud giants
The supply constraints creating this shift also create opportunities for those who can secure access to specialized AI hardware
Unlike previous infrastructure booms that ended in overcapacity (dot-com fiber, etc.), the AI demand curve seems to be outpacing even the most aggressive supply projections. The combined 2025 AI infrastructure spending of Meta, Microsoft, Google, and Amazon is expected to exceed $300 billion – yet they're still turning to external providers for more.
In motion,
Justin Wright
If tech giants with effectively unlimited resources are choosing hybrid infrastructure strategies, what other "build vs. buy" assumptions might be overdue for reconsideration in the AI era?

Introducing GPT-4.1 in the API (OpenAI)
AI-guided POCUS bests experts in detecting Tuberculosis (AuntMinnie)
OpenAI co-founder Ilya Sutskever’s Safe Superintelligence reportedly valued at $32B (TechCrunch)
OpenAI is building a social network (The Verge)
Introducing OpenAI o3 and o4-mini (OpenAI)