AI’s capex conundrum: Growth, overbuild fears, and the road ahead

Vikram Malhotra, Senior US Real Estate Equity Research Analyst
March 21, 2025

The AI boom over the past three years has been nothing short of transformative. With the rapid success of models like ChatGPT, the U.S. government’s support for AI infrastructure development, and the introduction of autonomous agents, AI has entered the mainstream. Hyperscalers such as Amazon, Microsoft, Meta, and Google have poured billions into data centers to support the next generation of this technology.

However, recent developments have cast doubt on the sustainability of AI growth. In late January, Chinese research lab DeepSeek released its DeepSeek-R1 model, which beat the industry’s leading models on various math and reasoning benchmarks while using far less computing power.

DeepSeek’s release triggered a selloff in major AI players on Wall Street and raised questions about the need for capex, high-compute chips, power, and real estate. Shortly after, Microsoft CEO Satya Nadella suggested in a February interview that the industry was headed toward an “overbuild,” fueling fears that data center expansion could be nearing a breaking point.

Adding to the uncertainty, media reports indicated that Microsoft had walked away from multiple data center leases and letters of intent (LOIs) nationwide, sparking concerns that the expected capex surge – forecast to grow from $230 billion in FY24 to $320 billion in FY25 – may not materialize as planned. As the debate intensifies, a closer look suggests that while some strategic adjustments are underway, AI’s infrastructure buildout remains on firm footing.
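For context, the forecast figures cited above imply capex growth of roughly 39% year over year. A minimal sketch of that arithmetic (the dollar figures come from the forecast; the calculation is illustrative):

```python
# Implied year-over-year growth in hyperscaler capex,
# using the forecast figures cited above.
fy24_capex = 230  # $ billions, FY24 (from the forecast)
fy25_capex = 320  # $ billions, FY25 (from the forecast)

growth = (fy25_capex - fy24_capex) / fy24_capex
print(f"Implied capex growth: {growth:.1%}")  # ~39.1%
```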

Is the Fear Justified?

The combination of DeepSeek’s advancements and hyperscaler lease terminations has injected an element of fear into the industry. However, history suggests that these developments should be viewed as part of a broader strategic evolution rather than a sign of widespread distress.

Hyperscalers have traditionally followed an expansion strategy that mirrors Amazon’s approach to warehousing during the COVID-19 boom. Faced with an explosion in e-commerce demand, Amazon aggressively built out its logistics footprint. This strategy allowed Amazon to secure real estate, build a competitive moat, and serve consumers – all in pursuit of speedy, “last mile” delivery. But as post-pandemic demand normalized, Amazon re-evaluated its logistics network and scaled back, adjusting for efficiency.

A similar pattern may play out in the AI space. While hyperscalers rushed to acquire real estate for training workloads, a maturing industry has led to a more nuanced approach to acquisitions and leasing. Over the long term, data center real estate could theoretically fall into three categories:

  • Category A – Primarily owned, strategic assets intended for long-term use
  • Category B – A mix of owned and leased facilities, typically featuring long-term leases (around 4-5 years) that, due to their strategic location, have a high chance of renewal
  • Category C – Excess capacity, comprising around 5-15% of total infrastructure, acquired in anticipation of AI training needs but now under review for efficiency

It is these Category C assets that are most at risk of being cut. Recent shifts in leasing strategy indicate hyperscalers may be trimming this inventory, not as a retreat from AI investments, but as a strategic effort to optimize their real estate portfolios for greater efficiency.

Adapting to a Shifting Landscape

Over the past nine months, several hyperscalers have walked away from leases and LOIs – preliminary agreements made before finalizing leases – across multiple regions. While such moves are unusual, they are not without precedent. LOIs are often preferred for this very reason: they offer companies flexibility in adjusting plans before making full commitments.

Ultimately, it’s important to consider the rationale behind these decisions. These moves aren’t being driven by financial strain or a crisis of confidence in AI. Instead, macroeconomic conditions and supply chain constraints are forcing companies to be more selective about where and how they expand.

One key issue is chip availability. The extended lead time for high-performance AI chips has made it difficult to scale compute power in certain regions. In response, some hyperscalers are shifting focus to alternative markets with better access to supply chains. For example, if a company planning to deploy new servers expects to receive chips within three months but then sees its shipments delayed to twelve, it may choose to relocate to a region or country where it can secure the necessary infrastructure on time.

Looking to the Future

While AI’s infrastructure buildout isn’t ending, its composition may evolve going forward. For example, the bulk of capex spending could shift from real estate – which includes leased and owned data centers – to other categories, such as equipment for existing facilities or retrofitting for efficiency. 

Looking ahead, data centers specializing in AI inference, rather than training, could see a surge in demand. DeepSeek’s model demonstrated that training cycles could become more efficient, shifting infrastructure needs away from massive training complexes toward facilities optimized for inference workloads.

Another significant debate revolves around pricing power. With vacancy rates in the low single digits, market rent growth should remain strong, leading to higher rent spreads – the difference between expiring and new lease rates. 
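To make the rent-spread concept concrete, here is a minimal sketch using hypothetical figures (the rates below are illustrative assumptions, not data from this note):

```python
# Rent spread: the difference between the rate on an expiring lease and
# the new rate signed at current market rents. Figures are hypothetical.
expiring_rate = 100.0  # $/kW/month on the expiring lease (assumed)
new_rate = 118.0       # $/kW/month at current market rents (assumed)

spread = (new_rate - expiring_rate) / expiring_rate
print(f"Rent spread: {spread:.0%}")  # a positive spread means rents re-set higher
```

With low single-digit vacancy, the thesis in the text is that new_rate should generally exceed expiring_rate, producing positive spreads on renewal.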

However, there’s a growing concern that only a handful of hyperscalers – including Amazon, Google, Meta, and Microsoft – control most of the demand. While tight supply conditions typically drive rent increases, landlords may face limited pricing leverage when negotiating with some of the world’s largest companies. The next few years will reveal whether strong market conditions translate into actual pricing power.

Ultimately, the long-term outlook for AI infrastructure hinges on multiple unknowns. Could a superior foundational model emerge that significantly alters computing requirements? Will a breakthrough in chip technology render today’s investments obsolete? The telecom industry could provide an instructive analogy, with each generation of wireless technology (3G, 4G, 5G) requiring substantial new infrastructure while also introducing efficiencies that reshaped network investment strategies.

While AI’s next inflection point may still be 5-10 years away, data centers remain a foundational pillar of the industry in the near term. Companies will continue to refine their strategies, balancing capex efficiency with the need for cutting-edge compute infrastructure.
