Beyond compute: Infrastructure that powers and cools AI data centers

McKinsey Direct

As AI use becomes more widespread, demand for data centers is surging in tandem. Data center demand is expected to grow at a CAGR of 22 percent to reach 220 gigawatts by 2030, nearly six times the 2020 level, driven predominantly by AI adoption and hyperscaler spending.1 Moreover, McKinsey research shows that by 2030, data centers are projected to require $6.7 trillion in cumulative capital outlays worldwide to keep pace with the demand for compute.2 A large share of that capital spending will go to the underlying systems that deliver electricity to IT equipment and to the cooling systems that remove the heat it generates.
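As a rough sanity check on those figures (assuming the roughly sixfold multiple applies to the full 2020–2030 period, which the article does not state explicitly), compound growth over a decade implies an annualized rate on the order of 20 percent, in the same range as the cited 22 percent CAGR:

\[
D_{2030} = D_{2020}\,(1+g)^{10}
\quad\Rightarrow\quad
g = \left(\frac{D_{2030}}{D_{2020}}\right)^{1/10} - 1 \approx 6^{1/10} - 1 \approx 0.20
\]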

As data center designs evolve to keep up with rising compute requirements, power, cooling, and IT components are no longer treated as separate entities; they must be codesigned and evaluated in terms of the data center’s overall performance. This opens more opportunities for stakeholders across the data center value chain to design equipment with the full architecture in mind, whether by reducing silos or by pursuing vertical integration to expand their market footprint. What’s more, time to market is now one of the most important buying criteria for data center operators. To deliver end-to-end solutions to customers, it is therefore increasingly important for stakeholders to provide services such as start-up, commissioning of power and cooling equipment, and ongoing equipment repair and maintenance.

This article presents prospective advances in power and cooling for data centers and the opportunities stakeholders have to stay ahead of the curve in this area of data center technology.3

To read the full article, download the PDF here.
