Product Code: TRi-0095
This report analyzes the capital expenditure (capex) structure of a typical hyperscale AI data center, breaking spending down into physical infrastructure, IT computing systems, and networking equipment. Of these, investment in computing systems, i.e., servers, is the largest component. As the proportion of higher-end, higher-power AI servers in new projects continues to rise, the cost share of computing systems is expected to increase further.
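To make the notions of "cost share" and "average cost per MW" concrete, the minimal sketch below splits an assumed total capex across the three categories and normalizes by facility power. The total capex figure and the non-server shares are placeholder assumptions for illustration only; the roughly 60% server share and the 125MW facility size echo figures cited later in the table of contents.

```python
# Minimal sketch: splitting data center capex by category and normalizing per MW.
# Total capex and the non-server shares are placeholder assumptions, not report data;
# the ~60% server share and 125MW facility size mirror figures in the report outline.

facility_mw = 125          # example facility size from the report outline
total_capex_usd = 1.5e9    # assumed total capex, purely illustrative

shares = {
    "computing systems (servers)": 0.60,  # ~60% per the report outline
    "network systems": 0.15,              # assumed
    "physical infrastructure": 0.25,      # assumed
}

for category, share in shares.items():
    spend = total_capex_usd * share
    print(f"{category}: ${spend / 1e6:,.0f}M total, "
          f"${spend / facility_mw / 1e6:,.1f}M per MW")
```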
Key Highlights:
- Compute-led capex dominates overall spending.
- Networks are split into front-end and back-end fabrics for wide interconnect.
- The back-end uses a spine-leaf topology to deliver non-blocking GPU-to-GPU communication (see the sketch after this list).
- Front-end links connect servers to the core network, enabling efficient handling of general traffic.
- Industry trends show hyperscalers expanding data center infrastructure for AI.
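As a rough illustration of the non-blocking property mentioned above, the sketch below checks a leaf switch's oversubscription ratio, i.e., downlink bandwidth toward GPU servers versus uplink bandwidth toward the spine. The port counts and speeds are placeholder assumptions rather than figures from this report; a ratio of 1:1 or lower is what is usually meant by a non-blocking fabric.

```python
# Minimal sketch: leaf-switch oversubscription check for a spine-leaf back-end fabric.
# All port counts and speeds below are illustrative assumptions, not report data.

def oversubscription_ratio(downlinks: int, downlink_gbps: float,
                           uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing (downlink) to spine-facing (uplink) bandwidth.
    A value <= 1.0 means the leaf is non-blocking for traffic crossing the spine."""
    return (downlinks * downlink_gbps) / (uplinks * uplink_gbps)

if __name__ == "__main__":
    # Hypothetical back-end leaf: 32 x 400G ports to GPU servers, 16 x 800G ports to spines.
    ratio = oversubscription_ratio(downlinks=32, downlink_gbps=400,
                                   uplinks=16, uplink_gbps=800)
    print(f"Leaf oversubscription ratio: {ratio:.2f}:1")  # 1.00:1 -> non-blocking
```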
Table of Contents
1. Introduction
- Distribution of Capex for Data Centers
- CRAH Is Better Suited for Large-Scale Data Centers Since It Has Lower Energy Consumption, Thus Achieving Lower PUE
- Data Center Capex: UPS Takes Largest Share After Building Structure
2. Data Center Computing Systems: Average Cost per MW Continues to Rise
- NVIDIA Remains the Dominant Player in AI Server Shipments
3. Network Systems of Data Centers: Average Cost per MW Continues to Increase
- Back-End Networks Advance from 800G to 1.6T; Front-End Networks Evolve from 100G/400G to 400G/800G
4. 125MW Data Center Capex Continues to Rise; Average Cost per MW Increasing
- Servers Account for Roughly 60% of Capex; This Ratio Will Rise as Adoption of High-End AI Servers Increases