100 MW High-Performance Computing Data Center

Status: Construction
Owner: Confidential
Delivery Method: Design-Bid-Build
Size: 550,000 SF
Construction Type: New
Awards: Multiple (Under NDA)

100 MW High-Performance Computing Data Center

A confidential client of DFWCGI had a strategic vision to develop a 100-acre site adjacent to a power substation in rural North Dakota. The site was evaluated for its topography as it relates to drainage, its adjacency to the power substation, the power utility provider's available capacity, and the power generation mix. The utility was able to guarantee 500 MW within 5 years, with 250 MW available within 18 months; the long-lead item for the utility company was the large power transformers.

After determining the viability of power within the desired timeframe, the A/E team began the site master planning process. The site was planned around a peak PUE of 1.35, with a blended annual average PUE of 1.16. At that peak PUE, the 500 MW utility capacity supports roughly 370 MW of compute (IT) load. Site planning ultimately settled on three main data center buildings, along with a campus water building, a campus generator plant, and the main utility substation.
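As a rough illustration of that sizing math (a minimal sketch using only the MW and PUE figures cited above; the function names are ours, not project tooling):

```python
# Rough sizing check: how much IT (compute) load a utility allocation supports
# at a given peak PUE, and an approximate annual energy figure from the
# blended PUE. Figures come from the narrative above.

def supportable_it_load_mw(utility_capacity_mw: float, peak_pue: float) -> float:
    """Total facility power = IT power * PUE, so IT power = capacity / PUE."""
    return utility_capacity_mw / peak_pue

def annual_energy_mwh(it_load_mw: float, blended_pue: float, hours: float = 8760.0) -> float:
    """Approximate annual facility energy using the blended (average) PUE."""
    return it_load_mw * blended_pue * hours

if __name__ == "__main__":
    utility_mw = 500.0   # guaranteed utility capacity
    peak_pue = 1.35      # design peak PUE
    blended_pue = 1.16   # blended annual average PUE

    it_mw = supportable_it_load_mw(utility_mw, peak_pue)
    print(f"Supportable compute load: {it_mw:.0f} MW")   # ~370 MW
    print(f"Approx. annual energy at full build-out: "
          f"{annual_energy_mwh(it_mw, blended_pue):,.0f} MWh")
```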

A basis of design (BOD) for the facility was then developed. The electrical system architecture uses a 4-to-make-3 (4M3) failover design, with 415 V power from the unit substations, through the UPS systems, and into bus duct in the data halls. Gas-insulated switchgear (GIS) accepts the utility feeds from the adjacent substation and distributes them to unit substations, which step the voltage down from 35 kV to 415 V.
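A 4-to-make-3 arrangement sizes four power blocks so that any three can still carry the full critical load. A minimal sketch of that capacity check follows; the block ratings and counts are illustrative, not project values:

```python
# Capacity check for an N-to-make-(N-1) failover topology such as 4M3:
# the lineup must survive the loss of any one block and still carry the
# full critical load. Ratings below are illustrative only.

def survives_single_block_loss(critical_load_mw: float, block_rating_mw: float,
                               installed_blocks: int) -> bool:
    """Return True if the remaining blocks cover the load after one block fails."""
    surviving = installed_blocks - 1
    return surviving * block_rating_mw >= critical_load_mw

if __name__ == "__main__":
    data_hall_load_mw = 25.0        # one 25 MW data hall (from the narrative)
    block_rating_mw = 25.0 / 3      # illustrative: each block sized for 1/3 of the hall
    print(survives_single_block_loss(data_hall_load_mw, block_rating_mw, 4))  # True (4M3)
    print(survives_single_block_loss(data_hall_load_mw, block_rating_mw, 3))  # False
```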

Mechanically, the design is based on the 20-year extreme design temperature, with the system able to handle 90% of peak load at the 50-year extreme temperature. Water-cooled chillers were coupled to dry coolers, allowing for closed-loop heat rejection with a system WUE of 1.0. Low PUEs are achieved by using free-cooling heat exchangers piped in parallel with the chillers; based on outside-air temperature bin data, these heat exchangers are anticipated to run more than 70% of the year. Chilled water temperatures were designed around ASHRAE W17 recommendations.
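A simplified version of that bin-data check is sketched below; the temperature bins and the free-cooling changeover threshold are placeholders for illustration, not the project's climate data:

```python
# Estimate the fraction of the year that free-cooling heat exchangers can run,
# using outside-air temperature bin data. The bins and changeover temperature
# below are placeholders, not project data.

def free_cooling_fraction(bins, changeover_temp_c: float) -> float:
    """bins: list of (bin_temperature_c, hours_per_year) pairs."""
    total_hours = sum(hours for _, hours in bins)
    fc_hours = sum(hours for temp, hours in bins if temp <= changeover_temp_c)
    return fc_hours / total_hours

if __name__ == "__main__":
    # Placeholder annual temperature bins (deg C, hours) for a cold-climate site.
    temperature_bins = [(-20, 400), (-10, 900), (0, 1800), (5, 1300),
                        (10, 1400), (15, 1100), (20, 1000), (25, 600), (30, 260)]
    fraction = free_cooling_fraction(temperature_bins, changeover_temp_c=15.0)
    print(f"Estimated free-cooling hours: {fraction:.0%}")  # ~79% with these bins
```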

From this BOD, the team has since designed the first building on the campus: a 100 MW building designed for HPC/AI workloads, with mechanical systems designed for liquid cooling. Four (4) 25 MW data halls were laid out in a stacked configuration, allowing the InfiniBand communications cabling serving each data hall to stay within its distance limitations. Ancillary functions such as the central utility plants, UPS rooms, and switchgear rooms were placed adjacent to the data halls to reduce the material costs of delivering power and cooling to the critical functions. A hazard mitigation analysis (HMA) was performed to ensure the hazards of the large quantity of lithium-ion batteries within the UPS battery rooms were appropriately addressed.
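The kind of reach check that drives a stacked layout can be sketched simply; the cable reach limit and run lengths below are assumptions for illustration only, not project figures:

```python
# Verify that the longest cable run from each data hall to its network core
# stays within an assumed InfiniBand cable reach. The reach limit and run
# lengths are assumptions for illustration, not project figures.

ASSUMED_REACH_M = 50.0  # e.g. an active optical cable budget chosen for this sketch

def runs_within_reach(longest_run_by_hall: dict, reach_m: float = ASSUMED_REACH_M) -> dict:
    """Map each hall name to True/False depending on whether its longest run fits."""
    return {hall: length <= reach_m for hall, length in longest_run_by_hall.items()}

if __name__ == "__main__":
    # Hypothetical longest runs (meters) for four stacked 25 MW halls.
    longest_runs = {"Hall 1": 42.0, "Hall 2": 38.0, "Hall 3": 45.0, "Hall 4": 41.0}
    print(runs_within_reach(longest_runs))  # all True for these assumed lengths
```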

The BMS was designed with dual-redundant PLC modules per central plant, using Schneider Electric Modicon controllers for both reliability and availability. The EPMS (electrical power monitoring system) was integrated into the SCADA platform, Ignition by Inductive Automation.
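In Ignition, EPMS points typically surface as tags that gateway scripts can read and act on. A minimal sketch using Ignition's built-in Jython scripting functions follows (it runs inside an Ignition script context, where the system module is provided; the tag paths are hypothetical):

```python
# Minimal Ignition (Jython) gateway script sketch: read a few EPMS tags and
# log any that come back with bad quality. Tag paths are hypothetical.

epms_paths = [
    "[default]EPMS/Substation/Feeder_A/kW",
    "[default]EPMS/UPS_1/Output_kW",
    "[default]EPMS/UPS_1/Battery_SOC",
]

logger = system.util.getLogger("EPMS-Check")
results = system.tag.readBlocking(epms_paths)

for path, qv in zip(epms_paths, results):
    if qv.quality.isGood():
        logger.info("%s = %s" % (path, qv.value))
    else:
        logger.warn("Bad quality on %s: %s" % (path, qv.quality))
```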

The facility was energized in February 2025 and is currently undergoing Level 3 and Level 4 commissioning of major systems. Level 5 commissioning is anticipated to start in May 2025 for the first 50 MW, with the second 50 MW beginning in Q3 2025.