Prepared 4 May 2026/Version 0.2 Draft/Confidential
Zone 04 · Build · Cluster A
Data centre design
For Jared Carl and Andria Zou
Liquid-cooled from day one. Sized for next-gen hardware envelopes (B300, Vera Rubin path). Architectural shell with civic intent. The 1.2 MW canonical pod is the unit of replication and the published reference we co-develop.
Joint owners
Jared Carl + Andria Zou
Engineering + sustainability
MicroLink lead
Sancha Olivier
Co-founder · design
Architectural partner
Jacobs Engineering
Tier-one engineering firm
Reference unit
1.2 MW canonical pod
576 GPUs · NVL72 baseline
Working session
5 May 2026
30 min · Teams
04
The Thesis
One pod, replicated. Liquid-cooled from day one, sized for the hardware envelopes of 2028, with an architectural shell designed to integrate with industrial host sites. The canonical 1.2 MW pod is the unit of replication. Engineering substance and civic intent in the same drawing.
01
Per-rack density
142 → 300+ kW
NVL72 today. B300 and Vera Rubin path. Pod envelope sized to absorb both.
Roadmap · Indicative
02
Canonical pod
1.2 MW
576 GPUs · 8 NVL72 racks · the unit of replication. San José runs 8.
Locked · Reference
03
Cooling topology
Liquid only
No air-cooled fallback path. Architecturally pure. Heat recovery only works this way.
Day one · Locked
04
Civic shell
Sub-12 months
Architectural shell designed for industrial host integration. Planning permission on a civic timeline.
The 1.2 MW pod is the unit of replication. 576 GPUs across 8 NVL72 racks, Quantum-X800 plus AC Scalable Unit switching, liquid-cooled at 142 kW per rack today, sized for what comes next.
Design once. Replicate everywhere. Pod count scales with host loop capacity. Pod design does not.
The pod is the architectural primitive. Inside the pod: 8 NVL72 racks, each holding 72 GPUs, totaling 576 per pod. Quantum-X800 InfiniBand handles compute fabric, with the AC Scalable Unit providing the canonical switching topology that NVIDIA's reference architectures already document. Liquid-cooled from the manifold to the cold plates. CDU bank servicing the 8 racks at the cold side, plate heat exchanger crossing to the host process loop at the warm side.
San José runs 8 pods at 1.2 MW each, totaling 9.6 MW of IT plus 1.6 MW of MEP overhead and auxiliary loads. Site total: 11.2 MW. Other host sites take 1, 2, or 4 pods depending on the host's primary loop capacity and the available plot. The pod design does not change. Only the count. That is the property that makes the design exportable as a published reference.
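The replication arithmetic above can be sketched in a few lines of Python. The per-pod MEP overhead of 0.2 MW is an assumption inferred from the San José figures (1.6 MW across 8 pods); actual overhead will depend on the site.

```python
# Sketch of the pod-replication arithmetic. Figures from the reference:
# 1.2 MW IT per pod, 8 NVL72 racks x 72 GPUs = 576 GPUs per pod.
# MEP overhead per pod is an ASSUMPTION (1.6 MW / 8 pods at San Jose).
POD_IT_MW = 1.2
GPUS_PER_POD = 8 * 72          # 576
MEP_OVERHEAD_MW_PER_POD = 0.2  # assumed to scale linearly with pod count

def site_envelope(pod_count: int) -> dict:
    """Return indicative site-level totals for a given pod count."""
    it_mw = pod_count * POD_IT_MW
    mep_mw = pod_count * MEP_OVERHEAD_MW_PER_POD
    return {
        "gpus": pod_count * GPUS_PER_POD,
        "it_mw": round(it_mw, 2),
        "site_mw": round(it_mw + mep_mw, 2),
    }

# San Jose: 8 pods -> 4,608 GPUs, 9.6 MW IT, 11.2 MW site total.
print(site_envelope(8))
```

Only `pod_count` varies per site; everything inside the function is the locked pod design.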
Figure 01 · The 1.2 MW canonical pod · plan view
576 GPUs. One pod. The unit of replication.
Plan view of the 1.2 MW pod. 8 NVL72 racks in two rows, CDU bank along one long side, electrical and host-loop interface along the other. Indicative dimensions.
Confidence · medium-high · subject to engineering review
Source · MicroLink reference architecture v0.4 · Jacobs Engineering schematic
Method · Plan view · indicative dimensions · subject to host site survey
GPU count
576 / pod
8 NVL72 racks at 72 GPUs each.
IT envelope
1.2 MW
Per pod. PUE 1.12. ERE 0.27.
Footprint
~110 m²
22 m × 5 m indicative. Sized to industrial pad.
Heat to host
~1.0 MW
Per pod, recoverable. PHE-coupled secondary loop.
Sized for what comes next, not for what shipped last quarter
NVL72 at 142 kW per rack is today's baseline. B300 and Vera Rubin step density up substantially. The pod envelope is sized to absorb both without rework.
The pod is built to the envelope, not to today's rack. Two generations of headroom, by design.
NVL72 is the production baseline at 142 kW per rack. The path forward steps density up: B300 (Blackwell Ultra) lands in the 200 to 260 kW per rack range per public roadmap signals, and the Vera Rubin generation is expected to push into 300+ kW per rack. Each generation tightens the demand on liquid cooling: higher heat flux, higher coolant flow rates, more aggressive thermal management.
The canonical pod is designed against the Vera Rubin envelope, not against NVL72. CDU sizing, manifold capacity, plate heat exchanger duty, electrical distribution, all sized for the highest-density generation on the published path. NVL72 today fits comfortably inside that envelope. When B300 lands, the pod absorbs it without architectural rework. When Vera Rubin lands, the pod still fits.
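One way to read "built to the envelope" is that the per-rack infrastructure (CDU duty, manifold flow, electrical distribution) is sized for the top of the published density path while the 1.2 MW pod IT envelope stays fixed, so the rack count per pod flexes across generations. The sketch below uses that reading; the density figures are indicative public-roadmap signals, not committed specs.

```python
# Indicative per-rack density path (kW), per public roadmap signals.
# The B300 and Vera Rubin figures are ASSUMED upper-range values from
# the text, not committed specifications.
DENSITY_KW = {
    "NVL72 (today)": 142,
    "B300 (Blackwell Ultra)": 260,  # top of the 200-260 kW range
    "Vera Rubin": 300,              # 300+ kW expected
}

POD_IT_KW = 1200  # the locked 1.2 MW pod IT envelope

def racks_in_envelope(rack_kw: int) -> int:
    """How many racks of a given density fit inside one pod envelope."""
    return POD_IT_KW // rack_kw

for gen, kw in DENSITY_KW.items():
    print(f"{gen}: {racks_in_envelope(kw)} racks/pod at {kw} kW/rack")
```

Under this reading, the 8-rack layout is the NVL72-era fill of an envelope that later generations occupy with fewer, denser racks and no architectural rework.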
This is the only way to deploy a published reference that has a useful shelf life. A pod sized for last quarter's hardware is obsolete the moment the deployment commissions. Jared's team's roadmap signals are the design input. The envelope is the contract.
Figure 02 · Per-rack density envelope · across generations
The envelope. Sized for 2028, fits 2026.
Per-rack density step-up across NVIDIA generations on the published path. Pod envelope shown as horizontal band. Indicative ranges, subject to NVIDIA roadmap confirmation.
Confidence · medium · indicative · per public roadmap signals
Source · NVIDIA published roadmap signals · MicroLink reference architecture v0.4
Method · Density step-up · indicative ranges · subject to roadmap confirmation
§
The pod is the published reference, not a snapshot of one product
A reference architecture sized to the latest shipping product is obsolete on day one. The canonical pod is sized to the third generation on the published path. NVL72 today fits comfortably. B300 absorbs cleanly. Vera Rubin still sits inside the envelope. That is what makes the design exportable as a published reference with a useful shelf life.
No air-cooled fallback. No CRAC units. The pod is liquid-coupled to the host process loop from commissioning day one. This is the principle that makes everything else work.
Most liquid-cooled data centre deployments today are retrofit hybrids: they start as air-cooled facilities with CRAC units and hot/cold aisle containment, then add liquid cooling at the rack level for the highest-density compute. This carries the cost of both systems. The air-side MEP, raised floors, and air handler zones never go away.
The canonical pod is liquid-only from day one. No CRAC. No raised floor for air distribution. No hot/cold aisle containment. No air handler MEP zone. The cooling path is rack manifold to CDU to PHE to host process loop. Residual air load (cold-aisle ambient, not chip cooling) is handled by rear-door HX inside the racks themselves. Architectural simplicity that compounds: less space, less capex, less embodied carbon, and most importantly, useful heat at the boundary.
Air-cooled plant simply cannot deliver heat at temperatures useful to a host process. A WWTP digester wants 37 °C continuous, sourced from a 45 °C secondary loop; air-side exhaust cannot reach that through a downstream heat exchanger. The whole closed-loop architecture depends on liquid being the cooling medium from the silicon outward.
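The temperature argument above can be made concrete with a minimal approach-temperature check. The 45 °C secondary loop and 37 °C digester demand are from the text; the ~35 °C air-side exhaust figure and the 5 K minimum approach are illustrative assumptions.

```python
# Minimal approach-temperature check: can a cooling medium at source_c
# serve a host process demanding sink_c through a heat exchanger?
# A heat exchanger needs the source to sit ABOVE the sink by at least a
# minimum approach delta-T, or no useful heat crosses the boundary.

MIN_APPROACH_K = 5.0  # ASSUMED plate-HX approach temperature

def can_serve(source_c: float, sink_c: float) -> bool:
    return source_c - sink_c >= MIN_APPROACH_K

LIQUID_SECONDARY_C = 45.0  # secondary loop temperature (from the text)
AIR_EXHAUST_C = 35.0       # typical air-side exhaust, ASSUMED
DIGESTER_DEMAND_C = 37.0   # WWTP digester continuous demand (from the text)

print(can_serve(LIQUID_SECONDARY_C, DIGESTER_DEMAND_C))  # liquid path works
print(can_serve(AIR_EXHAUST_C, DIGESTER_DEMAND_C))       # air path cannot
```

The air path fails not on heat quantity but on heat quality: the exhaust sits below the demand temperature, so no passive exchanger can bridge the gap.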
§
For Andria: the embodied carbon argument
Liquid-only construction removes the air-side MEP entirely. No CRAC manufacturing footprint. No raised floor. No air handler ducting. The embodied carbon delta versus a retrofit-path data centre is meaningful and disclosable. Combined with the heat recovery story, the pod's lifecycle carbon position is structurally different from any conventional reference design.
The pod inside is the same canonical 1.2 MW unit. The shell outside is designed to integrate with the host site's existing architectural language. This is what unlocks the deployment timeline.
A data centre that imposes on its host site fights its way through planning. A data centre that belongs gets entitlements on a civic timeline.
The architectural shell is decoupled from the pod design. Pod is universal. Shell is contextual. At a wastewater treatment plant, the shell uses the industrial vocabulary of the plant: perforated metal cladding, exposed structural steel, civic-grade concrete plinth, treatment-plant siting language. At a hospitality site, the shell sits in back-of-house with hospitality-grade enclosure: rendered finish, integrated landscape, screened mechanical. At a brewery, the shell can match the existing process buildings: corrugated cladding, agricultural-industrial palette, sympathetic massing.
This is not aesthetic decoration. It is a deployment unlock. Planning departments approve infrastructure that reads as belonging. They object to infrastructure that reads as imposing. The same pod, deployed in a generic glass-and-steel hyperscale shell, would face 18 to 36 months of planning friction at most public-sector host sites. The same pod in a context-appropriate civic shell faces sub-12-month entitlements. That is the deployment math the architectural choice serves.
Jacobs Engineering is the architectural partner for the shell programme. The pod design is locked at MicroLink. The shell is delivered per site by Jacobs, drawing from a pattern book of treatments calibrated to host-site categories.
Shell A · WWTP
Industrial language
Perforated metal cladding, exposed structural steel, civic-grade concrete plinth. Reads as part of the plant's industrial vocabulary. Sits comfortably alongside digesters and process buildings.
Shell B · Hospitality
Hospitality-grade enclosure
Rendered finish, integrated landscape buffer, screened mechanical louvres. Sits in back-of-house at hospitality sites. Reads as service infrastructure that belongs on the property.
Shell C · Brewery
Agricultural-industrial language
Corrugated cladding matching the existing process buildings, agricultural-industrial palette, sympathetic massing. Reads as part of the brewery's working estate.
The 1.2 MW canonical pod is identical in all three deployments above. The shell language changes per host site so the building reads as belonging in its context. This is the architectural decoupling that makes the design exportable across host categories without redesigning the engineering each time.
Co-develop the canonical 1.2 MW pod as a published reference architecture. Engineering substance owned by Jared's team, sustainability framing owned by Andria, civic shell programme co-developed.
Tier · Co-developed reference architecture
The 1.2 MW canonical pod · liquid-cooled, generation-future-proof, civic-shelled
One pod, replicated. Engineering rigour and sustainability framing in the same drawing. The reference NVIDIA points other partners to when they ask about industrial-host deployments.
From Jared
Secure design portal access for the canonical pod
Roadmap signals on B300 and Vera Rubin density
Reference architecture review against NVIDIA patterns
NCP / DGX-Ready Colocation alignment
Engineering cadence through Q3 2026
From Andria
Sustainability framing for the published reference
Embodied carbon position aligned with disclosure regime