
Beyond PUE: Liquid Cooling and the Hunt for Stranded Power
January 15, 2026
For a long time, data centers were designed around a fairly stable set of assumptions. Compute grew steadily. Heat was manageable. Cooling systems were sized once, optimized for air, and expected to last longer than most of the IT they supported.
That world is gone.
The industry is adapting, but not by standardizing on a single next-generation cooling architecture. What's taking shape instead is a more dynamic way of thinking: cooling as a constantly evolving system that can be applied just as effectively in brownfield environments as in greenfield designs, allowing facilities to change over time without being locked into a single endpoint.
AI has changed not just how much power data centers need, but how quickly their requirements evolve. Racks are getting denser. Silicon roadmaps are accelerating. Many data centers are discovering that power delivery is not the only constraint; whether that power can actually be used effectively matters just as much.
There is no clean starting line
Most conversations about AI data centers tend to frame the discussion around either greenfield builds, purpose-designed for density from day one, or brownfield environments being pushed beyond their original intent.
In practice, the industry is working across two very different starting points. Some teams are breaking ground on new facilities. Many more are adapting air-cooled halls that were never intended to support 100-kilowatt racks or highly variable, accelerator-driven workloads.
Those brownfield facilities aren’t failing. They’re simply being pushed beyond the assumptions they were built on.
For both environments, the early moves look surprisingly similar. Operators are introducing liquid where it delivers the most value: at the chip. Direct-to-chip cooling and server-level liquid cooling unit (SLCU) systems relieve pressure on airflow, stabilize thermal conditions, and create headroom, whether the goal is extending the life of an existing building or establishing a flexible baseline in a new one.
“The goal isn’t to replace everything overnight,” says Yunshui Chen, CEO of Airsys. “It’s to create access to cooling infrastructure that works across generations, from legacy environments to current deployments and into what comes next. Each layer should add capability on its own but also fit cleanly with the rest of the system.”
That idea of optionality shows up again and again in real deployments.
A brownfield site or legacy data center has typically been locked into an air-cooling design. Both power and cooling infrastructure become major constraints the moment any amount of liquid cooling is considered. Adding more heat-rejection equipment, CDUs, and thermal energy storage takes up space that may not be available. This scenario calls for alternatives beyond a standard direct-to-chip design, such as SLCU, enhanced immersion, or two-phase technologies.
New greenfield developments face a dilemma of their own: designing an AI data center today with infrastructure that will last a decade or more, yet can adapt to the sharply rising workload densities expected over that same timeframe.
For instance, a cooling design is expected to last a decade or more, while HPC servers may last only three to five years, and each refresh cycle brings dramatically higher workload densities. Cooling systems need to be designed with a clear path from DTC to SLCU or two-phase technologies within that 10- to 12-year timeframe, as the sketch below illustrates.
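As a rough illustration of that lifecycle mismatch, here is a minimal planning sketch. The refresh interval, starting rack density, and per-generation density growth are hypothetical placeholders, not figures from this article.

```python
# Hypothetical planning sketch: how many server generations does a single
# cooling system live through, and what rack density must it support by
# the end of its life? All numbers are illustrative assumptions.

COOLING_LIFETIME_YEARS = 12        # 10- to 12-year cooling horizon (upper end)
SERVER_REFRESH_YEARS = 4           # assumed midpoint of a 3- to 5-year refresh
START_DENSITY_KW = 50              # assumed starting rack density, kW per rack
DENSITY_GROWTH_PER_REFRESH = 1.5   # assumed density multiplier per generation

generations = COOLING_LIFETIME_YEARS // SERVER_REFRESH_YEARS
density_kw = START_DENSITY_KW
for gen in range(1, generations + 1):
    density_kw *= DENSITY_GROWTH_PER_REFRESH
    print(f"Generation {gen} (~year {gen * SERVER_REFRESH_YEARS}): "
          f"{density_kw:.0f} kW per rack")

# Under these assumptions, the same mechanical plant that started with 50 kW
# racks must support more than three times that density before it is retired,
# which is the argument for leaving a path open from DTC to SLCU or
# two-phase cooling.
```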
The middle is where most of the work happens
As AI moves from pilot projects to production workloads, many organizations find themselves in an in-between state. These aren't legacy data centers anymore, but they're not fully future-proofed either. They're hybrid environments.
In this phase, liquid cooling becomes part of the baseline design, not an exception. Facilities are built with higher-temperature water loops, purpose-built liquid distribution, and a mix of air and liquid heat rejection. The emphasis shifts from adding liquid cooling to standardizing it.
Rack-based, cassette-style liquid cooling plays a big role here. Treating the rack as a repeatable thermal unit simplifies deployment, aligns cooling upgrades with server refresh cycles, and makes it easier to scale density without constantly revisiting the mechanical plant.
“With server densities constantly rising, cooling systems need to evolve to meet demand,” said Tony Fischels, VP of PowerOne at Airsys. “This requires unique, dynamic designs with the ability to adapt to new cooling technologies during the cooling system’s lifetime.”
This is also where broader system efficiency starts to show up in meaningful ways.
Effectiveness matters more than ever
Metrics like PUE still matter, but they no longer tell the full story in AI-heavy environments. What operators increasingly care about is how much of their available power actually reaches compute.

That's where Power Compute Effectiveness, or PCE, becomes useful. Rather than focusing solely on how efficiently a facility operates, PCE asks a more practical question: how much of the power you have is ultimately converted into usable compute? Power lost to cooling overhead, compression, or inefficient heat transfer is power that never reaches the GPUs.
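To make the contrast concrete, here is a minimal sketch. The article does not give a formal PCE formula, so the calculation below assumes PCE is simply compute power divided by total available power, set against the familiar PUE ratio; all figures are illustrative placeholders, not measured values.

```python
# Hedged sketch contrasting PUE with a PCE-style "power to compute" view.
# The exact PCE definition is assumed here (compute power / available power);
# all numbers are illustrative placeholders.

total_available_kw = 10_000    # assumed utility power available to the site
it_load_kw = 7_400             # assumed power delivered to IT equipment
cooling_and_losses_kw = 1_900  # assumed cooling, compression, distribution losses
compute_kw = 6_800             # assumed share of IT load doing useful compute
                               # (excludes server fans and conversion losses)

pue = (it_load_kw + cooling_and_losses_kw) / it_load_kw  # overhead per unit of IT power
pce = compute_kw / total_available_kw                    # share of power reaching compute

print(f"PUE: {pue:.2f}")   # ~1.26
print(f"PCE: {pce:.0%}")   # ~68%

# A site can post a respectable PUE while still stranding power: here the
# facility runs at roughly 1.26 PUE, yet only about 68% of the power it has
# access to is ever converted into usable compute.
```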
The focus for the future is clear.
Improving effectiveness isn’t about chasing a new metric or optimizing for a headline number. It’s about removing friction from the system so power, cooling, and compute are aligned. When those pieces work together, the result is not just better efficiency, but a data center that can scale AI workloads with fewer constraints.
Designing for what comes next
Looking ahead, the direction of travel is obvious, even if the timing isn’t. Higher heat flux, higher operating temperatures, and growing interest in chiller-free and two-phase cooling architectures are all part of the AI roadmap. But very few facilities will move there in a single step.
The smarter approach is designing today’s infrastructure so it doesn’t block tomorrow’s options. Modular liquid distribution. Rack-level thermal systems. Controls and piping strategies that can support more advanced cooling methods when the time comes.
Cooling as an enabler, not an obstacle
The AI factory era isn’t defined by one cooling technology or one facility type. It’s defined by adaptability.
Facilities that succeed will be the ones built from layered, modular cooling building blocks: systems that can support air today, liquid tomorrow, and something more advanced down the road, all without starting over.
Cooling has moved out of the background. Not because it’s suddenly more exciting, but because it’s now central to how data centers grow, scale, and stay relevant. In this new era, cooling isn’t just about removing heat. It’s about making progress possible.
To find out more about building a dynamic cooling ecosystem fit for the AI era, check out The 2025 end-of-year review supplement.



