
Beyond PUE: Liquid Cooling and the Hunt for Stranded Power
January 15, 2026

The past several days have generated more heated discussion around cooling than the systems themselves ever could — or should.
That didn’t come out of nowhere. It came from NVIDIA’s Rubin announcement, and from the wave of reactions that followed it. (To recap: NVIDIA unveiled its six-chip AI architecture designed for next-level agentic AI, advanced reasoning, and Mixture-of-Experts (MoE) models.) NVIDIA’s announcement marks a giant leap for AI infrastructure.
I’ve been reading the same social media threads, response blogs, and group chats everyone else has. Some of the takes were thoughtful. Some were emotional. Most were engineers doing what engineers do best: slowing the conversation down and asking for precision.
That’s a good thing.
But there’s another industrywide conversation that should evolve from NVIDIA’s announcement: Power Compute Effectiveness, a new way of thinking that shifts the conversation from energy usage to compute outcomes. (More on that later.)
Let’s start with what engineers have been reacting to, and what I believe is the most pertinent point, because a lot of the rest of it got blurred pretty quickly.
- 45 °C entering water temperature is not new. In fact, it’s still well below what these systems can safely handle.
- Blackwell NVL72 already established that 45 °C envelope back in 2024.
- Rubin does not introduce a new cooling regime.
- The inlet and outlet temperatures are fundamentally unchanged.
- What did change is the amount of heat being rejected inside that same window.
For the people designing and building direct-to-chip liquid systems, that last bulleted distinction matters — a lot. And they’re right. The thermal design point didn’t change. The heat flux did.
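To make that distinction concrete: with inlet and outlet temperatures held fixed, rejecting more heat simply means moving more coolant through the same thermal window. Here is a minimal sketch using the standard sensible-heat relation Q = ṁ·cp·ΔT; the rack powers and temperature rise below are illustrative numbers, not figures from the announcement.

```python
# Sensible-heat relation for a liquid loop: Q = m_dot * cp * dT.
# With the thermal envelope (dT) fixed, required flow scales
# linearly with the heat load.

CP_WATER = 4.186  # specific heat of water, kJ/(kg*K)


def coolant_flow_lpm(heat_kw: float, delta_t_c: float) -> float:
    """Water flow (liters/minute) needed to reject heat_kw at a fixed
    inlet-to-outlet temperature rise of delta_t_c degrees C.
    Assumes water density of roughly 1 kg/L."""
    kg_per_s = heat_kw / (CP_WATER * delta_t_c)
    return kg_per_s * 60  # 1 kg of water is about 1 L


# Illustrative rack powers: same 10 C rise, double the heat -> double the flow.
gen_1 = coolant_flow_lpm(heat_kw=120, delta_t_c=10)  # ~172 L/min
gen_2 = coolant_flow_lpm(heat_kw=240, delta_t_c=10)  # ~344 L/min
```

The temperatures never move; only the plumbing, pumps, and manifolds have to keep up. That is the system-level problem the engineers are pointing at.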
That observation deserves to be acknowledged clearly, because it tells us something important. The industry didn’t suddenly stumble into a new temperature frontier overnight. But it did prove something more consequential: thermal envelopes can remain stable while compute density explodes.
Once that happens, temperature itself stops being the main focal point. The system becomes the topic.
This is where the conversation should move, because of what the engineers calling out NVIDIA didn’t say. The chip giant didn’t announce higher entering water temperatures in the Rubin unveiling. What was clear, however, is that chip manufacturers don’t announce the outer edge of what infrastructure can do for evolving needs. They announce what they’re willing to warranty globally, across every climate, every operating condition, and every deployment model, right now. I’d call that “today’s new comfort level.”
That distinction makes all the difference in the world, because it’s hard to change the world while fenced into a comfort level. What we saw is a meaningful new stretch in reference design. What we did not see is the new ceiling.
And when that comfort level holds steady across generations while density doubles, it quietly opens the door to the next phase of infrastructure evolution. No one is even close to “breaking physics,” but we are getting better at stretching it with more confidence, better materials, tighter control, and system-level thinking.
Thermal stability has now been demonstrated. Repeatedly. At scale. Which leads to a more interesting question than the one dominating headlines.
Not “how do we cool this?”
But “how warm can we safely operate, consistently, without sacrificing reliability or performance?”
And yes, you’ll see headlines that say, “we can cool with hot water.” That’s the same idea, and it’s something the industry has been building toward for years. For a lot of operators, that question is the difference between a data center that stays stranded and one that suddenly becomes viable again.
This is where the “Powered and Permitted” conversation comes back into focus. Across the industry, we have enormous amounts of infrastructure that already has power, already has permits, already has real estate and grid access, but can’t support modern AI workloads because of legacy cooling assumptions. Compressors are still one of the biggest mechanical barriers standing in the way.
If you want to unlock those sites, you don’t do it by chasing colder temperatures. You do it by learning how to operate warmer, safely, predictably, and at scale.
That’s why the reaction to Rubin feels less like disruption and more like validation. Modular, system-level cooling was never a bet on one silicon generation. It was a bet that infrastructure would eventually be asked to do something harder than just remove heat. It would be asked to convert power into usable, predictable compute, without waste.
Once cooling stops being the primary bottleneck, the way we measure infrastructure performance has to change, too.
Now back to Power Compute Effectiveness, or PCE.
For years, we’ve leaned on facility-level metrics like PUE and WUE. They were useful in their time, but they’re blunt instruments in an AI-driven world. As temperatures stabilize, water use drops, and chillers move from foundational assets to conditional tools, those metrics flatten out. They stop telling us what we actually need to know.
What matters now is conversion.
That’s where Power Compute Effectiveness (PCE) fits in as a practical way of thinking. PCE asks a simple question: how effectively does the power we deliver turn into sustained, usable compute?
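PCE is not yet a standardized metric with an agreed formula, so the sketch below is purely illustrative: one hypothetical way to express it as sustained usable compute per unit of delivered power, shown alongside classic PUE for contrast. All the numbers are invented for the example.

```python
# Hypothetical sketch only: "Power Compute Effectiveness" (PCE) has no
# agreed formula. Here it is expressed as sustained compute per kW
# delivered, next to classic PUE for contrast.


def pue(total_facility_kw: float, it_kw: float) -> float:
    """Classic Power Usage Effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw


def pce(delivered_kw: float, sustained_pflops: float) -> float:
    """Illustrative PCE: sustained usable compute (petaFLOPS) per kW
    of delivered power. Higher is better."""
    return sustained_pflops / delivered_kw


# Two sites can post an identical PUE yet convert power very differently:
# throttling, stranded capacity, and poor integration never show up in PUE.
site_a = pce(delivered_kw=10_000, sustained_pflops=4_000)  # 0.40 PFLOPS/kW
site_b = pce(delivered_kw=10_000, sustained_pflops=2_500)  # 0.25 PFLOPS/kW
```

The point is not the particular units; it’s that the denominator stays power while the numerator becomes outcomes.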
That shift really defines what’s happening in cooling itself. Once the thermal envelope stabilizes, optimization moves in both directions at once. And that changes everything. Control systems matter more. Integration matters more. Modularity matters more. And suddenly, upgrading existing data centers becomes less about rebuilding buildings and more about upgrading systems.
As 2026 gets underway, that’s the conversation that should dominate headlines.
Later this month, I’ll be on a panel at the Pacific Telecommunications Council (PTC) with people who have been living this work for a long time. The tone of those discussions is already different than it was a year ago. Less debate about whether this is coming. More focus on how we do it responsibly, economically, and at scale.
In a strange way, it reminds me of an old lesson. Progress rarely comes from racing toward a destination. It comes from understanding the journey well enough to stop fighting it.
Power didn’t disappear as the constraint. It just stopped being the answer.
And that’s a good thing.



