The future of data centre cooling in Australia: Why liquid is leading the way

July 15, 2025

By Scott Shaw and Elliot Alfirevich

Data centre cooling is evolving fast, and liquid cooling technology is leading the way for data centres in Australia.

What’s one of the biggest technical challenges in the data centre space right now? Designing cooling systems that can keep up with skyrocketing demand—without wasting time or energy.

Thanks to the AI boom, demand for fit-for-purpose data centres is surging. According to a recent JLL report, Australia needs 175 new data centres by 2030. Data centre capacity will also need to expand from 1,350 megawatts (MW) to 3,100 MW by 2030, requiring an AU$26 billion investment.

Companies need more data storage and higher processing power than ever before. Rack density (the amount of power consumed by a single server rack) is increasing. In 2020, the average rack density was 6 to 12 kilowatts (kW). Today, the average is 20 kW, with some racks hitting 100 kW. Nvidia is aiming to release 600 kW racks by 2027, and Google is discussing 1 MW racks by 2030.
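
As a rough illustration of what those densities imply at a national scale, the sketch below treats the projected 3,100 MW as if it were all IT load (a simplification, since that figure includes cooling and other overheads) and shows how many racks it would support at each of the densities quoted above.

```python
# Rough illustration only: rack counts supported by a given capacity at the
# densities quoted above. Treats the 3,100 MW projection as pure IT load,
# which is a simplification.

def racks_supported(it_capacity_mw: float, rack_density_kw: float) -> int:
    """Number of racks a fixed IT capacity can host at a given rack density."""
    return int(it_capacity_mw * 1_000 // rack_density_kw)

capacity_mw = 3_100  # projected Australian capacity by 2030 (JLL figure above)
for density_kw in (12, 20, 100, 600):
    print(f"{density_kw:>4} kW/rack -> ~{racks_supported(capacity_mw, density_kw):,} racks")
```

The same capacity concentrates into far fewer, far hotter racks as density climbs, and that concentration of heat is exactly the trend that breaks air cooling.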

But here’s the catch: If data centre cooling can’t keep up, none of this matters. That’s why we’re seeing data centre cooling evolving fast—and new solutions are emerging as the front-runners in the race to stay ahead.


We’re designing for liquid-first cooling at data centres, as fan-based methods can’t move enough heat to keep pace with today’s AI-ready loads.

From air to liquid: The optimal thermal media for data centre cooling

Step into a data centre in Australia today and you’ll likely find it a noisy and windy place. Most mission-critical data centres built in the last five years use air movement to keep racks at the right temperature. Fan-based air cooling was fine five years ago, but it’s falling behind fast. For today’s AI-ready loads, it simply can’t move enough heat. That’s why we’re designing for liquid-first systems now.

Liquid cooling isn’t just more efficient; it’s in another league. Volume for volume, water can carry roughly 3,500 times more heat than air, and it conducts heat about 25 times better. That performance is no longer a nice-to-have. It’s essential.
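
Those ratios are easy to sanity-check from textbook properties of water and air at around 25°C. The quick Python comparison below uses approximate property values rather than design data; it reproduces the ~3,500x figure from volumetric heat capacity and lands the conductivity ratio in the low 20s, the same ballpark as the ~25x figure.

```python
# Back-of-envelope check of the water-vs-air ratios quoted above, using
# approximate textbook properties at ~25 degC (not design data).

WATER = {"rho": 997.0, "cp": 4186.0, "k": 0.60}   # density kg/m3, specific heat J/(kg*K), conductivity W/(m*K)
AIR   = {"rho": 1.18,  "cp": 1006.0, "k": 0.026}

# Volumetric heat capacity: heat carried per cubic metre per degree of temperature rise.
vol_heat_water = WATER["rho"] * WATER["cp"]
vol_heat_air   = AIR["rho"] * AIR["cp"]

print(f"Heat carried per m3 per K: ~{vol_heat_water / vol_heat_air:,.0f}x more for water")
print(f"Thermal conductivity:      ~{WATER['k'] / AIR['k']:.0f}x better for water")
```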

Still, not all liquid data centre cooling systems are built for what’s next.

  • Close-coupled rear-door cooling places chilled-water heat exchangers on the rear door of each rack. It works well, but it’s hardware-heavy and expensive to scale.
  • Immersion cooling submerges servers in a cold, electrically non-conductive liquid. The approach is powerful but hard to maintain: fine in theory, tricky in the real world.
  • Direct-to-chip cooling is what we’re backing. It delivers coolant straight to cold plates on the processors, works across a range of densities, and integrates cleanly with existing systems.

We’re already designing direct-to-chip solutions on projects running 50 to 150 kW per rack—and the results will change the game. We’re slashing reliance on energy-hungry fans, improving power usage effectiveness at the source, and opening up new prospects for heat reuse that would’ve been unthinkable with traditional systems.
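
To make the heat-removal arithmetic concrete, here’s a minimal sketch of the coolant flow needed to carry a rack’s heat load at a chosen temperature rise, from Q = ṁ·cp·ΔT. The 100 kW rack and 10°C rise are illustrative assumptions rather than figures from any specific project, but they show why moving that much heat with air alone becomes impractical.

```python
# Illustrative only: flow needed to remove a rack's heat load at a chosen
# temperature rise, from Q = m_dot * cp * delta_T. The 100 kW load and 10 K
# rise are assumptions for the example, not project design values.

def flow_required_lps(heat_kw: float, cp: float, rho: float, delta_t_k: float) -> float:
    """Volumetric flow in litres per second to remove heat_kw at a delta_t_k rise."""
    mass_flow = heat_kw * 1_000 / (cp * delta_t_k)  # kg/s
    return mass_flow / rho * 1_000                  # L/s

rack_kw, delta_t = 100, 10
water_lps = flow_required_lps(rack_kw, cp=4186.0, rho=997.0, delta_t_k=delta_t)
air_lps   = flow_required_lps(rack_kw, cp=1006.0, rho=1.18,  delta_t_k=delta_t)

print(f"Water: ~{water_lps:.1f} L/s (~{water_lps * 60:.0f} L/min) per 100 kW rack")
print(f"Air:   ~{air_lps:,.0f} L/s (~{air_lps / 1_000:.1f} m3/s) per 100 kW rack")
```

A couple of litres per second of water versus several cubic metres per second of air, per rack, is the case for liquid-first design in one comparison.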

This isn’t just theory—it’s happening. In the UK, waste heat from data centres is warming a public swimming pool. In Denmark and Sweden, it’s heating thousands of homes. That’s not just smart engineering. It’s what sustainable infrastructure should look like.


Environments matter. More temperate climates currently rely on chillers to cool their data centres.

An alternative option for data centre cooling in temperate climates

There is another way we could approach data centre cooling in temperate climates: direct free cooling.

Instead of using chillers to cool down the data centre, fans are used to draw outdoor air directly into the data halls. This would be the most energy-efficient option in certain environments. But in practice, it only works in very specific climates with stable air quality and space to spare. For most data centres, it’s not going to be a reliable long-term data centre cooling solution.

It’s not just the outside temperature that matters; the equipment and the local climate play a part too. The challenge is on for equipment manufacturers: they need to create hardware that’s resilient across a wider window of ambient temperatures, so that even in more temperate climates we get more hours per year of direct free cooling.
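
One way to picture that widening window: take a year of hourly ambient temperatures and count the hours that sit at or below the allowable supply-air temperature. The sketch below uses synthetic temperature data and assumed inlet limits purely for illustration; a real study would use measured weather files for the site. Hardware that tolerates a higher inlet temperature pushes the limit up and picks up more free-cooling hours.

```python
# Minimal sketch of the "wider window" idea: count hours in a year where
# outdoor air alone could meet the supply-air limit. Synthetic data and
# assumed limits for illustration only.
import random

random.seed(1)
# Stand-in for a year of hourly dry-bulb readings (degC) from a weather file.
hourly_temps = [random.gauss(18, 7) for _ in range(8760)]

def free_cooling_hours(temps, max_supply_c: float) -> int:
    """Hours where the ambient temperature is at or below the supply-air limit."""
    return sum(1 for t in temps if t <= max_supply_c)

for limit in (22, 27, 32):  # tighter vs more tolerant IT inlet limits
    print(f"Supply limit {limit} degC -> ~{free_cooling_hours(hourly_temps, limit):,} hours/year")
```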

On top of that, direct free cooling requires more space than traditional methods, making it hard to scale in multi-storey buildings.


The pressure on data centre energy consumption has never been greater

Designing and building a data centre isn’t easy. It’s all about balancing cost, efficiency, and environmental impact—especially when it comes to data centre cooling infrastructure, which now plays a key role in project timelines. Plus, the race to build these centres is fierce. Whoever gets there first wins big.

The drive to build data centres often comes from two groups. First are the hyperscalers (think Google, Amazon, Oracle, and Microsoft), who build significant facilities for their own needs. For example, Amazon has proposed a $450 million data centre in Gregory Hills, southwest Sydney, and Microsoft is extending its global data centre footprint to Western Australia.

Then there are colocation companies (“colos”) who rent space out to others. Colos usually want to build faster than the hyperscalers so they can sell space, and their challenge is designing for flexibility in cooling methodologies. The competition is intense: Private Group, a Swiss private equity giant, is acquiring Australian data centre operator GreenSquare DC for $1.2 billion to get into the sector.

We’re moving to live models to speed up the process and keep pace with the demands of clients. Everything happens in real time, so there’s less back-and-forth. In this race to build, our live model approach provides the speed and efficiency that’s needed.


In Indonesia, we transformed hot water into chilled water via an absorption chiller to cool SpaceDC’s Hyperscale Data Centre.

Designing data centre cooling in Australia for the demands of tomorrow

If you’d asked us 10 years ago what we’d be designing today, advanced liquid cooling for data centres in Australia probably wouldn’t have made the list. But here we are—building faster and smarter to keep up with the huge leaps in next-gen tech. We’re proud to be helping our clients move quickly and intelligently, staying first to market in an industry where milliseconds matter.

And while we’ve focused here on capturing heat at the source, the conversation doesn’t end there. What happens to that captured heat? How do we reject it efficiently from the facility itself? These questions are just as critical to the success of future-ready data centre cooling strategies—and we’re tackling those, too.

And that 600-kW rack challenge put forward by Nvidia for 2027? Game on.

  • Scott Shaw

    Experienced in mechanical building services, Scott has a background in contractor and consultancy design and project management. He’s provided input at all project stages, from being engaged by developers at masterplan stage through to detailed development.

    Contact Scott
  • Elliot Alfirevich

    The Buildings Operations Leader for WA, Elliot is adept at managing the integration of innovative technologies and techniques in a variety of sectors.

    Contact Elliot