Forget Atlantis: How data centers can use liquid above ground to stay cool

Glowing server racks inside a data center (Image credit: Shutterstock)

Microsoft recently ended its underwater data center project, opting instead to pursue liquid cooling on land. The core concept – using liquid for cooling – holds real merit when applied within data centers. Liquid cooling offers a compelling alternative to traditional air-based methods.

It can enhance energy efficiency, reduce operational costs, and enable facilities to repurpose excess heat. These are significant benefits considering increasing demands, escalating energy expenses, and Environmental, Social, and Governance (ESG) regulatory pressures.

In short, by keeping liquid cooling on land, data centers can leverage its benefits without facing the logistical difficulties of underwater deployment. To fully reap these rewards, operators must thoroughly grasp liquid cooling, including its challenges, and understand how useful tools, like digital twins, can ensure success.

First, however, let’s uncover the benefits of liquid cooling.

Mark Seymour

Distinguished Engineer at Cadence.

The advantages of liquid cooling

Data centers rely on powerful components like central processing units (CPUs) and graphics processing units (GPUs). Keeping them cool is crucial for optimal performance. Traditional air cooling can struggle to achieve this as heat generation and server rack densities keep rising. This is where liquid cooling shines.

Liquid has a far higher heat capacity than air – water can hold about 4.2 times as much heat per kilogram. Factor in its much greater density, and a given volume of water can absorb around 3,500 times as much heat as the same volume of air for the same temperature rise. The result is that small volumes of liquid can be pumped in close proximity to CPUs, GPUs, and other high-power components to extract heat from them far more directly.
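
As a back-of-envelope check, the arithmetic below uses standard textbook property values for water and air at roughly room conditions (not figures from any particular facility or coolant) to reproduce those ratios:

```python
# Back-of-envelope comparison of water vs. air as a heat-transfer medium.
# Property values are approximate, at roughly room temperature and pressure.

cp_water = 4186.0    # specific heat of water, J/(kg*K)
cp_air = 1005.0      # specific heat of air, J/(kg*K)
rho_water = 998.0    # density of water, kg/m^3
rho_air = 1.2        # density of air, kg/m^3

per_kg_ratio = cp_water / cp_air                            # ~4.2x per kilogram
per_m3_ratio = (cp_water * rho_water) / (cp_air * rho_air)  # ~3,500x per cubic metre

print(f"Per kilogram, water holds {per_kg_ratio:.1f}x the heat of air")
print(f"Per cubic metre, water holds {per_m3_ratio:,.0f}x the heat of air")
```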

As heat can be removed from components more easily this way, it means the benefits are severalfold:

  • Higher chip power densities
  • Higher liquid temperatures in the facility cooling loop, creating the potential for more free cooling (although this gain may shrink as chip power densities continue to rise)
  • Higher return temperature, creating greater potential for heat recovery and re-use because the liquid is segregated from the occupied environment
  • Lower cooling energy – pumping liquid consumes less energy than the fans needed to move the same heat in an air-cooled system
  • CPUs and GPUs can operate at optimal temperatures more effectively, preventing overheating and potential performance bottlenecks, which is particularly important as the heat load in data centers rises

Of course, a fundamental reason for turning to liquid cooling is that high-density server racks, demanding workloads, and steadily rising power densities are pushing the limits of air cooling. Air cooling can readily handle heat loads of up to around 20kW per rack. However, beyond 20-25kW, a combination of direct liquid cooling and precision air cooling becomes more efficient and economical.
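
A simple sizing estimate illustrates why air runs out of headroom. The temperature rise and rack powers below are illustrative assumptions, not design figures:

```python
# Rough estimate of the airflow needed to remove a rack's heat load with air alone.
# Q = P / (rho * cp * dT); every input here is an illustrative assumption.

rho_air = 1.2     # density of air, kg/m^3
cp_air = 1005.0   # specific heat of air, J/(kg*K)
delta_t = 12.0    # assumed air temperature rise across the rack, K

def airflow_m3_per_s(rack_power_w: float) -> float:
    return rack_power_w / (rho_air * cp_air * delta_t)

for rack_kw in (10, 20, 40, 80):
    flow = airflow_m3_per_s(rack_kw * 1000)
    cfm = flow * 2118.88  # convert m^3/s to cubic feet per minute
    print(f"{rack_kw:>3} kW rack -> {flow:.2f} m^3/s ({cfm:,.0f} CFM) of airflow")
```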

By adopting liquid cooling, data centers can ensure optimal performance for their powerful equipment and contribute to a more sustainable future. Yet, like every innovation, there are still drawbacks to be considered.

Operational hurdles

Historically, liquid cooling has carried the risk of electrical hazards from leaks. Although the risk was relatively low, the perception, alongside practicalities, hindered liquid cooling’s widespread adoption. Modern solutions have mitigated these risks through innovations like dripless quick connectors and negative pressure systems that stop liquid from leaking into the data center. These advances have made the technology considerably safer, but many operators remain nervous. This attitude is changing, but it is not the only barrier to adoption.

Another major challenge is adding liquid cooling to air-cooled data centers. Coordination between air and liquid cooling systems is crucial for efficiency. Making it work requires managing logistical complexities and typically significant financial investments.

Even when implemented in new facilities, liquid cooling introduces operational challenges that can lead to hidden costs. Compared to air cooling, liquid cooling systems demand additional work during installation. For example, operators need to establish fluid distribution systems and connections in addition to the usual electrical connections, which is not a simple task.

However, these challenges aren’t insurmountable, and operators have multiple options to navigate them.

Ways to stay cool with liquid

There is no single recipe for keeping a data center cool with liquid, but there are two fundamental methods.

The most widely adopted method, ‘direct to chip’ or ‘hybrid’ cooling, passes a coolant, such as one based on water, through a cold plate in direct physical contact with the critical components of the IT equipment, improving both the efficiency and effectiveness of heat removal. It allows CPUs, GPUs, and memory to run faster and more efficiently at lower temperatures, resulting in improved energy efficiency – more compute per watt. One caveat is that it does not capture all the heat from the IT components, only from those fitted with cold plates. Typically, 10-20% of the heat must still be removed by air. Given rising power densities, that remainder can still present a cooling load for air systems measured in multiple kW.
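
As a purely hypothetical illustration of that residual load, assuming a rack power and capture fraction chosen for the example rather than taken from any real deployment:

```python
# Hypothetical residual air load in a direct-to-chip (hybrid) cooled rack.
# Both numbers are assumptions for illustration only.

rack_power_kw = 80.0    # assumed total rack heat load
liquid_capture = 0.85   # assume cold plates capture 85% of the heat

air_load_kw = rack_power_kw * (1.0 - liquid_capture)
print(f"Heat still to be removed by air: {air_load_kw:.0f} kW")  # 12 kW in this example
```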

The other method is immersion cooling. Here, the IT equipment is submerged in a dielectric liquid, bringing all the components into contact with the fluid that will carry the heat away. Not unlike air cooling, the system’s design must ensure adequate flow of liquid adjacent to the electronics. Because the IT equipment is submerged in the cooling liquid, heat dissipation to the surrounding air will likely be lower. However, material compatibility can be an issue—degradation, caused by coolant interaction with electronics (e.g., insulation), can shorten equipment life. Additionally, as data center power densities increase, the efficiency of immersion cooling systems, reliant on buoyancy-driven flow, may be compromised.

Both methods can use a single-phase or a two-phase cooling approach. Two-phase involves choosing a liquid that reaches its boiling point at the operational temperatures and pressures, taking advantage of the latent heat of evaporation. This offers great potential for high-density applications but brings additional challenges, including the global warming potential of some engineered fluids and the difficulty of containing them in open systems.
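
To get a feel for why the latent heat of evaporation is so attractive, the sketch below uses water’s textbook properties purely as an illustration; real two-phase systems use engineered dielectric fluids with different values, but the principle is the same:

```python
# Illustrative sensible vs. latent heat comparison, using water's properties.
# Real two-phase coolants are engineered dielectric fluids with different values.

cp = 4186.0              # specific heat of liquid water, J/(kg*K)
latent_heat = 2_257_000  # latent heat of vaporization of water at 100 C, J/kg
delta_t = 10.0           # assumed single-phase temperature rise, K

sensible_per_kg = cp * delta_t         # heat absorbed per kg without boiling
ratio = latent_heat / sensible_per_kg  # how much more heat boiling absorbs per kg

print(f"Sensible heat over {delta_t:.0f} K: {sensible_per_kg / 1000:.0f} kJ/kg")
print(f"Latent heat of vaporization: {latent_heat / 1000:.0f} kJ/kg ({ratio:.0f}x more)")
```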

With these options, choosing the right fit for a facility can be a complex decision. That’s where digital twins – virtual replicas of data centers – can help operators make an informed choice.

Choosing the right liquid solution

Digital twins provide clear insight into what data center operators typically can't readily see or measure – including cooling efficiency.

With this technology, operators can evaluate the potential benefits and drawbacks of different liquid cooling methods before making any physical changes. Digital twins also allow operators to test various scenarios, such as analyzing how and where to introduce liquid cooling into an air-cooled data center. The result is a tailored solution that meets specific heat load requirements.

Once liquid cooling is installed, digital twins also help identify areas for continuing improvement. By considering any new hardware changes or increasing server density, the technology can assess the potential impact on the cooling system. This proactive approach prevents existing cooling infrastructure from becoming overwhelmed, which can lead to compromised resilience, IT slowdowns, and lost capacity.

The next step to a better tomorrow

Liquid cooling is no longer a trend but a must for modern data centers. While the allure of underwater data centers once promised ‘free’ cooling, the logistical reality has dampened its appeal. However, conventional data center liquid cooling offers a promising alternative. Using digital twins, facilities can reap the benefits of liquid cooling—such as reduced environmental impact and energy consumption—and gain confidence in their transition to a liquid-cooled strategy.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Mark Seymour

As a founder and CTO at Future Facilities, Mark Seymour has been responsible for the development of its products and services. A key part of this has been building a team of enthusiastic, able, and appropriately experienced and qualified professionals; Future Facilities has four offices globally, with a team he is rightly proud of – an important complement to its best-in-class software and associated services.