Rack densities of 10 kW or greater are the new norm in data centers, and density will only continue to climb, according to the 2018 and 2019 Uptime Institute Global Data Center Surveys. Addressing higher density has become less of an “if” and more of a “when” — and in terms of cooling, “how” is an equally pressing question. Data center managers are wondering: how am I going to cool this?
There are many high density cooling options available for facility managers, especially as high computing capacity moves beyond gaming, blockchain mining, and other HPC applications. To support facility managers in facing this new reality, we’ll discuss the fundamentals of liquid cooling (the most effective process for high density cooling) and the specifics of rear door, direct-to-chip and immersion cooling technology. We’ll also share essential frameworks for evaluating these technologies (click here to go straight to the comparisons), guiding you to the best-fit high density cooling solution for your facility.
Liquid cooling technology for handling high densities
Before we explain the specific high density cooling options, it’s important to first discuss the technology at the basis of each: All are liquid-based cooling technologies.
Simply put, liquids (both water and engineered fluids) have a much greater capacity to capture heat per unit volume than air — roughly 3,300 times greater. For data center managers, this means significantly greater cooling capacity. Air cooling (i.e. traditional cooling with CRAC units) can only provide a maximum of 6 kW to 20 kW per rack, depending on the estimate. In comparison, liquid-cooled systems can cool densities of 100 kW per rack or greater.
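That per-volume advantage is easy to sanity-check from first principles: volumetric heat capacity is just density times specific heat. A quick back-of-the-envelope comparison, using typical room-temperature property values (our assumptions, not figures from any cooling vendor):

```python
# Volumetric heat capacity (J per m^3 per K) = density * specific heat.
# Typical room-temperature values; exact numbers vary with conditions.
water_density = 997.0         # kg/m^3
water_specific_heat = 4186.0  # J/(kg*K)
air_density = 1.204           # kg/m^3 at ~20 C
air_specific_heat = 1005.0    # J/(kg*K)

water_vol_cp = water_density * water_specific_heat  # ~4.17e6 J/(m^3*K)
air_vol_cp = air_density * air_specific_heat        # ~1.21e3 J/(m^3*K)

ratio = water_vol_cp / air_vol_cp
print(f"Water holds ~{ratio:,.0f}x more heat per unit volume than air")
```

Depending on the property values used, the ratio lands in the low thousands, which is where the commonly quoted "~3,300×" figure comes from.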
In addition to handling higher densities, liquid cooling technology leads to:
- Energy savings: Exact results will vary with the technology chosen and the specifics of your facility (and, with them, TCO and ROI), but the overall pattern is clear: less energy is expended to maintain the room's set temperature, lowering energy spending.
- Reduced need for water: Air-cooled data centers use evaporative cooling and thus consume high volumes of water to achieve set temperatures. Liquid cooling can function with higher water temperatures, meaning evaporative cooling can be eliminated or drastically reduced and water demand decreases.
- Equipment streamlining: CRAC units, hot aisle / cold aisle containment and raised flooring are no longer necessary, freeing up valuable square footage and saving significant CapEx.
High density cooling options
Now we’ll explain the different iterations of liquid cooling technology, how they work and the kW densities each cooling system can handle.
Rear door cooling
Rear door cooling mounts a heat exchanger unit directly on the rear of the computer room cabinets. The units, connected to the water mainline, pull ambient air through the cabinet via the active equipment's fans. The hot exhaust air passes over a heat exchanger matrix, where the heat is transferred to the coolant and rejected. The resulting chilled air passes back into the room at the predetermined ambient air temperature. With heat removal occurring so close to the heat source, greater efficiency is achieved. (The capabilities of rear door cooling units vary greatly across manufacturers. For this discussion, we'll reference the ColdLogik rear door cooling unit, as we find it useful to describe the most capable iteration of the technology.)
ColdLogik rear door coolers handle densities up to 135 kW and are overseen by the ColdLogik Management System (CMS). This system intelligently operates the cooling network and facilitates overall system efficiency. Essentially, the CMS ensures that individual units don’t unnecessarily deploy cold air and over- or under-cool the room; each unit only puts out as much chilled air as is needed to achieve the set temperature.
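The CMS's internal control logic is proprietary, but the demand-matching idea can be sketched as a simple proportional controller (a hypothetical illustration, not ColdLogik's actual algorithm): output scales with how far the return air sits above the setpoint, capped at the unit's rated capacity.

```python
# Illustrative only: the real CMS logic is proprietary. This sketch shows
# demand-matched cooling: output rises with the temperature error and is
# capped at the unit's rated capacity.

def cooling_output_kw(return_air_c: float, setpoint_c: float,
                      rated_kw: float, gain_kw_per_c: float = 10.0) -> float:
    """Chilled-air output needed to pull return air back to the setpoint."""
    error_c = max(0.0, return_air_c - setpoint_c)  # idle at/below setpoint
    return min(rated_kw, gain_kw_per_c * error_c)  # never exceed rated capacity

# A unit facing a 3 C overshoot ramps up; one already at setpoint idles.
print(cooling_output_kw(27.0, 24.0, rated_kw=135.0))  # prints 30.0
print(cooling_output_kw(24.0, 24.0, rated_kw=135.0))  # prints 0.0
```

The cap matters: however large the error, a single door never delivers more than its rated capacity (135 kW in the ColdLogik case).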
Direct-to-chip cooling
Direct-to-chip cooling delivers coolant via pipes to a cold plate mounted directly on the processors to disperse heat. Where rear door cooling brings the cooling to the cabinet, direct-to-chip brings it into the server itself. Extracted heat is then fed into a chilled water loop and carried away to the chiller plant.
There are a few iterations of direct-to-chip technology. The liquid cooling medium can be either water or an engineered dielectric fluid. There are also single-phase and two-phase systems, referring to the number of physical states the coolant passes through. In single-phase, the water or engineered fluid stays liquid, both when it's cooling the equipment and when it's carrying the heat away. In two-phase, the liquid evaporates into a gas to carry the heat away. This cooling method has been shown to provide over 100 kW of cooling capacity.
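The practical difference between the two modes is sensible versus latent heat. A rough per-kilogram comparison (using water's properties purely for illustration; real two-phase systems use engineered dielectric fluids, whose latent heats are lower than water's):

```python
# Sensible heat (single-phase) vs latent heat (two-phase) per kg of coolant.
# Water properties used for illustration only.
specific_heat = 4.186  # kJ/(kg*K), liquid water
latent_heat = 2257.0   # kJ/kg, water at atmospheric boiling point

delta_t = 10.0  # K of allowable temperature rise in a single-phase loop

single_phase_kj = specific_heat * delta_t  # heat absorbed while staying liquid
two_phase_kj = latent_heat                 # heat absorbed by evaporation alone

print(f"Single-phase: {single_phase_kj:.1f} kJ/kg")  # prints ~41.9
print(f"Two-phase:    {two_phase_kj:.0f} kJ/kg")     # prints 2257
```

The change of phase absorbs far more heat per kilogram than a modest temperature rise, which is why two-phase systems can move so much heat with so little fluid.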
Immersion cooling
Immersion cooling is just what it sounds like: the servers are fully or partially immersed in a dielectric liquid coolant. (For obvious reasons, water cannot be used in this iteration of liquid cooling technology.) The engineered fluid covers the board and its components, ensuring all sources of heat are removed.
The immersion environment is also inherently slow to react to external temperature changes, and the equipment is no longer exposed to humidity or airborne pollutants. As with direct-to-chip, systems can be single-phase or two-phase. This method has been shown to provide over 100 kW of cooling capacity.
How to evaluate high density cooling options for your facility
Not all liquid cooling technologies are created equal — here we’ll discuss the pros and cons of rear door, direct-to-chip and immersion cooling for your facility’s efficiency demands, logistical feasibility and bottom line.
Energy efficiency and savings
All three of these high density cooling options will achieve greater energy efficiency and savings compared to traditional CRAC units, though actual results will of course vary from computer room to computer room based on a number of factors at your data center.
That being said, here are the results facility managers can expect for each of the solutions:
ColdLogik rear door cooling operates with high water supply temperatures, up to 75°F. This opens the possibility of free cooling and can potentially negate the need for traditional mechanical cooling plant such as chillers. Additionally, the CMS intelligently operates the cooling network, automatically adjusting fan speed, water flow rate and, if necessary, the output water temperature of the cooling medium in response to the heat removal demands placed on the system. The result is a consistent delivery of cooled air into the computer room. These efficiencies have led to 90% energy savings for a STILH data center, a PUE of 1.045 for an ARM data center and 98% power efficiency for a Cambridge University facility.
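To put that PUE of 1.045 in context: PUE is total facility power divided by IT power, so everything above 1.0 is overhead. A quick comparison against a more typical air-cooled PUE (the 1.6 reference point is our assumption, not a figure from the case studies above):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Everything above 1.0 is non-IT overhead (cooling, power distribution, etc.).

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Non-IT power implied by a given PUE at a given IT load."""
    return it_load_kw * (pue - 1.0)

it_load = 1000.0  # kW, hypothetical IT load
print(round(overhead_kw(it_load, 1.045), 1))  # prints 45.0
print(round(overhead_kw(it_load, 1.6), 1))    # prints 600.0
```

At the same IT load, the difference between a 1.045 and a 1.6 facility is hundreds of kilowatts of continuous overhead.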
Direct-to-chip cooling can also operate with higher water supply temperatures, and the reduction in IT fan energy moves the needle on energy efficiency as well. In one estimate of overall energy savings produced with Romonet simulation software, savings for direct-to-chip ranged from 17% to 23% at full-power measurements.
Immersion cooling, in theory, is the most efficient cooling method — over 95% of the heat generated by the servers is removed via the engineered fluids. Additionally, immersion cooling completely removes the energy expenditure of server fans, which both direct-to-chip and rear door cooling rely on in some capacity. However, theoretical efficiency isn’t the only lens through which to evaluate cooling methods; next we’ll discuss the logistical feasibility and implementation cost of all three systems.
Feasibility and cost
Immersion cooling technology will have a high upfront cost due to the price tag on the liquids used — these are complex and costly engineered fluids. Because of that cost, procedures must be in place to minimize loss through evaporation. In addition, there is the expense of training IT personnel on the technology: how to service it, how to troubleshoot it and how to access the servers. Finally, immersion cooling sometimes requires completely new server equipment, which should be a consideration both in terms of feasibility and TCO.
Both direct-to-chip and ColdLogik rear door cooling require the upfront expense of connecting the water mainline to the computer room and adding hard piping to the individual cabinets. Additionally, direct-to-chip requires alterations to the IT equipment itself: cold plates must be fitted to the processors.
Beyond the updates to the piping, ColdLogik doors offer the lowest barrier to entry for data center managers. The individual doors take only about 10 to 20 minutes to install, and their continued accessibility (doors simply swing open to access the servers) ensures streamlined maintenance processes. Finally, facility managers can see proven ROI in as little as 12 months, ensuring a low TCO for the facility.
Retrofit capability
Greenfield data centers must be designed for high density from the start, but legacy facilities face growing density demands as well and need access to high density cooling options, too.
Rear door cooling functions as a viable retrofit option for data center managers, especially if the CRAC system will remain in place for the immediate future. ColdLogik rear door coolers can be deployed at low or medium frequency (every few cabinets) to address hot spots or high demand pockets within the room and increased in frequency as necessary. See low, medium and high duty deployment for ColdLogik rear door coolers in action here.
Direct-to-chip cooling can work for adapting existing air-cooled servers to remove heat from the chips, but alterations to the IT are required. There is retrofit capability, but it’s not as streamlined. Given the high capital investment for immersion cooling, it is the weakest candidate for retrofit capabilities.
Finding a best-fit cooling solution
Data center managers have more high density cooling options than ever — solutions that will facilitate energy efficient cooling while balancing feasibility, cost and retrofit capability.
Of these solutions, rear door cooling remains the most efficient, accessible and proven technology for facility managers seeking a solution they can implement now. Interested in seeing how rear door cooling can address your facility’s increased kW demands? Start a conversation with a Sealco data center cooling expert here.