The Only Data Center Cooling Plan You Need to Beat the Heat
Imagine it’s the hottest week of the year. You’re taking time off to be with family, and then you see it: a rack inlet alarm. Then another, and another. Your server room temperature is rising, and suddenly your time off is on hold while you deal with the problem.
Today, higher rack densities and longer heatwaves are pushing cooling systems harder than ever before. The truth is, most traditional cooling technology simply can’t handle these growing loads, and ignoring inefficiencies now will cause unplanned downtime later. That’s why you need the right data center cooling equipment and strategies in place, so your facility can stay online even when you’re away. But to make this a reality, there are a few details you must consider:
- Why thermal planning has changed forever
- Your main cooling options
- How to build the perfect system by density range
- The importance of regular maintenance
The following guide will walk through each of these factors to help you make the right choices for your data center cooling.
Why Thermal Planning Has Changed Forever
Simply put: high-density racks are rewriting the rules. Traditional enterprise racks might have lived in the 5 to 10 kW range, but today’s high-density racks can pull 40, 60, or even 100 kW. More power into the rack means more heat out of the rack. The higher the density, the harder your cooling technology has to work to maintain the ideal temperature. And at a certain point, air alone struggles to keep up.
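To see why, it helps to put rough numbers on the airflow a rack needs. The Python sketch below is a back-of-the-envelope estimate, assuming all rack power becomes heat, standard air properties, and a 12 K (about 22 °F) rise between inlet and exhaust; your actual equipment and setpoints will differ.

```python
# Back-of-the-envelope airflow estimate for removing rack heat with air.
# Assumptions: all rack power becomes heat, air density ~1.2 kg/m^3,
# specific heat ~1005 J/(kg*K), and a 12 K inlet-to-exhaust temperature rise.

AIR_DENSITY = 1.2         # kg/m^3
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)
DELTA_T = 12.0            # K, assumed temperature rise across the rack

def required_airflow_cfm(rack_kw: float) -> float:
    """Airflow needed to carry away a rack's heat, in cubic feet per minute."""
    watts = rack_kw * 1000
    m3_per_s = watts / (AIR_DENSITY * AIR_SPECIFIC_HEAT * DELTA_T)
    return m3_per_s * 2118.88  # convert m^3/s to CFM

for density in (5, 10, 30, 60, 100):
    print(f"{density:>3} kW rack -> ~{required_airflow_cfm(density):,.0f} CFM")
```

Under those assumptions, a 10 kW rack needs roughly 1,500 CFM, while a 100 kW rack needs close to 15,000 CFM, which is why air-only designs run out of headroom as densities climb.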
It isn’t just evolving technology that’s affecting data center cooling needs; it’s the climate too. In many regions, heatwaves are becoming more frequent and lasting longer. This puts a massive strain on chillers, condensers, and dry coolers, which were designed for specific outdoor temperatures. When the real world runs hotter than the design, your margin shrinks and the chances of overheating go up.
Data Center Cooling Options
Modern thermal design includes four main building blocks, each operating differently and providing a unique solution:
Room Air Conditioning
Room air conditioning is the classic approach most facilities started with. It works well when rack densities are low, the room is not packed too tightly, and hot and cold air are separated. But as densities rise, room systems can waste energy by pushing cold air into the entire room instead of straight to the hot spots. They can also struggle to keep up when racks move into the 15 to 20 kW range or higher. Even so, room air conditioning is still used in almost every facility. Other cooling strategies are not eliminating it; they are making it part of a layered plan.
Row Based and Rack Based Cooling
Row-based and rack-based systems move cooling much closer to the load. For example, row-based cooling sits between racks in the row and sends cold air right into the cold aisle. On the other hand, rack-based cooling builds a heat exchanger into the rack itself, such as a rear door that pulls heat off exhaust air.
Because these systems sit closer to the heat, they do not have to push air across the whole room. And because they don’t have to work as hard as room air conditioning, they use less fan energy and add less of their own heat to the space. Generally speaking, row-based and rack-based cooling systems are a strong fit for densities up to about 30 kW per rack and for rooms where hot spots keep coming back despite traditional air conditioning. These systems are a key part of data center cooling because they bridge the gap between classic room air conditioning and full liquid systems.
Liquid Cooling Options
Liquid is better at moving heat than air, end of story. But that doesn’t mean all liquid cooling technology is the same. In fact, there are three distinct types:
- Direct to chip liquid cooling: liquid picks the heat up at the source.
- Rear door heat exchangers: a coil at the back of the rack absorbs heat from exhaust air.
- Immersion cooling: servers sit in a bath of dielectric liquid that absorbs heat directly from every component.
As high-density server racks become more common, so does liquid cooling. Its ability to capture heat right at the source makes it ideal for advanced server racks that emit more heat than traditional air cooling can handle. Liquid cooling technology also saves you money in the long run, since it reduces the need for room air conditioning to run constantly.
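To put “liquid is better at moving heat than air” into rough numbers, the short sketch below compares how much heat the same volume of water and air can carry for the same temperature rise. It uses textbook properties near room temperature; real coolants, including dielectric fluids, will differ.

```python
# Back-of-the-envelope comparison: heat carried per unit volume of water vs. air
# for the same temperature rise. Textbook properties near room temperature.

AIR_VOLUMETRIC_HEAT = 1.2 * 1005     # J/(m^3*K): density * specific heat
WATER_VOLUMETRIC_HEAT = 998 * 4186   # J/(m^3*K)

ratio = WATER_VOLUMETRIC_HEAT / AIR_VOLUMETRIC_HEAT
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
# -> on the order of 3,000-3,500x, which is why liquid loops can stay so compact
```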
Controls, Monitoring, and Services
Even the best design needs to be monitored. It’s the only way to make sure every piece of equipment is working the way it should. This requires thermal monitoring with sensors for inlet temperatures, humidity, and pressure, as well as data center infrastructure management (DCIM) tools to see hot spots in real time. Monitoring your environment isn’t the only way to keep cooling equipment running smoothly, though. Be sure to implement preventive maintenance like cleaning coils and filters, checking fans and pumps, and verifying settings and alarms before the summer months. Getting an expert to routinely check on your cooling technology keeps it working optimally and running so well you don’t even have to think about it.
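As a simplified illustration of the kind of check a DCIM or monitoring tool performs, the Python sketch below flags racks whose inlet temperatures drift outside a recommended envelope. The 18 to 27 °C thresholds follow the commonly cited ASHRAE recommended range; the rack names and readings here are made up, and a real deployment would pull them from your monitoring system.

```python
# Simplified inlet-temperature check, similar in spirit to a DCIM alarm rule.
# Thresholds roughly follow the ASHRAE recommended envelope (18-27 C at the inlet);
# adjust them to your own policy and equipment specs.

INLET_LOW_C = 18.0
INLET_HIGH_C = 27.0

def check_inlet_temps(readings: dict[str, float]) -> list[str]:
    """Return alarm messages for racks whose inlet temps fall outside the envelope."""
    alarms = []
    for rack, temp_c in readings.items():
        if temp_c > INLET_HIGH_C:
            alarms.append(f"{rack}: inlet {temp_c:.1f} C is above {INLET_HIGH_C} C")
        elif temp_c < INLET_LOW_C:
            alarms.append(f"{rack}: inlet {temp_c:.1f} C is below {INLET_LOW_C} C")
    return alarms

# Hypothetical readings; a real system would pull these from rack sensors.
print(check_inlet_temps({"A01": 24.5, "A02": 29.1, "B07": 17.2}))
```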
Building the Perfect System by Density Range
Different rack densities will require different data center cooling solutions. After all, if you have a fairly low density, there’s no need to pay for an expensive liquid cooling system that provides more than you actually need.
5 kW to 10 kW per rack
At low rack densities, start by strengthening your room air conditioning. That’s likely all you need. If your goals are to keep your equipment safe, reduce hot spots, or trim the energy wasted by poor airflow, you will need the following:
- A well-designed room air conditioning system
- Hot aisle or cold aisle containment
- Blanking panels in empty rack spaces
- Good cable management and sealed floor cutouts
With these steps, you can improve your data center cooling without changing your mechanical plant.
10 kW to 30 kW per rack
If your racks use 10 kW to 30 kW, you may notice repeated hot spots and rising fan speeds. Left unchecked, this quickly wastes money. To avoid overcooling the rest of the server room and keep energy costs under control as loads climb, implement the following:
- Room air conditioning for the space
- Row based units beside the hottest racks
- Rear door heat exchangers on specific racks
By putting cold air or cooling coils right next to the heat, you reduce the need to flood the entire room with cold air. In many cases, fan power drops and chiller loads become more stable as a result.
30 kW to 60 kW per rack
If you are dealing with these higher rack densities, air alone will not cut it. To improve total data center cooling efficiency, liquid cooling should be added to your system. This doesn’t mean you need to do away with room air conditioning altogether, though. Hybrid cooling systems use both traditional air cooling and rear door heat exchangers to keep every piece of equipment at its ideal temperature. This may be a more expensive transition upfront, but according to Vertiv, optimized air and liquid hybrid designs cut total data center power use by about 10% and improve total usage effectiveness (TUE) by more than 15% in high-density environments. That’s good news for uptime and energy efficiency alike.
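As a rough illustration of what a 10% total-power reduction can mean in dollars, the short calculation below uses assumed numbers, a 1 MW facility and a $0.10/kWh electricity rate, which are placeholders and not figures from the Vertiv study.

```python
# Rough illustration of a ~10% total-power reduction using assumed numbers.
# The facility size and electricity rate below are placeholders, not study data.

facility_kw = 1000        # assumed average facility draw, kW
savings_fraction = 0.10   # ~10% reduction cited for optimized hybrid designs
rate_per_kwh = 0.10       # assumed electricity rate, USD

kw_saved = facility_kw * savings_fraction
annual_savings_usd = kw_saved * 24 * 365 * rate_per_kwh
print(f"~{kw_saved:.0f} kW saved, roughly ${annual_savings_usd:,.0f} per year")
```

Even with conservative assumptions, the savings add up quickly at scale.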
60 kW to 100+ kW per rack
The more AI workloads a facility adopts, the more power it will need to keep those workloads running, and that often pushes rack densities above 60 kW. While necessary, these densities dramatically increase heat generation and energy demand, which makes choosing the right cooling technology more important than ever. In these instances, liquid cooling options like direct to chip and immersion systems become necessary to capture the massive amount of heat right at the source. A hybrid model is also often adopted to accommodate these high rack densities: liquid cooling handles the hottest components, while air cooling technology keeps the rest of the rack and the room under control.
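To summarize the ranges above in one place, here is a small Python sketch that maps rack density to the cooling approach this guide describes. The break points mirror the sections above and are general guidance, not hard engineering limits.

```python
# Map a rack density to the cooling approach described in this guide.
# Break points follow the density ranges above; treat them as guidance, not rules.

def suggest_cooling(rack_kw: float) -> str:
    if rack_kw <= 10:
        return "Room air conditioning plus containment and blanking panels"
    if rack_kw <= 30:
        return "Room AC plus row-based units or rear door heat exchangers"
    if rack_kw <= 60:
        return "Hybrid: air cooling plus rear door heat exchangers and liquid"
    return "Direct-to-chip or immersion liquid cooling within a hybrid design"

for density in (8, 25, 45, 80):
    print(f"{density} kW/rack -> {suggest_cooling(density)}")
```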
The Importance of Regular Maintenance
Implementing the right cooling technology will help keep your equipment at its ideal temperature, but the only way to ensure this stays true is by properly maintaining your cooling systems. This requires regular preventive measures carried out by experts who understand your cooling systems and how they interact with the rest of your facility. But what does this involve, and how often should preventive maintenance be performed on your data center cooling equipment? Let’s break down what a strong cooling system maintenance plan should look like:
Weekly/Monthly Maintenance
While it’s important to have a service provider you trust perform preventive maintenance, there are certain steps you can take yourself to identify issues early. A good rule of thumb is to perform the following checks at least once a month, though some facilities may benefit from doing so once a week:
- Verify unit status and alarm logs
- Inspect return and supply airflow paths
- Check filters and replace if dirty
- Confirm condensate drains are clear
- Review room temperature and humidity trends
- Validate remote monitoring connectivity
- Log any new or redistributed loads
Essentially, you are routinely checking for anything that isn’t working quite right and cleaning components to prevent small issues from growing into major disruptions.
Quarterly Maintenance
Any maintenance beyond what’s listed above should be performed by an experienced service provider. They know the cooling technology inside and out and have the tools needed to adjust or repair specific cooling system components. Every three months, your service provider should perform preventive maintenance such as:
- Replacing or cleaning air filters (if not done monthly)
- Inspecting and cleaning evaporator and condenser coils
- Verifying fan and motor operation
- Checking belt condition and tension (if applicable)
- Inspecting electrical terminations (visual/thermal)
- Verifying humidifier operation (if equipped)
- Testing staging and lead/lag sequences
- Reviewing trend logs for performance drift
- Checking refrigerant sight glass and pressures
Quarterly preventive maintenance is where much of the risk reduction happens, so it’s vital to never skip it.
Semi-Annual Maintenance
Basic inspections and cleanings are necessary, but they aren’t all that’s required to keep cooling technology running smoothly. Semi-annual maintenance focused on system reliability is also important and should involve the following:
- Perform detailed coil cleaning (chemical if required)
- Calibrate temperature and humidity sensors
- Inspect contactors and relays under load
- Validate high/low pressure safeties
- Functional test of emergency shutdown sequences
- Confirm economizer operation (if equipped)
- Review redundancy performance (N+1 validation)
These semi-annual inspections are often where technicians discover hidden degradation. The earlier it is caught, the sooner repairs or replacements can be made to keep systems running smoothly.
Annual Maintenance
Yearly reviews help determine whether your data center cooling system can actually handle the current and future plans for the facility. These maintenance measures provide the information needed to plan ahead, thanks to these steps:
- Infrared scan under load
- Compressor performance analysis
- Refrigerant circuit leak check
- Control firmware and setpoint review
- Capacity verification vs. current IT load
- Airflow balance verification
- Review of maintenance history and failure trends
- Lifecycle condition assessment and budget planning
Remember, these preventive maintenance practices need to be performed by an expert technician who understands your cooling technology and how it works in your facility. Only then will you be able to identify inefficiencies and potential risks before they cause disastrous unplanned downtime.
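To tie the four tiers together, here is a small Python sketch of how a team might track when each maintenance tier is next due. The task lists are abbreviated from the checklists above, and the intervals are the nominal ones in this plan; your service provider’s schedule should take precedence.

```python
# Sketch of a maintenance calendar built from the tiers described above.
# Task lists are abbreviated; a real schedule should come from your service provider.

from datetime import date, timedelta

INTERVAL_DAYS = {"monthly": 30, "quarterly": 91, "semi-annual": 182, "annual": 365}

TASKS = {
    "monthly":     ["Check filters", "Verify alarms", "Review temp/humidity trends"],
    "quarterly":   ["Clean coils", "Verify fans and motors", "Check refrigerant pressures"],
    "semi-annual": ["Calibrate sensors", "Test safeties", "Validate N+1 redundancy"],
    "annual":      ["Infrared scan", "Capacity vs. IT load review", "Lifecycle assessment"],
}

def next_due(last_done: dict[str, date]) -> dict[str, date]:
    """Return the next due date for each maintenance tier that has a last-done date."""
    return {tier: done + timedelta(days=INTERVAL_DAYS[tier])
            for tier, done in last_done.items()}

# Hypothetical last-completed dates for two of the tiers
print(next_due({"monthly": date(2024, 6, 1), "quarterly": date(2024, 4, 15)}))
```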
Learn How to Prepare Your Facility for Anything
Data center cooling technology is changing fast to keep up with rising rack densities and longer, hotter summers. As a result, you need to understand your facility and the new cooling technologies available to find the right solution for you. Sign up for our newsletter to get clear tips for improving data center cooling and updates on the latest cooling technologies that make your thermal plan future ready.
Data Center Cooling FAQs
What is room air conditioning in a data center?
Room air conditioning uses HVAC units to cool the entire white space. Cold air is sent into the room, and hot air returns to the units. It works well for lower density racks, but it can waste energy and struggle with hot spots at higher loads.
When should I move to row-based or rack-based cooling?
If you see repeated hot spots above 15 to 20 kW per rack, or if your room air conditioning is running at full speed all the time, it is time to look at row or rack systems. These close-coupled options send cold air right to the hot racks and help control energy use.
Why is liquid cooling so important for AI workloads?
Liquid cooling technology moves heat away from chips much better than air. Direct to chip and immersion systems pull heat off GPUs and CPUs right at the source, which helps dense AI racks run at full speed without throttling. Liquid also supports more efficient data center cooling and can lower total energy use at high densities.
Can liquid cooling and air cooling work together?
Yes. Many modern sites use hybrid designs. Liquid cooling handles the hottest components, while room air conditioning and in-row systems handle the rest of the rack and the room. This mix lets you upgrade step by step instead of rebuilding everything at once.
How does cooling affect energy and sustainability goals?
Good thermal design lowers waste. Efficient data center cooling means less fan power, fewer chiller hours, and better use of free cooling. That leads to lower energy overhead and helps you hit carbon and cost targets without slowing down your AI roadmap.
What do I need at 5 to 10 kW per rack?
At low density, you can usually rely on room air conditioning with strong airflow management. Use containment, blanking panels, and good cable practices to keep cold air at the front of racks. You likely do not need liquid cooling technology yet, but planning for future growth is still smart.
What should I use for racks above 30 kW?
In this band, hybrid systems shine. Direct to chip liquid cooling plus rear door or row units give you strong control of hot spots and efficient data center cooling. Room air conditioning still plays a role for the room, but liquid handles most of the load from AI racks.
How do I prepare my cooling for heatwaves?
Combine design and operations. Make sure your cooling technology has enough capacity and redundancy, and build a simple playbook for hot weeks. Clean coils, test alarms, and plan how to adjust workloads when outdoor temps spike.
What should I look for in a cooling service provider?
Ask about:
- Their experience with data center cooling at your density levels
- How they support room air conditioning, row, rack, and liquid systems
- What monitoring and reporting they provide
- How often they perform preventive maintenance
You want a partner who can support both your current loads and your future plans.
How often should data center cooling equipment be maintained?
Most facilities benefit from at least annual preventive maintenance, with more frequent checks in harsh climates or very dense rooms. Filters, coils, belts, pumps, and sensors should all be reviewed. Regular inspections keep cooling technology efficient and help you catch small problems before they cause downtime.