Tech-Guide

Data Center Cooling: The Key to Green Computing and a Low-Carbon Transition

by GIGABYTE
Advances in processing capabilities are changing our world through technological breakthroughs. For example, high performance computing (HPC) and cloud computing can aid in the training of artificial intelligence (AI). With the advent of 5G communications, inventions like the Internet of Things (IoT), the Internet of Everything (IoE), and the Artificial Intelligence of Things (AIoT) are getting closer and closer to becoming reality. Other pioneering creations are extended reality (XR) and the metaverse. To support all these modern applications, processors are being pushed to compute faster and faster, which in turn causes them to generate more and more heat. The TDP (thermal design power) of the latest generation of processors has steadily crept up from 150 watts to more than 300 watts per chip, and it is still growing. According to a report by the Environmental Investigation Agency (EIA), data centers consumed around 1% of the world’s electricity and accounted for 0.3% of global carbon emissions in 2020.
However, even as technology continues to march forward, sustainability has become a pressing issue. The 2015 Paris Agreement, which has been adopted by 196 parties, aims to limit global warming to well below 2 degrees Celsius, and preferably to 1.5 degrees Celsius, compared to pre-industrial levels. The Climate Neutral Data Centre Pact asks European data center operators to leverage technology and digitalization to make data centers climate neutral by 2030 and carbon neutral by 2050. From this we can see that adopting data center cooling solutions that reduce the carbon footprint while enhancing processor performance is an important part of an enterprise’s CSR (corporate social responsibility) and ESG (environmental, social, and corporate governance) programs, if it wants to achieve net zero emissions.

Learn More:
Setting the Record Straight: What is HPC? A Tech Guide by GIGABYTE
Server Processors: The Core of a Server’s Performance
Glossary: What is TDP?
What are the Benefits? Why are Data Center Cooling Solutions Pivotal to Progressing toward Net Zero Emissions?
Generally speaking, the temperature inside a data center should be kept at around 21 to 24 degrees Celsius. The humidity and even the air quality should be controlled to keep the servers operating optimally. A data center cooling solution should help achieve all this without expending more energy than is necessary. A useful unit of measurement is “Power Usage Effectiveness”, or PUE for short. PUE is the data center’s total energy consumption divided by its computing equipment’s energy consumption. The ideal number is 1.0, which means no additional power is used for thermal management. A good data center cooling solution pushes the PUE as close to 1.0 as possible. According to the Uptime Institute’s “2021 Data Center Industry Survey Results”, the average PUE of surveyed data centers was 1.57. In other words, the amount of electricity used for cooling and other overhead was nearly 60% of what was used for computing. From this we can see that a data center cooling solution that improves the data center’s PUE, reduces carbon emissions, and enhances overall performance and deployment flexibility is a good way for enterprises to strike an optimal balance between sustainability and growth, and to contribute to humanity’s ongoing effort to reach net zero.《Glossary: What is PUE?》
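As a quick illustration of how PUE is computed (a simple sketch based on the survey average above, not tied to any specific facility), the ratio can be expressed in a few lines of Python:

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Survey average: for every 1.00 kWh consumed by the computing equipment,
# roughly another 0.57 kWh goes to cooling and other facility overhead.
print(pue(total_facility_kwh=1.57, it_equipment_kwh=1.00))  # 1.57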
What Keeps the Servers Cool? A Look at Data Center Cooling Solutions
When designing and building a data center, thought must already be given to the air ducts and air conditioning used to cool the servers, in order to facilitate good airflow and make sure the whole data center is kept at an optimal temperature. The geographic location of the data center and the region’s climate may also help reduce power consumption and improve the performance of the cooling systems. Currently, there are three primary ways to cool data centers: air cooling, which uses cold air to dissipate heat generated by the servers; liquid cooling, which uses tubes of coolant to remove heat from key components; and immersion cooling, which submerges servers directly in a bath of dielectric fluid to keep them cool. Immersion cooling can be further categorized as single-phase or two-phase.

Air Cooling
Air cooling relies on heat sinks, fans, and other components inside the server to dissipate heat and manage temperatures. Heat sinks increase the surface area of key components and prevent any one point from overheating. Fans and air ducts pump cool air into the server from one side, and then remove the heated air from the other side. A good air cooling system can not only adjust fan speed to maximize heat dissipation, but also make it easier for the data center’s CRAC (computer room air conditioner) and CRAH (computer room air handler) units to keep the servers cool more efficiently.

GIGABYTE Technology’s air-cooled servers utilize an innovative airflow-friendly hardware design; this means the overall airflow direction of the chassis has been evaluated with simulation software, and then fine-tuned to optimize ventilation. Powerful fans inside the server can be adjusted automatically to achieve an optimal balance between temperature control and power efficiency. GIGABYTE servers are shipped with automatic fan speed control as standard. Sensors are placed in the chassis to monitor the temperatures of key components. If the baseboard management controller (BMC) detects a change at certain critical locations, the speed of the corresponding fan (or fans) will adjust automatically. This not only keeps the servers working without a hitch, it also improves the data center’s overall PUE.
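To illustrate the general idea of sensor-driven fan control (a simplified sketch, not GIGABYTE’s actual BMC firmware; the temperature thresholds and duty-cycle values below are hypothetical), the logic resembles a mapping from a sensor reading to a fan duty cycle:

def fan_duty_cycle(temp_c):
    """Map a component temperature reading (Celsius) to a fan duty cycle (percent).
    Thresholds are illustrative, not vendor specifications."""
    if temp_c < 45:
        return 30   # quiet, low-power baseline
    if temp_c < 60:
        return 55   # moderate load
    if temp_c < 75:
        return 80   # heavy load
    return 100      # maximum cooling

# Each sensor in the chassis would feed such a curve, and the controller
# adjusts the fan (or fan zone) associated with that sensor accordingly.
print(fan_duty_cycle(68))  # 80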

A top technological university in Europe noticed rising demand for computing services across its various departments. It chose GIGABYTE’s H262 series of High Density Servers, which had the highest density of any air-cooled server available on the market at the time. The performance and computing power of a single H262-Z63 is nearly that of four standard 1U servers; however, it needs only half the space. The chassis of the H262-Z63 houses both the fan system and the power supply, reducing energy consumption and maintenance costs. The built-in Chassis Management Controller (CMC) monitors the four nodes simultaneously, resulting in less Top of Rack (ToR) cabling and fewer switch connections. This helps to make the entire server easier to manage. Overall, it reduces power consumption by 4% and the number of power supply units by 75%, and it lowers the overall number of cables (including power and 1GbE connections) by 56%. The customer’s total cost of ownership is cut drastically as a result.

Learn More:
In the Quest for Higher Learning, High Density Servers Hold the Key
《Recommended for you: About GIGABYTE's High Density Server Series》
Liquid Cooling
Liquid cooling eliminates the need for the heat sinks and fans used in an air-cooled server cooling solution. It uses coolant as the primary way of dissipating heat. Sealed tubes of coolant (called cooling loops) coil around key components inside the server and transfer thermal energy from the components to the coolant through cold plates; then, a heat exchanger removes the heat from the coolant, allowing it to circulate back into the server and repeat the cycle. In a data center that was built for air-cooled servers and has no current plans to revamp the whole cooling infrastructure, a hybrid “liquid-to-air” cooling solution may be adopted. Liquid-cooled servers are installed on standard server racks, and the cooling loops are connected to heat exchangers fitted on the same rack. Heat is expelled into the data center’s “hot aisle”, making it possible to have a mix of air-cooled and liquid-cooled servers in the same facility. Liquid-cooled servers can house more processors, so the overall computing power is increased.
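How much heat a cooling loop can carry away follows the standard sensible-heat relationship (heat removed = coolant mass flow × specific heat × temperature rise). The sketch below is illustrative only; the flow rate and temperature figures are assumptions, not the specifications of any particular GIGABYTE loop:

def loop_heat_removal_kw(flow_kg_per_s, delta_t_c, specific_heat_kj_per_kg_c=4.18):
    """Heat carried by the coolant (kW) = mass flow x specific heat x temperature rise.
    4.18 kJ/(kg*C) is the specific heat of water; treated coolants are similar."""
    return flow_kg_per_s * specific_heat_kj_per_kg_c * delta_t_c

# Example with hypothetical numbers: 0.5 kg/s of coolant warming by 10 degrees C
# carries away roughly 21 kW of heat from the rack.
print(loop_heat_removal_kw(0.5, 10))  # ~20.9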

The German Aerospace Center (DLR), which has been studying the Earth and Solar System for decades, needed to develop highly precise technologies using large and complex sets of data, such as images returned from outer space. The DLR chose GIGABYTE’s liquid-cooled High Density Server. The GIGABYTE team worked with the R&D department to repeatedly study, re-plan, and re-verify various server configurations to present the most suitable customized design to the client. In the end, they were able to achieve the three goals of “being capable of operating in a data center with an ambient temperature of 40°C and with no air conditioning equipment”, “adopting the most suitable cooling method without changing the existing mechanical and electrical infrastructure within the data center”, and “limiting the temperature of the liquid used for heat exchange”. Not only did this result in a 15% reduction in power demand compared with competing products, the servers’ footprint was also reduced by 50% while the servers provided up to twice the maximum computing power of competing products on the market. The liquid-cooled solution helped the DLR successfully build a powerful green data center in the limited space available.

Learn More:
Liquid Cooling: A Flexible, Sophisticated, and Reliable Way to Cool Servers
GIGABYTE Servers Become Part of the German Aerospace Center’s Data Center
Immersion Cooling
Immersion cooling can be separated into single-phase and two-phase immersion cooling. In both methods, the servers are submerged directly in a bath of nonconductive liquid, which transfers heat from the server components into the coolant. Not only does immersion cooling make it unnecessary to install heat sinks and fans in the servers, it is also more efficient, reliable, and scalable than air cooling. This helps the enterprise to reduce costs and achieve better thermal management. The main difference between the two methods is how the coolant itself dissipates heat after absorbing the thermal energy from the servers.

Learn More:
Comparison between Two-phase and Single-phase Immersion Cooling
《Glossary: What is Immersion Cooling?》

● Single-phase Immersion Cooling
In a single-phase liquid immersion cooling system, the server is installed in a tank filled with a non-conductive liquid medium, usually a hydrocarbon-based fluid similar to mineral oil. Heat generated by the servers is conducted into the medium through direct contact with the components. This type of system requires a pump in the coolant distribution unit (CDU) to circulate the medium to a cooling device, and additional heat rejection equipment completes the heat exchange cycle. Because this type of liquid has a high boiling point, it is not volatile, so the tank of a single-phase liquid immersion cooling system does not require a strictly sealed design or environmental controls.

The Japanese telecom leader KDDI invited GIGABYTE to build its “container-type immersion cooling small data centers” using single-phase immersion cooling technology. This new class of data center not only supports a higher density of servers, but can also be used for edge computing. By KDDI’s estimates, when the data center is operating at maximum capacity, the servers will use 50 kVA of power, while the data center’s total power consumption will only be around 53.5 kVA. The PUE therefore comes to around 1.07, a 43% reduction in electricity use compared with KDDI’s air-cooled data centers, which typically have a PUE of around 1.7.
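Using the figures KDDI cites, the arithmetic behind that PUE estimate is straightforward (a quick check, nothing more):

# KDDI's estimate: 50 kVA of server load within roughly 53.5 kVA of total facility draw.
total_kva = 53.5
it_kva = 50.0
print(total_kva / it_kva)  # ~1.07, versus ~1.7 for a typical air-cooled data center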

Learn More:
Japanese Telco Leader KDDI Invents Immersion Cooling Small Data Center with GIGABYTE
See It Now! GIGABYTE Single-Phase Immersion Cooling

● Two-phase Immersion Cooling
In a two-phase liquid immersion cooling system, servers are installed in a specially designed sealed tank that uses a non-conductive liquid medium with a low boiling point of around 50 degrees Celsius. The heat generated by the servers causes a phase change: the liquid surrounding the components boils and generates vapor, which in turn condenses back into a liquid as it touches the cold condenser coils, removing the heat in the process. The sealed tank maintains the phase change process through environmental controls, and the heat exchange cycle continues. Compared to the single-phase variant, two-phase immersion cooling does not need an additional pump to cycle the coolant, but it requires a tightly sealed tank and environmental controls to make sure the heat exchange cycle goes on smoothly.
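Because the coolant boils, heat is absorbed as latent heat of vaporization rather than as a temperature rise, which is what makes the method so effective. A rough sketch of the relationship follows; the latent-heat value is a ballpark assumption for engineered dielectric fluids, not a property of any specific product:

def boiloff_heat_kw(vapor_kg_per_s, latent_heat_kj_per_kg=100.0):
    """Heat absorbed (kW) = mass of liquid vaporized per second x latent heat of vaporization.
    ~100 kJ/kg is an assumed ballpark figure for engineered dielectric fluids."""
    return vapor_kg_per_s * latent_heat_kj_per_kg

# Example with hypothetical numbers: vaporizing 1 kg of fluid per second
# absorbs roughly 100 kW of heat without the liquid getting any hotter.
print(boiloff_heat_kw(1.0))  # 100.0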

A prominent giant in the semiconductor foundry sector began operating sustainable, future-proof “green HPC data centers” in the first quarter of 2022. GIGABYTE worked with other industry leaders to build this “two-phase immersion cooling data center”, which improves chip computing performance and reduces power consumption. The solution has a high cooling capacity of 100 kW; it is able to reduce the data center’s total power consumption by up to 30% and lower its PUE from 1.35 to below 1.08, while delivering a 10% boost in computing performance. Not only will this benefit the state-of-the-art semiconductor manufacturing process, it will also help the foundry giant usher in a new era of “green HPC data centers”.

Learn More:
Semiconductor Giant Selects GIGABYTE’s Two-Phase Immersion Cooling Solution
Building High-performing, Energy-efficient Data Centers is Key to a Sustainable Future
As demand for processing power rises and the world moves toward a net zero future, modern data centers are mostly built with green computing and sustainable operations in mind. The end goal is to increase competitiveness while being sustainable. For this reason, a data center solution that can help servers perform better while reducing the negative impact on the environment is something all modern enterprises should consider when expanding their computing capacity.

GIGABYTE uses its innovative technological knowledge and industry experience to build data centers for clients in different sectors. Our data center cooling solutions include air cooling, liquid cooling, and single-phase and two-phase immersion cooling. We can help clients build green, low-carbon data centers that enhance chip performance, reduce energy consumption, and achieve better energy efficiency. This will reduce operational costs and the impact on the environment, which is a sure way toward a sustainable future. We encourage you to reach out to our sales representatives at [email protected] for consultation on the data center cooling solution best suited to your investment plan and computing needs.