Challenges of data center thermal management

The need for more performance from computer equipment in data centers has driven power consumption to levels that strain thermal management within these facilities. When the computer industry switched from bipolar to CMOS transistors in the early 1990s, low-power CMOS technology was expected to resolve all problems associated with power and heat. However, equipment power consumption with CMOS has risen rapidly during the past 10 years and has surpassed that of the bipolar equipment installed 10 to 15 years ago. Data centers are being designed with 15-20-year life spans, and customers must know how to plan for the power and cooling within these data centers. This paper provides an overview of some of the ongoing work to operate within the thermal environment of a data center. Some of the factors that affect the environmental conditions of data-communication (datacom) equipment within a data center are described. Since high-density racks clustered within a data center are of most concern, measurements are presented along with the conditions necessary to meet the datacom equipment environmental requirements. A number of numerical modeling experiments have been performed in order to describe the governing thermo-fluid mechanisms, and an attempt is made to quantify these processes through performance metrics.

Introduction
Because of technology compaction, the information technology (IT) industry has experienced a large decrease in the floor space required to achieve a constant quantity of computing and storage capability. However, power density and heat dissipation within the footprint of computer and telecommunications hardware have increased significantly. The heat dissipated in these systems is exhausted to the room, which must be maintained at acceptable temperatures for reliable operation of the equipment. Data center equipment may comprise several hundred to several thousand microprocessors. The cooling of computer and telecommunications equipment rooms has thus become a major challenge.

Background

Considering the trends of increasing heat loads and heat fluxes, the focus for customers is on providing adequate airflow to the equipment at a temperature that meets the manufacturers' requirements. This is a very complex problem considering the dynamics of a data center, and it is just beginning to be addressed [1-8]. There are many opportunities for improving the thermal environment of data centers and for improving the efficiency of the cooling techniques applied to data centers [8-10]. Airflow direction in the room has a profound effect on the cooling of computer rooms, where a major requirement is control of the air temperature at the computer inlets. Cooling concepts with a few different airflow directions are discussed in [11]. A number of papers have focused on airflow distribution related to whether the air should be delivered overhead or from under a raised floor [12-14], ceiling height requirements to eliminate "heat traps" or hot air stratification [12, 15], raised-floor heights [16], and proper distribution of the computer equipment in the data center [13, 17] such that hot spots or high temperatures would not exist. Different air distribution configurations, including those described in subsequent sections, have been compared with respect to cooling effectiveness in [14, 16, 18, 19].
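Because the requirement above is delivering enough air at an acceptable temperature, it helps to make the underlying energy balance explicit: the heat a rack rejects equals the air density times the volumetric airflow times the specific heat times the inlet-to-outlet temperature rise. The sketch below applies that relation to estimate the airflow a rack needs; the rack power and allowable temperature rise are illustrative assumptions, not values taken from this paper.

```python
# Minimal sketch of the air-side energy balance Q = rho * Vdot * cp * dT,
# rearranged to estimate the airflow needed to remove a given rack heat load.
# The numerical inputs below are illustrative assumptions, not paper data.

RHO_AIR = 1.2      # kg/m^3, approximate air density near sea level
CP_AIR = 1005.0    # J/(kg*K), specific heat of air at constant pressure

def required_airflow_m3s(rack_power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) that removes rack_power_w with a delta_t_k rise."""
    return rack_power_w / (RHO_AIR * CP_AIR * delta_t_k)

if __name__ == "__main__":
    power = 20000.0   # W, hypothetical high-density rack
    delta_t = 15.0    # K, assumed allowable air temperature rise across the rack
    flow = required_airflow_m3s(power, delta_t)
    print(f"{flow:.2f} m^3/s (~{flow * 2118.88:.0f} CFM)")  # 1 m^3/s ~ 2118.88 CFM
```

For a 20-kW rack and a 15 K rise, this gives roughly 1.1 m^3/s (about 2,300 CFM), which illustrates why airflow delivery, not just chilled-water capacity, becomes the limiting factor at high rack powers.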

Trends: Thermal management consortium for data centers and telecommunications rooms

Since many of the data center thermal management issues span the industry, a number of equipment manufacturers formed a consortium in 1998 to address common issues related to thermal management of data centers and telecommunications rooms. Since the power used by datacom equipment was increasing rapidly, the group's first priority was to create the power density trend chart for the industry's datacom equipment, shown in Figure 1, to aid in planning data centers for the future. This chart, which has been widely referenced, was published in collaboration with the Uptime Institute [20]. Since publication of the chart, rack powers have exceeded 28,000 W, leading to heat fluxes based on the rack footprint in excess of 20,000 W/m².
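The footprint-based heat flux quoted above is simply the rack power divided by the floor area the rack occupies. A minimal sketch of the metric follows; the 1.4 m² footprint used here is an assumption chosen to reproduce the order of magnitude cited in the text, not a value given in the paper, and actual equipment dimensions should be substituted when applying it.

```python
# Minimal sketch of the footprint-based heat flux metric: rack power divided
# by the floor area of the rack. The footprint value is an assumption chosen
# to match the order of magnitude quoted in the text above.

def footprint_heat_flux_w_per_m2(rack_power_w: float, footprint_m2: float) -> float:
    """Heat flux (W/m^2) based on the rack's floor footprint."""
    return rack_power_w / footprint_m2

if __name__ == "__main__":
    rack_power = 28000.0   # W, rack power cited in the trend discussion
    footprint = 1.4        # m^2, assumed footprint (rack plus local clearance)
    flux = footprint_heat_flux_w_per_m2(rack_power, footprint)
    print(f"{flux:,.0f} W/m^2")  # ~20,000 W/m^2, consistent with the figure above
```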

Recent activities of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE)

In January 2002, ASHRAE was approached to create an independent committee to specifically address high-density electronic heat loads. ASHRAE accepted the proposal and eventually formed the technical committee known as TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment. The first priority of TC 9.9 was to create a thermal guidelines document that would help standardize the designs of equipment manufacturers and help data center facility designers create efficient, fault-tolerant operation within the data center. The resulting document, "Thermal Guidelines for Data Processing Environments," was published in January 2004 [21]. More recently, another publication, "Datacom Equipment Power Trends and Application," published in February 2005 [22], provided an update to the power trends documented in [20] and shown here in Figure 1. The heat load of some datacom equipment was found to be increasing at an even faster rate than that documented in the original chart published by the Uptime Institute [20]. Other ASHRAE publications on data center thermal management will follow, with one planned for the end of 2005 that will provide information on air and liquid cooling of data centers, energy conservation, contamination, acoustics, and related topics.