Managing Energy Consumption for the Surging Data Center Market

30 October, 2018

Data center operations today are not the same as they were 10 years ago. Between the rapid growth of facilities, increases in data processing demands and new complexities that arise with scale, the way data centers operate has evolved significantly.

Measuring Value

Today, managing costs means closely monitoring data center energy usage, captured by a metric called PUE, or power usage effectiveness. By dividing total facility energy by the energy consumed by IT equipment, PUE provides a way of comparing the efficiency of a server closet with that of a large co-location center. This leads to a question: for every dollar spent on computing power, how much is spent to keep the power on and the servers cooled? Operators constantly strive to reduce PUE, because every dollar spent on energy overhead is one less being spent on operations.
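The calculation itself is simple. Here is a minimal sketch with illustrative numbers, not figures from this article:

```python
# PUE: total facility energy divided by the energy consumed by IT equipment.
def pue(total_facility_energy: float, it_equipment_energy: float) -> float:
    return total_facility_energy / it_equipment_energy

# A facility that draws 1,500 MWh a year to support 1,000 MWh of IT load:
print(pue(1_500, 1_000))  # 1.5 -- every unit of IT energy carries 0.5 units of overhead
```

A PUE of 1.0 would mean every watt entering the facility reaches the IT equipment; the gap above 1.0 is the cooling and power-delivery overhead that operators work to shrink.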

When it comes to designing large-scale data centers, best practices are no longer sufficient. Rather, advanced analysis of airflow networks becomes critical. Asking harder questions about IT loads, reliability and efficiency quickly reveals that PUE alone is insufficient for managing IT assets. What will efficiency be when the IT load ramps down? What if servers are overheating, sacrificing reliability for the sake of saving energy? How effectively is the data center utilizing its installed cooling capacity? These questions cannot be answered fully by PUE, and they have spawned a new generation of performance metrics. Designers and operators must build on PUE by identifying and targeting more advanced, and more appropriate, metrics.
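To illustrate the first of these questions, here is a hypothetical sketch of how PUE can degrade as IT load ramps down while much of the facility overhead stays fixed; the load and overhead figures are assumptions for illustration only:

```python
# Hypothetical example: PUE at partial IT load when part of the overhead
# (UPS losses, lighting, baseline fan power) does not scale down with the load.
fixed_overhead_kw = 150        # assumed load-independent overhead
variable_overhead_ratio = 0.2  # assumed overhead that scales with IT load (e.g. cooling)

for it_load_kw in (1000, 500, 250):
    total_kw = it_load_kw + fixed_overhead_kw + variable_overhead_ratio * it_load_kw
    print(f"IT load {it_load_kw:4d} kW -> PUE {total_kw / it_load_kw:.2f}")
# IT load 1000 kW -> PUE 1.35
# IT load  500 kW -> PUE 1.50
# IT load  250 kW -> PUE 1.80
```

A facility that meets its PUE target at full design load may miss it badly at partial load, which is exactly the kind of behavior these newer metrics try to capture.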

Optimizing Airflow

Many factors beyond climate contribute to data center inefficiency: poor design or operation of the cooling systems and, more importantly, the variability that the facility’s IT equipment introduces from a cooling standpoint. To this day, it is still common to see overuse of fans, wasted airflow and frigid conditions in data centers.

Improving cooling efficiency starts with airflow. The best air-cooled data centers circulate just enough air to accomplish the primary goal of keeping the servers stable. In smaller server rooms, designing airflow may be as simple as following key best practices. In a large data center with hundreds of server racks, however, designers must go beyond those techniques. Airflow networks in larger, underfloor air systems are complex and can be tricky to balance. This is when advanced analysis becomes critical.
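As a rough, back-of-the-envelope sketch (my own illustration, not a method from the article), the airflow a rack needs can be estimated from its heat load and the allowable air temperature rise using the sensible heat equation for air:

```python
# Airflow needed to remove a given heat load at a given inlet-to-outlet temperature rise.
AIR_DENSITY = 1.2         # kg/m^3, approximate at typical data center conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def required_airflow_m3s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) from Q = rho * V_dot * cp * dT."""
    return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

# Example: a 10 kW rack with an 11 K (about 20 F) air temperature rise
flow = required_airflow_m3s(10_000, 11)
print(f"{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")  # roughly 0.75 m^3/s, ~1600 CFM
```

Supplying far more air than such an estimate suggests is one of the clearest signs of wasted fan energy; supplying less risks recirculation and hot spots.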

Figure 1. Simulation allows one to examine full room data and evaluate high-level architecture such as hot/cold aisle containment, CRAH unit performance, power delivery networks and so on.

Maximizing Effectiveness

A key solution is to utilize a data center digital twin that can simulate airflow and temperature via computational fluid dynamics (CFD).

Data center CFD helps to answer several detailed questions:

• How much airflow do the servers really need?

• Should there be full aisle containment, or are end panels enough?

• How high can the cooling temperature be without risking the health of the IT equipment?

• Are this many computer room air conditioners necessary?

• Are this many perforated tiles really needed?

• Why does a particular server see a lower inlet temperature than the one right next to it?

CFD provides a map to good design by offering specific feedback regarding airflow patterns, which designers can use to maximize cooling effectiveness.

Figure 2. Vent temperatures on a row of cabinets. The temperature difference between the top and bottom of the cabinets within contained rows may not be obvious, but CFD provides a view into granular details such as this, which aids better deployment planning.

Federal Implementation

Two years ago, the 2016 Data Center Optimization Initiative required that all federal agencies implement active PUE monitoring, and that all existing tiered federal data centers achieve a PUE of 1.5 or less by September 2018. You can track the federal progress on this initiative here: www.itdashboard.gov.

Putting It All Together

The Thomas Jefferson National Accelerator Facility conducts cutting-edge research on subatomic particles year-round. This makes energy efficiency critical to the laboratory’s operations and means that decisions on data center expansion can be very tough.

With a PUE goal of 1.4, space constraints and expanding IT needs, consolidating the data center with minimal disruption to operations and without a loss of computing capacity seemed almost impossible. Working directly with data center personnel, the team created a digital twin model of the expanded data center, allowing the proposed layout to be optimized before a single server was relocated.

The CFD study yielded the following outcomes:

• Confirmation that the proposed cooling solution would maintain inlet conditions for all servers.

• Assurance of continuous cooling in the event of failure of a single computer room air conditioner unit.

• Consolidation of cold aisles to create free space for future expansion.

• Validation of the aisle containment solution for energy efficiency.

• Establishment of server airflow limits to be used in IT equipment procurement.

• Elevation of the recommended cooling supply temperature to expand the availability of free cooling throughout the year.

To tie it all together, the results of the study were fed back into 6SigmaRoomLite, CFD analysis software from Future Facilities. The data center airflow optimization, coupled with improvements to the chiller plant, demonstrated a predicted annual PUE below the 1.4 threshold. Best of all, metering data gathered in the months since the data center returned to operation indicates an average PUE of 1.25.

Original article published in the September-October 2017 issue of The Military Engineer; Vol. 109, No. 710. Cited/Excerpted with permission of the Society of American Military Engineers.



Blog written by Coles Jennings, Senior Energy Engineer at Mason & Hanger, and Sarah Ikemoto, Marketing Specialist
