Data center operators in Europe are unsure whether their facilities are properly equipped to handle rising temperatures linked to climate change, according to a survey of 700 data center consultants. With summers growing hotter, only 40% are sure that their facilities have the necessary infrastructure in place to handle rising temperatures if the grid fails (DCD).
This fear has many operators looking into renewable energy options, which is a commendable and important step toward carbon-neutral data centers. We explore one such approach, using CFD to design a purpose-built data center for waste heat recovery, here.
We would argue, however, that the greenest data center is the one you never have to build. Data center operators are building three data centers for every two they need (The Digital Twin for Today’s Data Center). This doesn’t just mean another building; it means more power and cooling, a larger carbon footprint and wasted resources across the board.
So, how do you make sure your infrastructure can handle rising temperatures and potential grid failure, and is being used to its fullest capability? This blog will walk you through how to stress-test your designs and maximize your capacity planning efforts using the Data Center Digital Twin.
A Digital Twin is a dynamic, digital representation of a real-world object or system that is linked to measured data such as temperature, pressure and vibration, and that simulates the object’s operations. The Data Center Digital Twin, then, is a 3D virtual replica of your data center that is linked to real-world data and, more significantly, can simulate its physical operations under any working conditions.
To reiterate, what’s important to note here is that the Data Center Digital Twin simulates operations. If your “Digital Twin” only models the space, it’s a 3D model; what makes a 3D model a true Digital Twin is its ability to simulate. What do we mean by simulate? “Simulate,” in this context, refers to the physics of the model. Simulation, sometimes called engineering simulation or predictive simulation, typically involves calculations that cannot be solved on paper; in this case, engineering simulation predicts data center airflow and temperature distribution.
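To give a feel for the physics involved, here is a minimal sketch of the bulk heat balance that underlies airflow and temperature prediction. A real CFD solver resolves this locally across millions of cells; this toy function only illustrates the aggregate relationship, and all values (rack power, airflow, inlet temperature) are illustrative assumptions, not figures from the original post.

```python
# Bulk heat balance for a rack: T_out = T_in + P / (rho * cp * Q).
# A CFD simulation solves the same conservation laws cell by cell;
# this sketch applies them to one rack as a whole.

RHO_AIR = 1.2    # air density, kg/m^3 (approx. at sea level, 20 C)
CP_AIR = 1005.0  # specific heat of air, J/(kg*K)

def exhaust_temperature(inlet_c: float, power_w: float, airflow_m3s: float) -> float:
    """Bulk exhaust air temperature of a rack dissipating power_w watts."""
    mass_flow = RHO_AIR * airflow_m3s  # kg of air per second through the rack
    return inlet_c + power_w / (mass_flow * CP_AIR)

# Illustrative example: a 10 kW rack fed 22 C air at 1.5 m^3/s.
t_out = exhaust_temperature(22.0, 10_000.0, 1.5)
print(f"{t_out:.1f} C")  # roughly 27.5 C
```

The same relation also shows why starving a zone of airflow matters: halve Q and the temperature rise across the rack doubles.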
Capacity planning at an analytical level, through spreadsheets, is still quite common in the data center industry. In these spreadsheets, owner-operators must map out the IT needed to meet application requirements and match it against the facility’s available power, space and cooling.
While spreadsheets and analytical capacity calculations may get the job done initially, the projections become less accurate over time. For example, you can keep the overall airflow delivered to your data center exactly the same over time, but with the internal layout constantly changing thanks to updated or new IT, the amount of air delivered to each zone will deviate from the data center’s original design. What you’ve always done will no longer work if the IT keeps fluctuating and upgrading, which it will: increased demand, high-density designs and the introduction of liquid cooling systems will make sure of that.
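The drift described above can be sketched in a few lines. The zone names, airflow figures and `Zone` structure below are all illustrative assumptions, not data from any real facility: the point is that a spreadsheet-style aggregate check can pass while individual zones are starved.

```python
# Sketch of why aggregate (spreadsheet-level) capacity checks drift:
# total airflow stays constant, but per-zone demand diverges from the
# original design as IT is refreshed. All numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    supplied_m3s: float  # airflow the original design delivers to this zone
    required_m3s: float  # airflow today's IT in this zone actually needs

zones = [
    Zone("A (legacy servers)",   supplied_m3s=4.0, required_m3s=2.0),
    Zone("B (new high-density)", supplied_m3s=4.0, required_m3s=6.0),
]

# The spreadsheet view: aggregate supply still covers aggregate demand.
total_supplied = sum(z.supplied_m3s for z in zones)
total_required = sum(z.required_m3s for z in zones)
print(f"Facility total: {total_supplied} m^3/s supplied vs {total_required} m^3/s required")

# The physical view: zone B is starved even though the totals balance.
for z in zones:
    status = "OK" if z.supplied_m3s >= z.required_m3s else "UNDERSUPPLIED"
    print(f"Zone {z.name}: {status}")
```

Catching the zone-level shortfall before it bites is exactly what simulating the facility’s airflow, rather than summing a column, is for.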
The most accurate predictions are those that account for the physics of your data center. The only risk-free way to test your data center’s redundancy when part of your infrastructure fails, whether that’s an individual ACU or the power grid itself, is with simulation. This video goes into more detail:
2020 has brought to the surface the need to be proactive in almost everything that we do. It’s not just about accommodating more powerful, denser designs. Data center operators now have hot summers and lower carbon footprints to monitor.
We design our software with our customers’ success in mind, meaning we want to make sure you can implement remedial action while planning for future loads. Check out our case study on how Citigroup did just that using 6SigmaDCX here.
8 September 2020