Data Center Expansion with the Digital Twin

28 January, 2019

‘Out with the old and in with the new’, as the saying goes. Unfortunately, it’s not so simple when it comes to data center expansion. Over the years, the ever-increasing volume of data exchange has prompted rapid growth in data centers, which some might call a ‘data center revolution’. However, building new facilities to meet this demand can be a costly exercise. Instead, businesses are often forced to consolidate the assets in their existing data centers, while ensuring that performance and operational efficiency are maintained.

A Fortune 500 insurance company found themselves in exactly this scenario: they were faced with impending deployments while their existing data center was already experiencing operational issues. Great Lakes Case and Cabinet Inc. was contracted to provide a comprehensive evaluation of the data center. They found that, after years of differing management styles and ‘best practices’, the facility had severe airflow issues. It was concluded that a computational fluid dynamics (CFD) analysis of the data center was needed to establish a baseline, against which potential future improvements could be measured.

 

Figure 1. Data Center Digital Twin for Data Center Expansion

6SigmaRoom, Future Facilities’ CFD software, was used to build a ‘digital twin’ of the data center. The digital twin is a 3D virtual representation of the actual data center, encompassing its power and cooling infrastructure and its IT assets. This model provided insight into how airflow could be managed to accommodate increased density inside the facility. The data center digital twin is invaluable for studies such as this: it allows potential solutions to be tested in a virtual environment prior to deployment, eliminating the enormous risk and cost that come with implementing changes in the live facility without sufficient prior testing.
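To illustrate the basic idea in the simplest possible terms: the workflow is to keep a baseline model of the room, apply each proposed change to a copy, and compare the two before anything is touched in the live facility. The toy Python sketch below is purely illustrative and is not 6SigmaRoom’s data model or API (which works on a full 3D CFD representation of the room); it reduces each cabinet to an aisle and an IT load, then compares aisle loads before and after a hypothetical cabinet move.

```python
# Illustrative only: a toy "baseline vs. proposal" comparison.
# A real digital twin (e.g. 6SigmaRoom) models full 3D geometry and airflow;
# here each cabinet is reduced to a name, an aisle and an IT load in kW.
from copy import deepcopy

baseline = {
    "cab-01": {"aisle": "A", "load_kw": 6.0},
    "cab-02": {"aisle": "A", "load_kw": 8.5},
    "cab-03": {"aisle": "B", "load_kw": 3.0},
    "cab-04": {"aisle": "B", "load_kw": 4.5},
}

def load_per_aisle(model):
    """Sum IT load per aisle for a model of the room."""
    totals = {}
    for cab in model.values():
        totals[cab["aisle"]] = totals.get(cab["aisle"], 0.0) + cab["load_kw"]
    return totals

# Proposed change: move cab-02 from aisle A to aisle B (tested on a copy,
# never on the "live" baseline).
proposal = deepcopy(baseline)
proposal["cab-02"]["aisle"] = "B"

print("baseline :", load_per_aisle(baseline))   # {'A': 14.5, 'B': 7.5}
print("proposal :", load_per_aisle(proposal))   # {'A': 6.0, 'B': 16.0}
```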

Phase One: Server Farm Consolidation

The data center comprised three areas: the server farm, the network core and the SAN equipment. The server farm, at approximately 550 sq ft, contained the largest number of populated cabinets, resulting in noticeable temperature differentials throughout the room. It also had a disparate arrangement of cabinets from multiple vendors. Before the impending deployments could be considered, the room’s existing infrastructure needed to be consolidated so that it could better meet future business demands. As part of this process, the data center’s digital twin was first updated with the consolidation plans, including proper definition and segregation of the hot and cold aisles within the model. The results clearly indicated that the non-homogeneous cabinets within the planned cold-aisle strategy were leading to hot-air recirculation, raising the inlet temperatures of the cabinets.
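The effect behind that observation can be sketched with a simple energy balance, quite separate from the full CFD model. Air passing through a cabinet picks up a temperature rise of roughly ΔT = P / (ρ·cp·Q); if a fraction r of that warm exhaust finds its way back to the inlet, the inlet temperature climbs above the supply temperature by r·ΔT/(1−r). The Python sketch below uses illustrative figures only (not values from this study) to show how even modest recirculation pushes inlet temperatures up:

```python
# Back-of-the-envelope estimate of how exhaust recirculation raises cabinet
# inlet temperature. Illustrative numbers only -- the actual study used full
# CFD, not this lumped model.

RHO = 1.2      # air density, kg/m^3
CP = 1005.0    # specific heat of air, J/(kg*K)

def exhaust_rise(power_w, airflow_m3s):
    """Temperature rise across a cabinet from an energy balance: dT = P / (rho*cp*Q)."""
    return power_w / (RHO * CP * airflow_m3s)

def inlet_temp(supply_c, power_w, airflow_m3s, recirc_fraction):
    """Steady-state inlet temperature when a fraction of exhaust mixes back into the inlet.

    T_in = (1-r)*T_supply + r*T_exhaust, with T_exhaust = T_in + dT;
    solving for T_in gives T_in = T_supply + r*dT/(1-r).
    """
    dt = exhaust_rise(power_w, airflow_m3s)
    return supply_c + recirc_fraction * dt / (1.0 - recirc_fraction)

# Example: 5 kW cabinet, 0.5 m^3/s airflow, 18 C supply air.
for r in (0.0, 0.1, 0.2, 0.3):
    print(f"recirculation {r:.0%}: inlet ~{inlet_temp(18.0, 5000.0, 0.5, r):.1f} C")
# 0% -> 18.0 C, 10% -> ~18.9 C, 20% -> ~20.1 C, 30% -> ~21.6 C
```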

Custom supports and brush seals were utilized so that aisle doors could be properly installed, which helped to alleviate the recirculation issue. Using the data center digital twin, the engineers were able to quantify the difference between the baseline and consolidated models, thereby minimizing the risk and cost associated with implementing the consolidation efforts.

Figure 2. Result planes showing the difference in cold aisle temperatures before (left) and after (right) the planned consolidation.

Phase Two: SAN Equipment and Other Infrastructure 

Continuing with the consolidation of the rest of the data center, the network core and storage racks were segregated and positioned following the same hot- and cold-aisle strategy employed previously for the server farm. These areas contained multiple cabinets from other vendors, which housed side-ventilated Cisco Catalyst switches. These switches were migrated to 30”W ES enclosures from GLCC®, allowing baffle kits to be used to direct the airflow from this side-breathing equipment correctly through the front and rear of the enclosures.

Adjustable AisleLok gap panels were also installed between the cabinets, completing the aisle segregation while leaving floor space available to meet changing demands as the business needs evolve. Similar efforts were carried out to move the facility’s networking cores to Great Lakes ES enclosures: these enclosures were fitted with an external sidecar, which helped to improve the data center’s cable infrastructure.

The data center digital twin’s CFD results provided numerical validation of the improvements delivered by the consolidation efforts, while also aiding in planning for the upcoming deployments.

Figure 3. ASHRAE temperature plots of the room before (left) and after (right) consolidation.
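For readers unfamiliar with plots of the kind shown in Figure 3: ASHRAE’s recommended envelope for most air-cooled IT equipment puts inlet temperatures at roughly 18-27 °C, and these plots colour the room against that band. The short Python sketch below performs the same sort of check on a handful of sampled inlet temperatures; the values are invented for illustration, not data from this facility.

```python
# Classify cabinet inlet temperatures against the ASHRAE recommended envelope
# (roughly 18-27 C for most air-cooled IT equipment). Temperatures below are
# made-up examples, not measurements from the facility described in the post.

RECOMMENDED_LOW_C = 18.0
RECOMMENDED_HIGH_C = 27.0

def classify(inlet_c):
    """Return where an inlet temperature sits relative to the recommended band."""
    if inlet_c < RECOMMENDED_LOW_C:
        return "below recommended"
    if inlet_c > RECOMMENDED_HIGH_C:
        return "above recommended"
    return "within recommended"

inlets = {"cab-01": 21.5, "cab-02": 24.0, "cab-03": 28.5, "cab-04": 19.0}

for name, temp in sorted(inlets.items()):
    print(f"{name}: {temp:.1f} C -> {classify(temp)}")

out_of_band = [n for n, t in inlets.items() if classify(t) != "within recommended"]
print(f"{len(out_of_band)} of {len(inlets)} cabinets outside the recommended envelope: {out_of_band}")
```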


Phase Three: Planning for Capacity

As the final step of the consolidation, it was imperative to account for the cooling and power redundancies the client required within the room. After the successful elimination of the room’s airflow inefficiencies, the client was interested in exploring how an efficient control system for the cooling infrastructure could be implemented. Cooling infrastructure accounts for a large portion of a data center’s operating expenditure, so finding areas for improvement here can lead to significant cost savings. The proposed plans were again modeled in the data center digital twin, which mitigated the uncertainty and risk involved in such an undertaking. Using the digital twin in this way also facilitated further cooling and power failure studies, helping to formulate a response plan for cooling or power failures.

Figure 4. ASHRAE temperature plot during a cooling failure scenario, demonstrating how cabinets in the production rows are dependent on cooling systems further away.
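A failure study like the one in Figure 4 asks two questions: does enough cooling capacity remain when a unit drops out, and does the remaining airflow actually reach the affected cabinets? The second question is exactly what the CFD model answers; the first can be sketched with simple arithmetic. The hedged example below (unit capacities and load are illustrative, not the client’s figures) checks whether a room survives the loss of its largest cooling unit:

```python
# First-order N+1 check: can the room carry its IT load if the largest cooling
# unit fails? Capacities and load are illustrative, not the client's figures.
# Note this only checks gross capacity -- whether the *air* from the surviving
# units actually reaches every cabinet (as in Figure 4) is what the CFD shows.

cooling_units_kw = {"CRAC-1": 60.0, "CRAC-2": 60.0, "CRAC-3": 60.0}
it_load_kw = 95.0

def survives_worst_single_failure(units, load_kw):
    """Return (ok, failed_unit, remaining_kw) after removing the largest unit."""
    worst = max(units, key=units.get)
    remaining = sum(cap for name, cap in units.items() if name != worst)
    return remaining >= load_kw, worst, remaining

ok, failed_unit, remaining = survives_worst_single_failure(cooling_units_kw, it_load_kw)
print(f"Loss of {failed_unit}: {remaining:.0f} kW of cooling remains "
      f"for {it_load_kw:.0f} kW of IT load -> {'OK' if ok else 'SHORTFALL'}")
```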

Conclusion: The Digital Twin Enables Data Center Expansion

Great Lakes used Future Facilities’ 6SigmaRoom software to model a digital twin of their Fortune 500 client’s existing data center; this data center digital twin helped them to plan the consolidation of the facility and gave them tangible evidence of the value their infrastructure proposals would deliver. The digital twin model has become part of the client’s workflow since the consolidation and is being updated regularly with any changes that occur in the live facility. This provides IT and facilities teams with the tools and visibility they need to run the data center efficiently and successfully. 


Blog written by: Tom Demetris, Mechanical Engineer at Great Lakes Case and Cabinet & Sarah Ikemoto, Marketing Specialist
