by Angela Miller
One of the biggest issues for my little data center at NCTD is handling the heat load of the room.  I was reading an article online from the Georgia Institute of Technology that said cooling the data center has become more complicated as the average heat load per cabinet has climbed from 1-5 kilowatts to 28 kilowatts in the last five years.  We can easily see this in our room: one cabinet holds 10 rack-optimized HP DL360 servers, each with a 1U space between them for airflow; next to it sits a rack with a c3000 8-server blade chassis and a SAN; and next to that sits a rack with a c7000 16-server chassis holding the Cisco VoIP equipment, with no spaces between servers.  This little example shows how, in just five years, the density of the typical server rack in our room has increased immensely.
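To make that density jump concrete, here is a rough back-of-the-envelope sketch in Python.  The per-server wattages are hypothetical round numbers, not measurements from our racks, but they show how a dense blade chassis leaves an older rack of spaced 1U servers far behind:

```python
# Rough per-rack heat load estimate.  IT power in is roughly heat out,
# so watts of electrical draw is a fair proxy for watts of heat.
# All wattages below are hypothetical round numbers for illustration.

def rack_heat_kw(servers: int, watts_each: float) -> float:
    """Estimate a rack's heat load in kilowatts."""
    return servers * watts_each / 1000.0

# Older-style rack: 10 x 1U servers with spacing, ~400 W each.
legacy_rack = rack_heat_kw(servers=10, watts_each=400)          # 4.0 kW

# Dense rack: 16-blade chassis, ~450 W per blade, plus ~1 kW of
# chassis overhead (fans, management modules, interconnect switches).
blade_rack = rack_heat_kw(servers=16, watts_each=450) + 1.0     # 8.2 kW

print(f"Legacy rack: {legacy_rack:.1f} kW")
print(f"Blade rack:  {blade_rack:.1f} kW")
```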

We also have first-hand experience with the problems this increased heat load can cause for the facility itself.  While we have a raised floor in the room, it was designed primarily for cable management rather than heat mitigation, so the floor tiles are solid and do not allow cooler air to be funneled up through them to the cabinets.  About three years ago the heat in the room exceeded the capacity of the air conditioners, resulting in both a flood of condensation in the room and a blown AC unit that took down the data center.

I have previously blogged about our current poor cooling solution installed as a result of this outage: two residential-class air handling units on the floor of the data center with a fabricated venting system designed to pull in the hot air from the floor (?) and push the cold air from the top vents directed at various angles throughout the room.  This system makes the room extremely uncomfortable for my staff to work in, so the vents inevitably get adjusted, and who can blame them?  I myself moved one of the vents just an inch higher so I could stand in the room for an hour, and then forgot to move it back.  That one little adjustment raised the temperature in the racks by an average of 3 degrees while the vent was moved.

Clearly this is not a sustainable solution.  So as we embark on the redesign of the data center, cooling solutions have been front and center in the conversation.  The Green Grid has published seven steps to consider when designing a cooling solution for the green data center:

  1. Developing an air management strategy
  2. Moving cooling systems closer to the load
  3. Operating at a higher delta-T (see the airflow sketch after this list)
  4. Installing economizers
  5. Using higher-specification and performance equipment
  6. Using dynamic controls
  7. Maintaining higher operating temperatures
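
Step 3 is worth a quick illustration.  The heat a stream of air can carry is proportional to the temperature rise (delta-T) across the equipment, so widening the delta-T means moving less air for the same load.  A minimal sketch using the common sensible-heat rule of thumb for air (CFM = 3.16 x watts / delta-T in degrees F); the delta-T values are illustrative choices, and the 28 kW load is the per-cabinet figure cited above:

```python
# Airflow required to remove a given heat load at a given delta-T.
# Standard sensible-heat rule of thumb for air at sea level:
#   CFM = 3.16 * watts / delta_T_F

def required_cfm(watts: float, delta_t_f: float) -> float:
    """Cubic feet per minute of airflow needed to carry `watts` of heat."""
    return 3.16 * watts / delta_t_f

load_watts = 28_000  # one dense cabinet at 28 kW (figure cited above)

for delta_t in (10, 20, 30):  # illustrative delta-T values, in Fahrenheit
    print(f"delta-T {delta_t:2d}F -> {required_cfm(load_watts, delta_t):,.0f} CFM")

# delta-T 10F -> 8,848 CFM
# delta-T 20F -> 4,424 CFM
# delta-T 30F -> 2,949 CFM
```

Tripling the delta-T cuts the required airflow to a third, which is exactly why the Green Grid calls it out as a design lever.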

We kept these guidelines in mind when evaluating the options for cooling the data center.  We have looked at a variety of cooling solutions for the facility, but several things make it difficult to be innovative in this space.  The first is that this room is in the basement of a former bank building, so we are strictly limited on the height and footprint of the room.  Therefore we cannot be more creative with our raised floor: it is only 8 inches high, and raising it to allow for venting under the floor is not in the cards.

We also must contend with walls that support vaults on two floors above the data center, limiting what we can do with the venting and air handling outside of the room.  Given these constraints, the recommendation from Logicalis and Roel was to install a pod system.  This approach will allow us to encapsulate the racks, create hot and cold zones, and provide in-line cooling right where it is needed.  This is not ideal for every data center; for example, we were almost unable to use this solution because the footprint of the room was 1 foot too short for the necessary clearance around the pod.  Fortunately, we were able to recapture some space by moving an internal wall out slightly, allowing us to just fit the equipment into the design.

It is also important to understand the tradeoff with an encapsulated system like the APC pods: once it is installed within the walls of this room, I will not have the ability to grow the data center past this size.  The pod will fix the number of racks we can install for the foreseeable future.  I cannot move walls again, nor can I migrate to a new space within this building.  So we must be smart in the design phase to get a full ten years of investment and growth out of this space.  Choosing a pod also increases the budget versus sticking with the current approach of open racks in the space.  But given the other design criteria, the pod solution is the clear winner on energy efficiency and heat handling.
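Because the rack count is fixed once the pod goes in, a simple growth projection is worth running before we lock in the design.  A minimal sketch of that arithmetic; the capacity, starting occupancy, and growth rates below are hypothetical placeholders, not our actual planning figures:

```python
# How many years does a fixed rack capacity last under compound growth?
# All numbers below are hypothetical placeholders for illustration.

def years_until_full(racks_used: float, racks_total: int,
                     annual_growth: float) -> int:
    """Whole years of compound growth before demand exceeds capacity."""
    years = 0
    while racks_used <= racks_total:
        racks_used *= 1 + annual_growth
        years += 1
    return years

capacity = 12  # total racks the pod can hold (hypothetical)
in_use = 7     # racks occupied on day one (hypothetical)

for growth in (0.03, 0.05, 0.08):
    print(f"{growth:.0%} annual growth -> capacity exceeded in "
          f"~{years_until_full(in_use, capacity, growth)} years")
```

If a plausible growth rate burns through the headroom well short of ten years, the answer has to be more racks in the pod design or denser racks, because the room itself cannot grow.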

While this is the basic design we have elected to pursue, I also requested that we begin by performing an airflow analysis of the current facility, both with the air conditioners running and without.  Such a study might reveal some interesting design criteria for us to keep in mind as we move forward.  I have a feeling we might find significant bypass airflow issues to deal with (cooled air that never passes through the equipment, escaping instead through gaps and openings such as unsealed cable cutouts and wall penetrations).  Our initial monitoring with simple environmental sensors also shows that we have a moisture problem in the room in addition to the cooling issues.
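Given the condensation flood mentioned earlier, one cheap check on those sensor readings is a dew-point calculation: any surface colder than the room's dew point (an AC coil, a supply vent, a chilled pipe) will collect water.  A minimal sketch using the standard Magnus approximation; the sample readings are made up for illustration, not pulled from our monitors:

```python
import math

# Dew point from temperature and relative humidity via the Magnus
# approximation.  Surfaces below this temperature will condense water.
B, C = 17.62, 243.12  # Magnus coefficients for water vapor (temp in Celsius)

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point in Celsius."""
    gamma = math.log(rel_humidity_pct / 100.0) + (B * temp_c) / (C + temp_c)
    return C * gamma / (B - gamma)

# Sample readings (made up for illustration, not from our monitors).
readings = [(24.0, 55.0), (26.5, 62.0), (22.0, 70.0)]  # (temp C, RH %)

for temp, rh in readings:
    print(f"{temp:.1f}C at {rh:.0f}% RH -> dew point {dew_point_c(temp, rh):.1f}C")
```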

I will post more as we get into this project so that you can see the practical realities of how we weigh energy efficiency, sustainability, and design constraints throughout the project.

Dig Deeper on the Issues:

I relied on the following sites for this post:

The Green Grid
American Power Conversion (APC)
Georgia Institute of Technology

None of the entities in this post has provided compensation or incentive to discuss their products or services.