
The days of making energy-efficiency improvements to data centers in bits and pieces are on their way out, an industry expert says. Instead, data center operators are having conversations about energy-efficient design much earlier in the planning process, Erich Hamilton, director of engineering at the data center infrastructure equipment company DAMAC, writes this week in a BetaNews article.
“What we are seeing is the overall solution of deploying racks and aisle containment structures at the same time, versus the traditional model of filling a data center full of hardware and then bringing in someone to help with efficiency and containment,” Hamilton writes. This is especially true for cooling solutions that improve a data center’s power usage effectiveness (PUE) readings.
One way data center operators are focusing their efforts is by managing computer room air conditioning (CRAC) units so that cooler outside air gets pulled inside. Greg Peterson, IBM's manager of global energy and environment, described how the company placed temperature sensors throughout its data center, installed controllers on its CRAC units, and now uses an intelligent cooling management system to avoid wasting energy. “We see the cooling energy go way down — many times cut by over 50%,” he told Energy Manager Today in July.
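The article does not describe the control logic IBM uses, but the general pattern it points to — poll temperature sensors, then adjust CRAC output only where it is needed — can be sketched roughly as follows. This is an illustrative example only; the sensor names, setpoints, and helper functions are hypothetical, not drawn from IBM's system.

```python
# Illustrative sketch of sensor-driven CRAC control. All thresholds,
# readings, and helper functions are hypothetical.

TARGET_INLET_C = 24.0   # assumed rack-inlet temperature target
DEADBAND_C = 1.0        # tolerance band to avoid constant hunting around the target

def read_inlet_temps() -> dict[str, float]:
    """Placeholder for polling rack-inlet temperature sensors."""
    return {"rack-a1": 22.5, "rack-a2": 25.8, "rack-b1": 23.9}

def adjust_crac_setpoint(zone: str, delta_c: float) -> None:
    """Placeholder for commanding the CRAC controller serving a zone."""
    print(f"{zone}: change supply setpoint by {delta_c:+.1f} C")

for zone, temp in read_inlet_temps().items():
    if temp > TARGET_INLET_C + DEADBAND_C:
        adjust_crac_setpoint(zone, -1.0)   # zone running hot: cool harder
    elif temp < TARGET_INLET_C - DEADBAND_C:
        adjust_crac_setpoint(zone, +1.0)   # zone overcooled: back off and save energy
```

The energy savings in this kind of scheme come from the second branch: rather than running every CRAC unit at full output, cooling is dialed back wherever the sensors show it is not needed.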
Hamilton also points to strategies that prevent hot and cold air from mixing. Hot aisle containment systems (HACS) and cold aisle containment systems (CACS) help improve the efficiency of a data center’s traditional cooling system, he says. A CACS ensures that IT equipment is cooled to the manufacturer’s specifications and creates an airflow pattern that eliminates hot spots, increasing operating efficiency. At the same time, a HACS can double the cooling capacity of a CRAC unit by sending hot exhaust air directly to the AC intake point for recycling, Hamilton explains.
Other ways that operators can make data centers more energy efficient include carefully planning power distribution, hardware, and networking. Geographic location is also a big factor, as evidenced by Apple’s recent plans for a data center in Iowa and Capgemini’s unique modular data center on a brownfield site in the UK.
A data center’s PUE is a metric calculated by dividing the total power entering the facility by the power actually used to run the IT equipment. “A score of one depicts the optimal level of data center efficiency, meaning all power going into the data center is being used to power IT equipment,” Hamilton writes.
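To make the arithmetic concrete, here is a minimal sketch of the calculation using hypothetical power figures; the numbers and function name are illustrative, not taken from Hamilton's article.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by IT power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 1,500 kW in total,
# of which 1,200 kW reaches the IT equipment.
print(pue(1500, 1200))  # 1.25 -- the closer to 1.0, the more efficient
```

In this hypothetical case, every watt delivered to the servers carries another 0.25 W of overhead for cooling, power conversion, and other facility loads.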
In recent years, data centers run by smaller firms and tech giants alike have started to close in on that ideal PUE of 1.0. What they have in common: a focus on energy efficiency from the very start.