Data Center Energy Savings: Start with the Servers

Nov 18, 2015

[Image: Lawrence Lab data center]

Efforts to manage energy use in data centers generally focus on improving the efficiency of HVAC systems, adopting innovative architectural approaches (such as modularity) and adding solar, hydro and other alternative energy sources.

Those are great ways to drive efficiency, of course. What may sometimes be overlooked – because it is an approach rooted in IT rather than energy – is the efficiency of the equipment itself. Unused energy is saved energy. Cutting energy use in the servers themselves lowers power and cooling requirements and, ultimately, expenditures.

There are significant and ongoing ways to cut that energy use. The first step, of course, is to measure the energy being consumed. One way of doing so, according to SearchDataCenter, is the Standard Performance Evaluation Corporation (SPEC) Power group’s SPECpower_ssj2008 benchmark. The story says that it measures the energy different servers use to run a fixed workload.

The story features a two-question Q&A with Klaus-Dieter Lange, SPEC’s board director and committee chair for SPECpower. He said that server energy issues are growing in importance because of the difficulty providers have meeting demand and the increasing costs to end users. (More information on SPECpower_ssj2008 is here.)
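The benchmark’s headline figure aggregates throughput and power across a range of load levels, with active idle contributing power but no work. As a rough illustration – this is not SPEC’s official tooling, and the readings below are invented – the overall ssj_ops/watt number can be sketched like this:

```python
# Illustrative sketch of how SPECpower_ssj2008-style results aggregate:
# total throughput across all measured load levels divided by total
# average power, including active idle. All numbers are hypothetical.

measurements = [
    # (target load, ssj_ops, avg watts) -- made-up readings
    (1.0, 3_000_000, 300.0),
    (0.5, 1_500_000, 200.0),
    (0.1,   300_000, 120.0),
    (0.0,         0, 100.0),  # active idle: draws power, does no work
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)
overall = total_ops / total_watts  # overall ssj_ops per watt

print(f"overall ssj_ops/watt: {overall:.0f}")
```

Note how the idle measurement drags the overall figure down: a server that sips power when busy but gulps it when idle scores poorly, which is exactly the behavior data center operators want to avoid.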

Power Not the Top Concern

Power is vital to data center operators, but it is not the issue that keeps them up at night. They worry more about operational questions: Are the servers working fast enough? Is the failure rate low enough? These and related questions are life and death to their business (and jobs). Energy efficiency is important, of course, but it is only one bullet point on the list.

Amir Michael and Elena Novakovskaia, who work with enterprise infrastructure analytics firm Coolan, blogged earlier this month on the tricky issue of when to replace equipment. It is a complex calculus. Servers are no different than any other technology: the cost per unit of work shrinks over time and makes new equipment attractive, demands on the business evolve and equipment reliability degrades with age. The question is when to make a move. Energy, Michael and Novakovskaia write, is one of the criteria:

Aging infrastructure costs more than you might think. Old machines lingering in a data center exact a hidden cost — with each new generation of hardware, servers become more powerful and energy efficient. Over time, the total cost of ownership drops through reduced energy bills, a lower risk of downtime, and improved IT performance.

Another server-related variable that affects the energy efficiency of data centers is how close to capacity each device operates. The emergence of virtualization – the ability to distribute loads across different physical machines – means that clever management lets servers run closer to capacity. One strategy, according to The Wall Street Journal, is to get the most bang for the buck out of each server. The story says that servers at Intel’s new data center run at utilization in the low 90s.
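Part of the reason high utilization pays off is that servers draw substantial power even when idle. A minimal sketch – assuming a simple linear power model with made-up numbers – shows why packing the same load onto fewer, busier machines saves energy:

```python
# Why consolidation saves energy, under an assumed linear power model:
# power = idle draw + utilization-proportional component.
# All figures are hypothetical.

IDLE_WATTS = 100.0   # assumed draw at 0% utilization
PEAK_WATTS = 300.0   # assumed draw at 100% utilization

def server_power(utilization):
    return IDLE_WATTS + (PEAK_WATTS - IDLE_WATTS) * utilization

TOTAL_LOAD = 9.0  # total work, in "fully busy server" units

def fleet_power(n_servers):
    utilization = TOTAL_LOAD / n_servers
    return n_servers * server_power(utilization)

# The same workload spread across 30 servers at 30% utilization
# versus packed onto 10 servers at 90%:
sprawl = fleet_power(30)
packed = fleet_power(10)
print(f"30 servers @ 30%: {sprawl:.0f} W, 10 servers @ 90%: {packed:.0f} W")
```

The busy fleet does identical work for far less power, because each running server pays the fixed idle cost whether or not it is doing useful work.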

Server energy performance will continue to improve. It has been shown repeatedly that advances in one area carry over to others. The most obvious example is the battery sector: research by Tesla and other vehicle companies is benefiting the mobile telecommunications and energy industries.

New materials also will drive server energy use down. Power Electronics last week posted a piece on experiments at Stanford University focused on the use of graphene instead of silicon for memory chips. The impact, the story says, will be felt keenly in data centers:

While consumers might appreciate the mobile application of these new technologies, engineers think post-silicon memory chips may also transform server farms that must store and deliver quick access to the vast quantities of data stored in the cloud.

Even more immediately, the explosion of the Internet of Things (IoT) will drive energy savings in microprocessors. IoT endpoints often are in inaccessible places – and there are billions of them. Finding ways to cut their power demands is vital, both from a cost standpoint and simply because it is impossible to reach them all. Over time, these shortcuts, workarounds and other tricks will be incorporated into general computing and server-specific hardware and software designs.
