Power Savings on the Road to Cloud Computing

by Kent Christensen | Jan 7, 2013

From the internet to corporate America and government entities, everyone seems to be talking about the cloud. Organizations can leverage multiple forms of the cloud. Public cloud services are represented by companies like Facebook, Google, and Amazon. Internal or “private cloud” services are provided behind the IT firewall of private companies and government entities. Hybrid clouds coordinate private cloud services with public clouds to optimize an overall cloud architecture.

Whether public, private, or hybrid, cloud computing offers a shared IT infrastructure where services are available on a pay-per-use basis. This environment emphasizes agility and rapid reuse of highly efficient, standard components.

Cloud computing’s case for power reduction

Today, data centers consume an estimated 1 to 1.5 percent of the world's total power. With cloud computing, it's possible for data centers to run much more efficiently.

So, how does cloud computing translate into power savings? According to a 2011 study by the Carbon Disclosure Project (CDP), the use of cloud computing by large U.S. companies could save over $12 billion in annual energy costs by 2020. The study also estimates annual carbon reductions by 2020 equivalent to the emissions of 5.7 million cars driven for a year.

Google offers another example of cloud-based data center efficiency. The company claims its data centers use 50 percent less power than most other data centers.

Many data centers are starting to show significant reductions in power as they embrace cloud-centric architectures. From air-side and water-side free cooling to hot and cold aisle containment, they have also begun to learn from the forward-thinking data center approaches of their public cloud counterparts.

Savings along the way

With private clouds, power savings happen along the journey to cloud, not just at its final destination. For most corporations and government entities, a move to private cloud computing occurs in phases, often over several years. Upgrade costs are always a factor weighed against cloud computing's benefits. Power savings are an added bonus that accrues along the way.

At Datalink, we see private cloud environments evolving with a few fundamental building blocks:

–Physical consolidation. Legacy servers are replaced by smaller, more efficient servers. Server manufacturer Dell maintains that its blade servers use less power and achieve 20 percent more performance per watt than traditional rack servers.

Older servers also typically utilize only about 5 to 10 percent of their resources; many sit nearly idle, hosting outdated applications that are seldom used. Separate server/data storage silos have proved inefficient, with many applications running better on consolidated servers and shared storage systems.

–VDCs and virtualization. Server virtualization allows application workloads that once required their own physical servers to run as virtual machines alongside other applications, all on the same physical server. Per VMware, physical server counts can be reduced by as much as 15:1, and we've seen similar reductions. For each physical server that is virtualized, companies can save roughly 7,000 kWh of electricity annually and reduce CO2 emissions by 4 tons, the equivalent of taking 1.5 cars off the highway.

Virtual data centers (VDCs) extend virtualization technology past servers into virtual data storage and networks. VDCs are agile environments with resource pools that mimic early private clouds. They also use fewer servers, storage, and networking components than legacy environments. For cloud computing, VDC platforms offer the option to migrate or power down virtualized application workloads, often in real time, as needed.
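As a rough illustration, the per-server figures cited above (a 15:1 consolidation ratio, roughly 7,000 kWh and 4 tons of CO2 saved annually per server removed) can be combined into a back-of-the-envelope estimate. These are vendor-cited approximations, not measured values, so the sketch below is illustrative only:

```python
# Back-of-the-envelope estimate of annual virtualization savings,
# using the approximate per-server figures cited in the text.
KWH_SAVED_PER_SERVER = 7000   # annual electricity saved per server virtualized
CO2_TONS_PER_SERVER = 4       # annual CO2 reduction per server virtualized
CONSOLIDATION_RATIO = 15      # physical servers per host after virtualization

def virtualization_savings(physical_servers: int) -> dict:
    """Estimate annual savings from consolidating legacy physical servers."""
    hosts_after = -(-physical_servers // CONSOLIDATION_RATIO)  # ceiling division
    servers_removed = physical_servers - hosts_after
    return {
        "hosts_after": hosts_after,
        "servers_removed": servers_removed,
        "kwh_saved_per_year": servers_removed * KWH_SAVED_PER_SERVER,
        "co2_tons_per_year": servers_removed * CO2_TONS_PER_SERVER,
    }

# Example: consolidating 150 legacy servers at 15:1 leaves 10 hosts,
# removing 140 servers from the floor.
print(virtualization_savings(150))
```

A data center retiring 150 legacy servers under these assumptions would save on the order of 980,000 kWh per year, which is why power savings accrue at each phase of consolidation rather than only at the end.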

–Unified computing. The next step for many VDCs is a unified computing pool or “pod” consisting of pre-integrated server, network, and storage components for use in the data center. Going by trade names like Vblock, FlexPod, VSPEX, SRA and V-Scape, these are pre-tested, multi-vendor solutions that accelerate a move to private clouds. They also significantly reduce power consumption.

One example is the Cisco Unified Computing System (UCS), which is part of many pod solutions. The Cisco UCS uses half the components and needs less cabling, power, and cooling than legacy servers. According to Cisco, over 30 billion kilowatt-hours of energy can be saved by replacing outdated systems with UCS and virtualization technology. This is equivalent to 35 million tons of CO2, or the energy output of 15 U.S. coal-fired electric plants.

How efficiency breeds power reductions

When used as building blocks for cloud computing, such efficient systems and technologies bring significant power reductions. However, these reductions can be hard to quantify, because measuring them requires metrics and specialized sensor equipment that most IT organizations do not track.

Several published examples exist, however, that lend credence to the idea that the journey to cloud brings environmental savings. In one case, a large global law firm reduced data center power consumption by roughly 30% through storage consolidation and the virtualization of its servers. Another large financial firm was able to reduce its global data centers from 72 down to 20 by means of consolidation, server virtualization and a move to private cloud. Government entities at both the city and state level also noted significant savings and efficiencies in power use after consolidating and virtualizing their environments. In one case, this even amounted to over $3 million in savings on IT power and cooling.

Kent Christensen is the virtualization practice manager for Datalink (Minneapolis, MN).
