Data centers are energy vampires*. According to the Environmental Protection Agency (EPA), data centers consumed about 1.5% of all U.S. electrical power as of 2006. The same report predicted that data center power use would increase by double-digit percentages every year for the next several years.
Energy efficiency advances in data centers: what’s missing?
Notably, this predicted growth in power consumption is likely to occur despite two waves of data center energy-efficiency advances that have already taken place:
1. The first wave was hardware. It involved advances in cooling (e.g., HVAC) and compact server form factors (e.g., blade servers).
2. The second wave was software. It focused on virtualization to allow data centers to consolidate software processes on fewer servers.
On average, servers in a data center run at only 20-30% of peak capacity, leaving headroom for occasional peak demand from events like viral videos, breaking news, ad campaigns, or periodic data processing tasks. Yet an idle server still consumes about half its peak power, since it must keep disks spinning, fans running, and circuits energized.
So approximately half the energy consumed by data centers today is simply wasted.
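A quick back-of-the-envelope calculation shows where that figure comes from. The sketch below assumes a simple linear power model and hypothetical wattages (these numbers are illustrative, not from the EPA report):

```python
# Illustrative sketch of why lightly loaded servers waste energy.
# Assumes a linear power model: P(u) = P_idle + (P_peak - P_idle) * u,
# where u is the utilization fraction (0.0-1.0). Wattages are hypothetical.

P_PEAK = 400.0          # watts at full load (assumed)
P_IDLE = 0.5 * P_PEAK   # idle draw is about half of peak, per the text

def power_draw(utilization):
    """Power (watts) consumed at a given utilization fraction."""
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

# Four servers each running at 25% of capacity...
scattered = 4 * power_draw(0.25)   # 4 * 250 W = 1000 W

# ...versus the same total work consolidated onto one fully loaded
# server, with the other three shut down entirely.
consolidated = power_draw(1.0)     # 400 W

savings = 1 - consolidated / scattered
print(f"scattered: {scattered:.0f} W, consolidated: {consolidated:.0f} W")
print(f"energy saved by consolidation: {savings:.0%}")  # 60%
```

Under this simple model, four servers at 25% load burn 1000 W to do the work one fully loaded 400 W server could handle, which is roughly the "half the energy is wasted" picture the EPA numbers suggest.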
A future generation of servers might be able to throttle power consumption down to zero as computational loads decrease, but such energy proportionality is beyond today's state of the art. The best we can do now is consolidate computational loads onto the smallest possible hardware footprint, operate that hardware at peak efficiency (in terms of computations per watt), and shut down unused resources. With these types of efficiency improvements, it's possible to recover up to half the energy data centers consume today.
3. The third wave is software managing the hardware. It uses model-based optimization and control to “adapt” energy use to resource needs.
Over the last few years, PARC has developed a suite of energy management concepts and innovations that we refer to as Adaptive Energy. The Adaptive Energy suite is based on PARC competencies in control, optimization, and model-based planning — many of which we developed to optimize the reliability and adaptability of complex production printing systems for Xerox. The model-based approach requires multidisciplinary expertise because it incorporates software, particularly artificial intelligence reasoning, into hardware and physical systems.
We have applied the model-based control and optimization approach in different industries including transportation, factory automation, aerospace, and of course, cleantech.
This first application of the Adaptive Energy suite:
- monitors virtual machines (VMs) in real time;
- optimizes VM resource utilization based on current and forecast demand; and
- works through a hypervisor to consolidate virtual machines on the smallest possible hardware footprint.
The consolidation enables active servers to be run at high loads — and thus at peak efficiency — while allowing unused resources to be shut down to save electricity. Better still, the Adaptive Energy suite allocates server resources based on task priorities, thus ensuring that high-priority tasks (such as sales transactions) satisfy their Quality-of-Service (QoS) requirements at all times.
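The post doesn't describe PARC's actual optimizer, but the consolidation step can be sketched with a simple first-fit-decreasing packing heuristic that places high-priority VMs first and reserves headroom on each server for QoS. All names, loads, and parameters below are hypothetical:

```python
# Hypothetical sketch of priority-aware VM consolidation: pack VM loads
# onto as few servers as possible (first-fit decreasing), placing
# high-priority VMs first and capping each server below full capacity
# to leave QoS headroom. Illustrative only, not PARC's actual optimizer.

def consolidate(vms, server_capacity=1.0, headroom=0.1):
    """vms: list of (name, load_fraction, priority) tuples, where a lower
    priority number means more important. Returns a list of servers,
    each represented as a list of VM names."""
    usable = server_capacity - headroom   # per-server QoS headroom
    # Place important VMs first; break ties by larger load first.
    ordered = sorted(vms, key=lambda v: (v[2], -v[1]))
    servers = []  # each entry: [remaining_capacity, [vm names]]
    for name, load, _prio in ordered:
        for srv in servers:
            if srv[0] >= load:            # first server it fits on
                srv[0] -= load
                srv[1].append(name)
                break
        else:
            # No room anywhere: "power on" a new server for this VM.
            servers.append([usable - load, [name]])
    return [srv[1] for srv in servers]

placement = consolidate([
    ("sales-db", 0.40, 0),   # high-priority sales transactions
    ("web-1",    0.30, 1),
    ("web-2",    0.30, 1),
    ("batch",    0.50, 2),   # low-priority periodic processing
])
print(len(placement), "servers needed:", placement)
```

Here 1.5 servers' worth of load ends up on two well-utilized machines instead of four lightly loaded ones; the remaining hardware can be shut down, and the headroom parameter keeps each active server from saturating when demand spikes.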
We recently announced this approach to preventing energy waste in data centers with our commercialization partner for this application, PowerAssure, a developer of power management solutions for data centers. The work is partially funded by a U.S. Department of Energy grant, awarded under the American Recovery and Reinvestment Act (ARRA) of 2009, focused on lowering energy use by data centers and telecommunications systems. The PARC side of the effort is led by Dan Greene.
*[By the way, I borrowed the term “energy vampires” from a song by Peter Hammill (of Van Der Graaf Generator fame)]
Editor: Sonal Chokshi