Predictive Power Management

Energy consumption is increasing, and computing accounts for a growing share of it. For many organizations, the cost of powering computers over their lifetime rivals the purchase cost of the hardware itself. We seek to improve the performance of storage systems while simultaneously reducing their energy consumption.

Predictive Grouping for Power Management

By combining predictive grouping with data placement, we can reduce the physical range of data accesses. In other words, actively collocating data can limit the number of devices that must be accessed, or simply reduce the physical activity involved in retrieving data. This is the approach taken in the PuRPLe (Predictive Reduction of Power and Latency) project, which succeeded in reducing the energy consumption of an individual disk while simultaneously reducing access latencies. We are extending this approach to collocation across multiple storage devices and servers, saving energy by allowing individual nodes to be shut down beyond what could be achieved by exploiting idle periods alone: the workload is effectively reshaped to create new energy-saving opportunities.
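The core idea of predictive grouping can be sketched as follows. This is a minimal illustration, not the actual PuRPLe implementation: it assumes a simple frequency-based successor predictor and a greedy collocation pass that chains each block with its predicted successors, so that related data lands in the same group (e.g., the same device or disk region).

```python
from collections import defaultdict

class SuccessorPredictor:
    """Frequency-based successor predictor: for each block, track which
    block most often follows it in the access stream. (An assumed,
    simplified stand-in for the predictors used in predictive grouping.)"""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, block):
        if self.last is not None:
            self.counts[self.last][block] += 1
        self.last = block

    def predict(self, block):
        followers = self.counts.get(block)
        if not followers:
            return None
        return max(followers, key=followers.get)

def collocate(blocks, predictor, group_size):
    """Greedily chain each block with its predicted successors so that
    related data is placed together, limiting the number of devices
    touched when the predicted access sequence actually occurs."""
    placed, groups = set(), []
    for b in blocks:
        if b in placed:
            continue
        group = [b]
        placed.add(b)
        nxt = predictor.predict(b)
        while nxt is not None and nxt not in placed and len(group) < group_size:
            group.append(nxt)
            placed.add(nxt)
            nxt = predictor.predict(nxt)
        groups.append(group)
    return groups
```

For example, after observing the access stream A, B, C, A, B, C, A, B, D, the predictor chains A → B → C, so those three blocks end up collocated in one group while D falls into its own.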


  1. David Essary, Ahmed Amer. “Predictive Data Grouping: Defining the bounds of energy and latency reduction through predictive data grouping and replication,” ACM Transactions on Storage,  4(1): pp.1-23, May 2008. (DOI)

  2. Matthew Craven, Ahmed Amer. “Predictive Reduction of Power and Latency (PuRPLe),” Proceedings of the 22nd IEEE/13th NASA Goddard Conference on Mass Storage Systems and Technologies (MSST 2005), Monterey, CA: April 2005, pp.237-244. (PDF, DOI, DBLP BibTeX, Presentation)


Power-Aware Caching

Predicting future data access requests can also be used to improve cache performance. As with the Aggregating Cache, we were able to use successor predictions to form groups of related items. But beyond grouping for prefetching, we were able to temper the behavior of our predictors based on their confidence in a particular prediction, coupled with the energy impact of attempting to act on that prediction. The result was a collection of power-aware predictive caching algorithms, the most notable of which was STEP (Self-Tuning Energy-safe Predictors). With STEP, we were able to employ access prediction for latency reduction while avoiding the energy penalties of a prefetch-heavy workload. Compared to competing predictors, STEP requires half as much device activity to achieve the same performance.
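The confidence-tempered decision at the heart of power-aware prefetching can be sketched as a simple expected-cost comparison. This is an assumed model for illustration, not STEP's actual self-tuning rule: prefetch only when the expected saving from a correct prediction outweighs the energy expected to be wasted on a wrong one.

```python
def should_prefetch(confidence, prefetch_energy, miss_penalty):
    """Decide whether acting on a prediction is worthwhile.

    confidence      -- estimated probability the prediction is correct
    prefetch_energy -- energy cost of fetching the predicted block now
    miss_penalty    -- cost avoided if the prefetched block is later used

    Prefetch only if the expected benefit of a correct prediction
    exceeds the expected energy wasted on an incorrect one.
    (A hypothetical cost model, not STEP's published tuning rule.)
    """
    expected_saving = confidence * miss_penalty
    expected_waste = (1.0 - confidence) * prefetch_energy
    return expected_saving > expected_waste
```

Under this model, a high-confidence prediction (say 0.9) easily justifies a prefetch, while a low-confidence one (say 0.2) is suppressed, sparing the device the activity, and the energy, of a likely-useless fetch.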


  1. James A. Larkby-Lahet, Ganesh Santhanakrishnan, Ahmed Amer and Panos K. Chrysanthis. “STEP: Self-Tuning Energy-safe Predictors,” Proceedings of the 6th International Conference in Mobile Data Management (MDM 2005), Ayia Napa, Cyprus: ACM, May 2005, pp.125-133. (DOI)

  2. Jeffrey P. Rybczynski, Darrell D. E. Long, Ahmed Amer. “Adapting Predictions and Workloads for Power Management,” Proceedings of the 14th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems (MASCOTS 2006), Monterey, CA, USA: IEEE, September 2006, pp.3-12. (DOI)

  3. Jeffrey P. Rybczynski, Darrell D. E. Long, Ahmed Amer. “Expecting the unexpected: adaptation for predictive energy conservation,” Proceedings of the 2005 ACM workshop on Storage security and survivability (StorageSS 2005), Fairfax, VA: ACM, November 2005, pp.130-134. (PDF, DOI)