Practical Strategies for Power-Efficient Computing
Practical Strategies for Power-Efficient Computing Technologies
Proceedings of the …, 2010
After decades of continuous scaling, further advancement of silicon microelectronics across the entire spectrum of computing applications is today limited by power dissipation. While the trade-off between power and performance is well-recognized, most recent studies focus on the extreme ends of this balance. By concentrating instead on an intermediate range, an ~8× improvement in power efficiency …
Reducing power in high-performance microprocessors
Proceedings of the 35th Annual Design Automation Conference (DAC '98), 1998
Power consumption has become one of the biggest challenges in high-performance microprocessor design. The rapid increase in the complexity and speed of each new CPU generation is outstripping the benefits of voltage reduction and feature-size scaling. Designers are thus continuously challenged to come up with innovative ways to reduce power while trying to meet all the other constraints imposed on the design. This paper presents an overview of the issues related to power consumption in the context of Intel CPUs. The main trends driving the increased focus on design for low power are described, along with system and benchmarking issues and the sources of power consumption in a high-performance CPU. Techniques that have been tried on real designs in the past are described, and the role of CAD tools and their limitations in this domain is discussed. Finally, areas that need increased research focus in the future are pointed out.
A Performance On-Demand Approach to Power-Efficient Computing
Complexity-Adaptive Processing (CAP) addresses increasing microprocessor power dissipation through on-the-fly, low-cost hardware adaptation and related circuit techniques so as to better match hardware complexity and speed to application demands. Specific results include adaptive processing elements and hardware/software control techniques, a Multiple Clock Domain processor that saves energy via fine-grain voltage scaling, power-efficient issue queue and register file techniques, a low-leakage dynamic logic circuit and associated control logic for functional units, multi-threaded power and noise reduction, efficient on-chip dc-dc conversion and clock control circuits, low power domino logic and interface circuits, and interconnect width optimization for low power. Overall, a several-fold reduction in power is demonstrated via the collective application of these various techniques.
Improvement of Power-Performance Efficiency for High-End Computing
Left unchecked, the fundamental drive to increase peak performance using tens of thousands of power-hungry components will lead to intolerable operating costs and failure rates. High-performance, power-aware distributed computing reduces the power and energy consumption of distributed applications and systems without sacrificing performance. Recent work has shown that the characteristics of single-processor, memory-bound non-interactive codes and of distributed, interactive web services can be exploited to conserve power and energy with minimal performance impact. Our novel approach is to exploit the parallel performance inefficiencies characteristic of non-interactive, distributed scientific applications, conserving energy using DVS (dynamic voltage scaling) without significantly impacting time-to-solution (TTS), thereby reducing cost and improving reliability. We present a software framework to analyze and optimize distributed power-performance using DVS, implemented on a 16-node Centrino-based cluster. We use our framework to quantify and compare the power-performance efficiency of parallel Fourier transform and matrix transpose codes. Using various DVS strategies we achieve application-dependent overall system energy savings as large as 25% with as little as 2% performance impact.
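As an illustrative sketch only (the paper's framework is not reproduced here), the following Python snippet shows one way the DVS idea can be applied on Linux: drop the clock during a communication- or memory-bound phase and restore it afterwards. The sysfs paths, the "userspace" governor, and the frequency values are assumptions about the target platform, not details taken from the paper.

```python
# Minimal DVS sketch, assuming a Linux cpufreq sysfs interface with the
# "userspace" governor available (requires root). The phase-detection policy
# and frequencies below are illustrative, not the paper's actual framework.
import contextlib
import glob

CPUFREQ_GLOB = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq"

def _write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

@contextlib.contextmanager
def dvs_phase(low_khz, high_khz):
    """Run a communication- or memory-bound region at a reduced clock.

    Lowering frequency (and, on most platforms, voltage) during phases that
    do not limit time-to-solution is the core DVS idea: dynamic power scales
    roughly with V^2 * f, so the energy saved far exceeds the slowdown.
    """
    nodes = glob.glob(CPUFREQ_GLOB)
    try:
        for n in nodes:
            _write(f"{n}/scaling_governor", "userspace")
            _write(f"{n}/scaling_setspeed", low_khz)   # enter low-power state
        yield
    finally:
        for n in nodes:
            _write(f"{n}/scaling_setspeed", high_khz)  # restore full speed

# Hypothetical usage: wrap a blocking MPI collective in the low-power phase.
# with dvs_phase(low_khz=800000, high_khz=2400000):
#     comm.Alltoall(sendbuf, recvbuf)
```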
Trends and techniques for energy efficient architectures
2010 18th IEEE/IFIP International Conference on VLSI and System-on-Chip, 2010
Microprocessor architectures have become increasingly power limited in recent years. Currently, power and thermal envelopes dictate peak performance limits more than any other design constraint. As voltage scaling has slowed down, innovative techniques have been pursued to improve the power efficiency of increasingly demanding multi-core architectures. In this paper we look at recent trends in multi-cores, with a special focus on techniques to address these challenges.
Low Power Processor Architectures and Contemporary Techniques for Power Optimization – A Review
Journal of Computers, 2009
Technological evolution has significantly increased the number of transistors for a given die area and raised switching speeds from the few-MHz range to the GHz range. This simultaneous shrinking of device size and boost in performance demands lower supply voltages and effective management of power dissipation in chips with millions of transistors. It has triggered a substantial amount of research into power-reduction techniques across almost every aspect of the chip, and particularly the processor cores it contains. This paper presents an overview of techniques for achieving power efficiency, mainly at the processor-core level, but also visits related domains such as buses and memories. Various processor parameters and features, such as supply voltage, clock frequency, caches and pipelining, can be optimized to reduce the power consumption of the processor, and this paper discusses ways in which these parameters can be optimized. Emerging power-efficient processor architectures are also overviewed and research activities discussed, which should help the reader identify how these factors contribute to a processor's power consumption. Some of these concepts are already established, whereas others are still active research areas.
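For orientation, the standard first-order CMOS power model (a textbook approximation, not a formula from the review itself) shows where the parameters listed above enter the power budget:

```latex
% First-order CMOS power model (textbook approximation):
% dynamic (switching) power plus static (leakage) power.
P \;\approx\; \alpha\, C_{\mathrm{eff}}\, V_{dd}^{2}\, f \;+\; V_{dd}\, I_{\mathrm{leak}}
```

Because the attainable clock frequency f falls roughly linearly with V_dd, lowering voltage and frequency together reduces dynamic power nearly cubically while costing only linearly in speed, which is why supply voltage and clock frequency dominate the optimization space surveyed in such reviews.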
Exploring the potential of architecture-level power optimizations
2005
This paper examines the limits of microprocessor energy reduction available via certain classes of architecture-level optimization. It focuses on three sources of waste that consume energy. The first is the execution of instructions that are unnecessary for correct program execution. The second is speculation waste: energy spent on speculatively executed instructions that never commit their results. The third is architectural waste, which comes from suboptimal sizing of processor structures.
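As a rough way to quantify the second source (an illustrative decomposition, not taken from the paper), the fraction of execution energy lost to speculation can be estimated from pipeline counters:

```latex
% Illustrative estimate of speculation waste from issue/commit counts
% (the instruction counts in the example are hypothetical).
W_{\mathrm{spec}} \;\approx\; \frac{N_{\mathrm{issued}} - N_{\mathrm{committed}}}{N_{\mathrm{issued}}}
\qquad\text{e.g.}\qquad \frac{1.3\times10^{9} - 1.0\times10^{9}}{1.3\times10^{9}} \approx 23\%
```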
Power Reduction Techniques For Microprocessor Systems
Power consumption is a major factor that limits the performance of computers. We survey the "state of the art" in techniques that reduce the total power consumed by a microprocessor system over time. These techniques are applied at various levels ranging from circuits to architectures, architectures to system software, and system software to applications. They also include holistic approaches that will become more important over the next decade. We conclude that power management is a multifaceted discipline that is continually expanding with new techniques being developed at every level. These techniques may eventually allow computers to break through the "power wall" and achieve unprecedented levels of performance, versatility, and reliability. Yet it remains too early to tell which techniques will ultimately solve the power problem.
Changing computing paradigms towards power efficiency
2014
Power awareness is fast becoming immensely important in computing, ranging from traditional high-performance computing applications to the new generation of data-centric workloads. In this work, we describe our efforts towards a power-efficient computing paradigm that combines low- and high-precision arithmetic. We showcase our ideas on the widely used kernel of solving systems of linear equations, which finds numerous applications in scientific and engineering disciplines as well as in large-scale data analytics, statistics and machine learning. Towards this goal, we developed tools for the seamless power profiling of applications at a fine-grain level. In addition, we verify previous work on post-FLOPS/W metrics and show that these can shed much more light on the power/energy profile of important applications.
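A minimal sketch of the low-/high-precision idea for linear systems, assuming classical mixed-precision iterative refinement (the paper's actual kernel and tooling are not shown here): the O(n³) factorization is done in float32, and cheap float64 residual corrections recover double-precision accuracy.

```python
# Mixed-precision iterative refinement sketch (illustrative, not the paper's code):
# factorize once in float32 (cheap, lower energy), correct residuals in float64.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_mixed_precision(A, b, max_iter=10, tol=1e-12):
    """Solve Ax = b using a float32 LU factorization refined in float64."""
    A64 = np.asarray(A, dtype=np.float64)
    b64 = np.asarray(b, dtype=np.float64)
    # Low-precision factorization: the O(n^3) work is done in float32.
    lu, piv = lu_factor(A64.astype(np.float32))
    x = lu_solve((lu, piv), b64.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b64 - A64 @ x                       # residual in high precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b64):
            break
        # Reuse the cheap float32 factorization for the correction solve.
        d = lu_solve((lu, piv), r.astype(np.float32)).astype(np.float64)
        x += d                                   # high-precision update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 500))
    b = rng.standard_normal(500)
    x = solve_mixed_precision(A, b)
    print("residual norm:", np.linalg.norm(b - A @ x))
```

Since the float32 factorization dominates the arithmetic, most of the FLOPs run at the lower precision, which is what improves the FLOPS/W of the kernel relative to an all-double solve.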