Paul Bogdan | Carnegie Mellon University
Papers by Paul Bogdan
This paper identifies non-stationary effects in grid-like Network-on-Chip (NoC) traffic and proposes QuaLe, a novel statistical physics-inspired model that can account for the non-stationarity observed in packet arrival processes. Using a wide set of real application traces, we demonstrate the need for a multifractal approach and analyze various packet arrival properties accordingly. As a case study, we show the benefits of our multifractal approach in estimating the probability of missing deadlines in packet scheduling for chip multiprocessors (CMPs).
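A minimal sketch of the kind of multifractal check the abstract alludes to (not the QuaLe model itself; the function name, scales, and moment orders are illustrative assumptions): estimate generalized Hurst exponents h(q) from a per-window packet-count trace, and treat a clearly q-dependent h(q) as evidence that a single-exponent (monofractal) description is insufficient.

```python
# Hedged sketch, not the paper's method: simplified MFDFA-style estimate of
# generalized Hurst exponents h(q) for a 1-D array of packets per time window.
# The trace should be at least a few times longer than the largest scale.
import numpy as np

def generalized_hurst(counts, qs=(1, 2, 3, 4), scales=(2, 4, 8, 16, 32, 64)):
    x = np.asarray(counts, dtype=float)
    profile = np.cumsum(x - x.mean())           # integrated, mean-removed series
    h = {}
    for q in qs:
        moments = []
        for s in scales:
            n_seg = len(profile) // s
            segs = profile[:n_seg * s].reshape(n_seg, s)
            # fluctuation per segment: detrend by removing each segment's mean
            fluct = np.sqrt(((segs - segs.mean(axis=1, keepdims=True)) ** 2).mean(axis=1))
            moments.append((fluct ** q).mean() ** (1.0 / q))
        # slope of log F_q(s) versus log s gives the generalized exponent h(q)
        h[q] = np.polyfit(np.log(scales), np.log(moments), 1)[0]
    return h

# Example: h = generalized_hurst(per_window_packet_counts)
# A large spread max(h.values()) - min(h.values()) suggests multifractality.
```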
2014 IEEE International Congress on Big Data, 2014
Cardiac diseases, like those related to abnormal heart rate activity, have an enormous economic and psychological impact worldwide. The approaches used to control the behavior of modern pacemakers ignore the fractal nature of heart rate activity. The purpose of this paper is to present a cyber-physical system (CPS) approach to pacemaker design that exploits precisely these fractal properties of heart rate activity in designing the pacemaker controller.
Cloud computing is a promising approach to handling the growing needs for computation and storage in an efficient and cost-effective manner. Towards this end, characterizing workloads in the cloud infrastructure (e.g., a data center) is essential for performing cloud optimizations such as resource provisioning and energy minimization. However, there is a huge gap between the characteristics of actual workloads (e.g., they tend to be bursty and exhibit fractal behavior) and existing cloud optimization algorithms, which tend to rely on simplistic assumptions about the workloads. To close this gap, based on fractional calculus concepts, we present a fractal model to account for the complex dynamics of cloud computing workloads (i.e., the number of request arrivals or CPU/memory usage during each time interval). More precisely, we introduce a fractal operator to account for the time-varying fractal properties of the cloud workloads. In addition, we present an efficient (online) parameter estimation algorithm, an accurate forecasting strategy, and a novel fractal-based model predictive control approach for optimizing the CPU utilization, and hence the overall energy consumption in the system, while satisfying networked architecture performance constraints such as queue capacities. We demonstrate the advantages of our fractal model in forecasting the complex cloud computing dynamics over conventional (non-fractal) models by using real-world cloud (Google) traces. Unlike non-fractal models, which have very poor prediction capabilities under bursty workload conditions, our fractal model can accurately predict bursty request processes, which is crucial for cloud computing workload forecasting. Finally, experimental results demonstrate that the fractal-model-based optimization outperforms non-fractal-based approaches, reducing resource utilization by an average of 30%.
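A minimal sketch of how a long-memory (fractional) operator can drive one-step workload forecasts, assuming a Grünwald-Letnikov-style fractional difference; this illustrates the general idea only, not the paper's exact fractal operator or estimation algorithm, and the values of alpha and the window length are assumptions.

```python
# Hedged sketch: a fractional difference gives every past sample a slowly
# decaying weight, so bursty, long-range-dependent request counts still
# influence the next prediction.
import numpy as np

def gl_weights(alpha, n):
    # w_0 = 1, w_k = w_{k-1} * (k - 1 - alpha) / k  (Grunwald-Letnikov recursion)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_one_step_forecast(history, alpha=0.4, window=128):
    """Predict the next workload sample from the tail of `history` (illustrative)."""
    x = np.asarray(history, dtype=float)[-window:]
    w = gl_weights(alpha, len(x))
    # The fractional-difference model (1 - B)^alpha x_t = eps_t implies
    # x_t ~= -sum_{k>=1} w_k x_{t-k}; weights are applied most-recent first.
    return -np.dot(w[1:], x[::-1][:len(w) - 1])

# Example: next_cpu = fractional_one_step_forecast(cpu_usage_per_interval)
```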
Cyber-physical systems (CPS) represent the information technology quest of the 21st century for a better, cleaner, safer life that integrates computation, communication, and control with physical processes. Physical processes are predominantly non-stationary and require time-dependent models for modeling and understanding their behavior. In contrast, the workloads and design methodologies of most current computing platforms lack proper models for the time component and mostly assume stationary (i.e., time-independent) behavior. In this paper, we use empirical data to identify the main characteristics (e.g., self-similarity, non-stationarity) of the communication workload of real CPS. Then, starting from the complex characteristics of CPS workloads, we present a novel statistical physics-inspired model which is used to define a new optimal control problem that not only accounts for the observed self-similarity and non-stationarity properties of the CPS workload, but also allows for accurate predictions of CPS dynamical trajectories during the optimization process. This opens new avenues for CPS design and optimization for real-life applications.
In order to face the growing complexity of embedded applications, we aim to build highly efficient Network-on-Chip (NoC) architectures that can connect the various computational modules of the platform in a scalable manner. For such networked platforms, it is increasingly important to accurately model the traffic characteristics, as this is intimately related to our ability to determine the optimal buffer size at various routers in the network and thus provide analytical metrics for various power-performance trade-offs. In this paper, we ...
Motivated by the complexity of spatio-temporal patterns of interconnected human processes (e.g., crowds, car traffic, social networks), this paper sets forth fractal dynamic games as an analytical tool for modeling and predicting human dynamics. Starting from a statistical physics description of interactions between agents and from the observed statistical properties of economic measures, we construct a master equation characterizing the dynamics of cost functionals as stochastic variables affected by additive and multiplicative noise forces. Given the significance of human behavior, we allow the cost distribution to depend on the evolution of the agent density. By coupling the description of agent dynamics through a fractal structure with a generic stochastic utility function, we formulate a new dynamic game. Employing optimal control theory concepts, we derive a continuum formulation of the car traffic dynamics optimization, resulting in a nonlinear fractional partial differential equation.
Managing cardiac disease and abnormal heart rate variability remain challenging problems with an enormous economic and psychological impact worldwide. Consequently, the purpose of this paper is to introduce a fractal approach to pacemaker design based on the constrained finite-horizon optimal control problem. This is achieved by modeling the heart rate dynamics via fractional differential equations. Also, by using the calculus of variations, we show that the constrained finite-horizon optimal control problem can be reduced to solving a linear system. Finally, we discuss the hardware complexity involved in the practical implementation of fractal controllers.
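A minimal sketch of a Grünwald-Letnikov discretization of a scalar fractional-order state model, illustrating why a finite-horizon quadratic control problem over such dynamics stays linear in the unknowns: each new state is a linear combination of all past states and the current control. The model D^alpha x = a·x + b·u and all parameter values below are illustrative stand-ins, not the paper's heart-rate model.

```python
# Hedged sketch: explicit Grunwald-Letnikov stepping of D^alpha x = a*x + b*u.
# Because x_{k+1} is linear in the history and in u_k, stacking a quadratic
# cost over a finite horizon yields a linear system in the control sequence.
import numpy as np

def simulate_fractional_state(x0, u, a=-0.5, b=1.0, alpha=0.8, dt=0.1):
    n = len(u)
    # Grunwald-Letnikov binomial weights c_j = (-1)^j * C(alpha, j)
    c = np.empty(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (j - 1 - alpha) / j
    x = np.empty(n + 1)
    x[0] = x0
    h_a = dt ** alpha
    for k in range(n):
        # D^alpha x(t_{k+1}) ~ (1/dt^alpha) * sum_j c_j x_{k+1-j}; solve for x_{k+1}
        memory = np.dot(c[1:k + 2], x[k::-1])    # weighted contribution of all past states
        x[k + 1] = h_a * (a * x[k] + b * u[k]) - memory
    return x

# Example: trajectory = simulate_fractional_state(x0=70.0, u=np.zeros(100))
```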
IEEE Design & Test of Computers, 2011
Built to interact with the physical world, a cyber-physical system (CPS) must be efficient, reliable, and safe. To optimize such systems, a science of CPS design that considers workload characteristics (e.g., self-similarity and non-stationarity) must be established. CPS modeling and design are greatly improved when statistical physics approaches, such as master equations, renormalization group theory, and fractional derivatives, are implemented in the optimization loop.
While traditional cluster computers are increasingly constrained by power and cooling costs when solving extreme-scale (or exascale) problems, the continuing progress and integration levels in silicon technologies make complete end-user systems on a single chip possible. This massive level of integration makes modern multicore chips all-pervasive in domains ranging from climate forecasting and astronomical data analysis to consumer electronics, smartphones, and biological applications. Consequently, designing multicore chips for exascale computing using embedded systems design principles looks like a promising alternative to traditional cluster-based solutions. This paper presents an overview of new, far-reaching design methodologies and run-time optimization techniques that can help break the energy-efficiency wall in massively integrated single-chip computing platforms.
This demonstration presents a complete software framework for dynamically mapping multi-threaded applications onto a cycle-accurate Network-on-Chip (NoC) architecture, analyzing the statistics of network workloads, and drawing general guidelines regarding the design and optimization of NoCs.
Continuous technology scaling has enabled the integration of multiple cores on the same chip. To overcome the disadvantages of buses, the Network-on-Chip (NoC) architecture has been proposed as a new communication paradigm. To further balance the trade-off between performance and power consumption, dynamic voltage and frequency scaling (DVFS) has become the de facto approach in multi-core design. DVFS-based NoC communication was implemented in Intel's most recent Single-chip Cloud Computer (SCC ...
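A minimal sketch of a generic utilization-threshold DVFS policy, included only to make the DVFS idea concrete; the frequency levels, thresholds, and function names are assumptions and do not describe the SCC's or the paper's actual controller.

```python
# Hedged sketch of a simple DVFS policy: step the voltage/frequency level up
# when the monitored utilization is high, and down when it is low.
FREQ_LEVELS_MHZ = [400, 800, 1200, 1600]   # assumed available V/F levels

def next_frequency_level(current_idx, utilization, up=0.8, down=0.3):
    """Return the next index into FREQ_LEVELS_MHZ based on measured utilization."""
    if utilization > up and current_idx < len(FREQ_LEVELS_MHZ) - 1:
        return current_idx + 1    # congested: raise frequency (and voltage)
    if utilization < down and current_idx > 0:
        return current_idx - 1    # underused: lower frequency to save power
    return current_idx

# Example: idx = next_frequency_level(idx, measured_buffer_utilization)
```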
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2010
Networks-on-chip (NoCs) have recently emerged as a scalable alternative to classical bus and point-to-point architectures. To date, performance evaluation of NoC designs is largely based on simulation which, besides being extremely slow, provides little insight into how different design parameters affect the actual network performance. Therefore, it is practically impossible to use simulation for optimization purposes. In this paper, we present a mathematical model for on-chip routers and utilize this new model for NoC performance analysis. The proposed model can be used not only to obtain fast and accurate performance estimates, but also to guide the NoC design process within an optimization loop. The accuracy of our approach and its practical use are illustrated through extensive simulation results.
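A minimal sketch of how an analytical router model can replace simulation inside an optimization loop, using a textbook M/M/1-style delay estimate as a stand-in; the paper's model is more general, and the rates and function names here are illustrative assumptions.

```python
# Hedged sketch: estimate end-to-end latency from per-router queueing delays,
# so a design-space search can evaluate candidates without slow simulation.
def router_delay(arrival_rate, service_rate):
    """Average waiting + service time at one router (rates in packets/cycle)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable router: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def path_latency(per_hop_arrival_rates, service_rate=1.0, hop_cost=1.0):
    """End-to-end latency estimate: queueing delay plus a fixed cost per hop."""
    return sum(router_delay(lam, service_rate) + hop_cost
               for lam in per_hop_arrival_rates)

# Example: est = path_latency([0.2, 0.35, 0.5])
```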
We propose a natural process for allocating n balls into n bins that are organized as the vertices of an undirected graph G. Each ball first chooses a vertex u in G uniformly at random. Then the ball performs a local search in G starting from u until it reaches a vertex with local minimum load, where the ball is finally placed. In our main result, we prove that this process yields a maximum load of only Θ(log log n) on expander graphs.
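A minimal sketch of the allocation process described above, assuming an adjacency-list graph representation and random tie-breaking among less-loaded neighbours (a detail the abstract does not specify).

```python
# Hedged sketch: each ball starts at a uniformly random vertex and walks to
# strictly less-loaded neighbours until it reaches a local minimum of the load.
import random

def allocate_balls(graph, n_balls):
    load = {v: 0 for v in graph}
    vertices = list(graph)
    for _ in range(n_balls):
        v = random.choice(vertices)              # uniformly random start vertex
        while True:
            better = [u for u in graph[v] if load[u] < load[v]]
            if not better:
                break                            # v is a local minimum of the load
            v = random.choice(better)            # assumed tie-breaking rule
        load[v] += 1
    return load

# Example on a 4-cycle:
# allocate_balls({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}, 4)
```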
The proliferation of complex phenomena and the tightening competition for limited resources are two fundamental challenges for the modeling, analysis, and optimization of dynamical processes taking place in networked environments and architectures. Modes of collective and competitive behavior can be observed across a wide array of social, biological, and technological contexts.
One of the greatest challenges of the emerging VLSI technology is the shift from design determinism to design uncertainty. Indeed, at the nanoscale, chip manufacturing entails increased failure rates and unpredictable behavior; thus, in order to ensure system-level fault tolerance, we need to consider alternative paradigms for on-chip communication.
This paper investigates the benefits of a recently proposed communication approach, namely on-chip stochastic communication, and proposes an analytical model for computing its mean hitting time. Towards this end, we model the stochastic communication as a branching process taking place on a finite mesh and estimate the mean number of communication rounds.
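A minimal sketch that estimates the mean number of communication rounds by Monte Carlo simulation of probabilistic flooding on a small 2-D mesh; the paper instead derives this quantity analytically from a branching-process model, and the mesh size and forwarding probability here are assumed values.

```python
# Hedged sketch: average number of rounds until a probabilistically flooded
# packet first reaches the destination tile of a size x size mesh.
import random

def mean_hitting_rounds(size=4, p=0.7, src=(0, 0), dst=(3, 3), trials=2000):
    def neighbours(x, y):
        return [(a, b) for a, b in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1))
                if 0 <= a < size and 0 <= b < size]
    total = 0
    for _ in range(trials):
        infected, rounds = {src}, 0
        while dst not in infected:
            rounds += 1
            new = set()
            for (x, y) in infected:
                for nb in neighbours(x, y):
                    if random.random() < p:      # forward the packet with probability p
                        new.add(nb)
            infected |= new
        total += rounds
    return total / trials

# Example: print(mean_hitting_rounds())
```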
As CMOS technology scales down into the deep-submicron (DSM) domain, the costs of design and verification for Systems-on-Chip (SoCs) are rapidly increasing. Relaxing the requirement of 100% correctness for devices and interconnects drastically reduces the costs of design but, at the same time, requires SoCs to be designed with some degree of system-level fault-tolerance. Towards this end, this paper introduces a novel communication paradigm for SoCs, called stochastic communication.
Managing cardiac disease and abnormal heart rate variability remain challenging problems with an enormous economic and psychological impact worldwide. Consequently, the purpose of this paper is to introduce a fractal approach to pacemaker design based on the constrained finite-horizon optimal control problem. This is achieved by modeling the heart rate dynamics via fractional differential equations.