Abhishek Das | University of Strathclyde, Glasgow

Papers by Abhishek Das

Ethical aspect of duty of care and resource allocation during COVID-19 pandemic: An Indian overview

Journal of Public Health and Primary Care, 2021

Low Complexity Burst Error Correcting Codes to Correct MBUs in SRAMs

Proceedings of the 2018 on Great Lakes Symposium on VLSI, 2018

Multiple bit upsets (MBUs) caused by high-energy radiation are the most common source of soft errors in static random-access memories (SRAMs), affecting multiple cells. Burst error correcting Hamming codes have most commonly been used to correct MBUs in SRAM cells since they have low redundancy and low decoder latency. With technology scaling, however, the number of affected bits increases, requiring an increase in the burst size that can be corrected. This is a problem because the number of syndromes grows exponentially with the burst size, and the decoder complexity grows exponentially with it. In this paper, a new burst error correcting code based on Hamming codes is proposed which allows much better scaling of decoder complexity as the burst size is increased. For larger burst sizes, it can provide significantly smaller and faster decoders than existing methods, thus providing higher reliability at an affordable cost. Moreover, there is frequently no increase in the number of check bits, or only a very minimal increase in comparison with existing methods. A general construction and decoding methodology for the new codes is proposed. Experimental results compare the decoder complexity of the proposed codes with conventional burst error correcting Hamming codes, demonstrating the significant improvements that can be achieved.
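The exponential syndrome growth that motivates the paper can be made concrete with a quick count. The sketch below is an illustration of standard burst-error counting, not the paper's proposed construction: it tallies the distinct error patterns a decoder must distinguish for bursts of length up to b in an n-bit word, and the count roughly doubles with each unit increase in b.

```python
def burst_patterns(n: int, b: int) -> int:
    """Count distinct burst-error patterns of length <= b in an n-bit word.

    A burst of length L >= 2 has its first and last bits flipped and any of
    2**(L-2) interior patterns; a burst of length 1 is a single-bit error.
    """
    total = n  # single-bit errors
    for length in range(2, b + 1):
        total += (n - length + 1) * 2 ** (length - 2)
    return total

# For a 64-bit word, the pattern (and hence syndrome) count roughly
# doubles with each unit increase in the correctable burst size.
for b in range(1, 7):
    print(b, burst_patterns(64, b))
```

Since each correctable pattern needs its own syndrome, the decoder's pattern-matching logic for a conventional burst error correcting Hamming code grows in proportion to this count.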

A Single Error Correcting Code with One-Step Group Partitioned Decoding Based on Shared Majority-Vote

Electronics, 2020

Technology scaling has led to an increase in the density and capacity of on-chip caches, enabling higher throughput through more low-latency memory transfers. With the shrinking of SRAMs and the development of emerging technologies, e.g., STT-MRAM, for on-chip cache memories, the reliability of such memories becomes a major concern. Traditional error correcting codes, e.g., Hamming codes and orthogonal Latin square codes, suffer from either high decoding latency, which lowers overall throughput, or high memory overhead. In this paper, a new single error correcting code based on shared majority voting logic is presented. The proposed codes trade off decoding latency in order to improve on the memory overhead posed by orthogonal Latin square codes. A latency optimization technique is also proposed which lowers the decoding latency at the cost of a slight memory overhead. It is shown that the proposed codes achieve better redundancy compared to orthogonal Latin square codes.
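The core decoding primitive, a majority vote deciding each bit, can be illustrated in miniature. The sketch below uses plain triple modular redundancy, the simplest possible majority-vote corrector; it does not reproduce the paper's contribution of sharing voting logic across partitioned groups to cut redundancy.

```python
def majority_decode(copies):
    """Majority-vote each bit position across three stored copies.

    Any single-bit error in one copy is outvoted by the other two,
    so the decoder corrects it in one step with no syndrome lookup.
    """
    assert len(copies) == 3 and len(set(map(len, copies))) == 1
    return [1 if c0 + c1 + c2 >= 2 else 0 for c0, c1, c2 in zip(*copies)]

word = [1, 0, 1, 1, 0, 0, 1, 0]
stored = [word[:], word[:], word[:]]
stored[1][3] ^= 1            # single-bit upset in the second copy
assert majority_decode(stored) == word
```

Triple redundancy pays 200% overhead; the appeal of orthogonal-Latin-square-style and shared-vote constructions is getting the same one-step majority decision at far lower redundancy.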

Central serous chorioretinopathy: Recent trends

Egyptian Retina Journal, 2019

Central serous chorioretinopathy (CSCR) belongs to the pachychoroid spectrum of diseases, and its pathogenesis is still not fully understood. It is associated with multiple systemic factors and is characterized by the collection of fluid between the retinal pigment epithelium (RPE) and the neurosensory retina. It mostly affects young men under perceived stress. The involvement of corticosteroids is undisputed, although their exact role has not been clarified; other parts of the underlying mechanism of CSCR have mainly been elucidated by imaging techniques such as fluorescein and indocyanine green angiography. The course is usually self-limiting and in most cases resolves spontaneously within a 3–6-month period, with visual acuity usually recovering to 20/30 or better. However, chronic CSCR may develop as a consequence of recurrences or persistent neurosensory detachment and can result in progressive RPE atrophy and permanent visual loss. We used search terms such as "central serous retinopathy," "central serous chorioretinopathy," "CSCR," "management," and "treatment," as well as the keywords of each type of treatment, on PubMed and Medline to retrieve articles for this systematic review.

A simple optical thin film deposition monitor using LED's and fiber optics

Journal of Vacuum Science and Technology, 1982

A thin film deposition monitor is described which can be used to monitor the rf-sputter deposition of dielectric and thin metallic films on transparent substrates in a multi-target system. The monitor uses a green and a red LED as light sources, a bundle of optical fibers to transmit the light into and out of the sputtering chamber, and Si photocells as detectors. The LEDs are pulsed at two different frequencies, and phase-sensitive detection allows the simultaneous monitoring of the substrate reflectivity at two different wavelengths. Applications of the deposition monitor include the exact control of individual layers in multilayer dielectric filter structures and the monitoring of the deposition of very thin, semitransparent metallic films.
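The two-frequency phase-sensitive scheme can be sketched numerically: each LED's contribution is recovered by multiplying the combined photocell signal with a reference at that LED's modulation frequency and averaging, which rejects the other channel and uncorrelated noise. The amplitudes, frequencies, and noise level below are made-up values purely for illustration, and sinusoidal modulation is assumed for simplicity.

```python
import math
import random

def lock_in(signal, freq, fs):
    """Recover the amplitude of the component at `freq` (Hz) by
    phase-sensitive detection: multiply by a reference sine and average."""
    n = len(signal)
    ref = [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]
    return 2.0 * sum(s * r for s, r in zip(signal, ref)) / n

fs, seconds = 10_000, 1.0            # sample rate (Hz), window length (s)
n = int(fs * seconds)
rng = random.Random(42)
a_green, a_red = 0.80, 0.35          # hypothetical reflectivity amplitudes
f_green, f_red = 170.0, 230.0        # distinct modulation frequencies (Hz)

# Combined photocell output: both channels plus detector noise.
signal = [a_green * math.sin(2 * math.pi * f_green * i / fs)
          + a_red * math.sin(2 * math.pi * f_red * i / fs)
          + rng.gauss(0.0, 0.05)
          for i in range(n)]

print(lock_in(signal, f_green, fs))  # close to a_green
print(lock_in(signal, f_red, fs))    # close to a_red
```

Because both frequencies complete an integer number of cycles in the averaging window, the cross terms integrate to zero and each channel is read out independently.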

Effect of stoichiometric ratios for synthesized epoxy phenolic novolac (EPN) resins on their physicochemical, thermomechanical and morphological properties

Pigment & Resin Technology, 2016

Purpose: The purpose of this paper is to study the effect of various stoichiometric ratios for synthesised epoxy phenolic novolac (EPN) resins on their physicochemical, thermomechanical and morphological properties. Design/methodology/approach: In the present study, EPN resins (EPN-1, EPN-2, EPN-3, EPN-4 and EPN-5) were synthesised by varying five different stoichiometric ratios of phenol/formaldehyde along with the corresponding molar ratios of novolac/epichlorohydrin. Their physicochemical properties of interest, thermomechanical properties and morphological properties were studied on cured samples as the stoichiometric ratio was varied. Findings: The average functionality and reactivity of an EPN resin can be controlled by controlling its epoxy equivalence as well as its cross-linking density upon curing, as all of these factors are internally correlated. Research limitations/implications: Epoxy resins are characterised by a three...

Physics-driven Shape Variation Modelling at Early Design Stage

Procedia CIRP, 2016

Modern markets are becoming increasingly competitive, emphasizing the importance of achieving Right First Time (RFT) during the early design stage as a key enabler of cost and time-to-market reduction. One of the leading challenges in delivering RFT is the lack of effective methods to model product errors at the early design stage. Usually, the assembly process is designed under the assumption of ideal (nominal) products. In practice, however, it has been demonstrated that product errors (both geometrical and dimensional) affect the performance of the final assembly. To facilitate decision making at the early design stage, new methods and models are required to support design engineers. In this study, a framework is proposed for early design support to generate product variation. International standards provide guidelines for product control and inspection (ISO-GPS or ASME-GD&T); however, the integration of tolerance standards into nominal-sized CAD models has not yet been achieved. Current Computer Aided Tolerancing (CAT) tools are mainly capable of modelling orientation and position tolerance specifications, whereas part shape errors are omitted. This paper presents an innovative physics-driven simulation framework to model shape errors of compliant sheet metal parts at the early design stage. The modelling framework consists of three stages: (i) initial shape error prediction using physics-based simulation, such as stamping process simulation; (ii) identification of individual orthogonal shape error modes/patterns based on decomposition techniques, such as Geometric Modal Analysis (GMA); and (iii) simulation of shape error variation classes by assigning a distribution to each orthogonal shape error mode.
The proposed approach enables the generation of shape errors at the early design stage of the assembly process, which can be utilized to optimize the assembly process, including fixture design and joining process parameters. An industrial automotive component illustrates the proposed methodology.
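Stages (ii) and (iii), decomposing a measured deviation field into orthogonal modes and then re-sampling mode coefficients to generate synthetic variation, can be sketched in one dimension. The cosine basis, profile length, and mode sigmas below are illustrative stand-ins, not the GMA formulation from the paper.

```python
import math
import random

def cosine_modes(n, k):
    """First k orthonormal cosine (DCT-II-like) modes on n sample points."""
    modes = []
    for m in range(k):
        v = [math.cos(math.pi * m * (i + 0.5) / n) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in v))
        modes.append([x / norm for x in v])
    return modes

def project(deviation, modes):
    """Coefficients of a deviation profile in the orthonormal mode basis."""
    return [sum(d * m for d, m in zip(deviation, mode)) for mode in modes]

def synthesize(modes, sigmas, rng):
    """Draw one synthetic shape error by sampling each mode coefficient."""
    coeffs = [rng.gauss(0.0, s) for s in sigmas]
    n = len(modes[0])
    return [sum(c * mode[i] for c, mode in zip(coeffs, modes)) for i in range(n)]

modes = cosine_modes(n=50, k=3)
# A "measured" deviation built from known mode content round-trips exactly.
deviation = [0.3 * a + 0.1 * b for a, b in zip(modes[1], modes[2])]
coeffs = project(deviation, modes)

# Assigning a spread to each mode then yields a population of shape errors.
sample = synthesize(modes, sigmas=[0.0, 0.05, 0.02], rng=random.Random(1))
```

Because the modes are orthonormal, projection and synthesis are exact inverses on the span of the basis, which is what lets variation classes be defined mode by mode.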

Dissecting Transcriptomes of Cyanobacteria for Novel Metabolite Production

Genomics, Proteomics and Metabolomics in Nutraceuticals and Functional Foods, 2015

Dynamic Directories: A mechanism for reducing on-chip interconnect power in multicores

2012 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2012

Statistical Modelling of a Suitable Habitat Map

A reconfigurable architecture for network intrusion detection using principal component analysis

Proceedings of the international symposium on Field programmable gate arrays - FPGA'06, 2006

Enhancing performance of asynchronous data traffic over the Bluetooth wireless ad-hoc network

Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213)

Emerging technologies such as Bluetooth are expected to become a ubiquitous solution for providing short-range, low-power, low-cost, pico-cellular wireless connectivity. Bluetooth is a master-driven Time Division Duplex (TDD) system that supports an asynchronous channel for data traffic as well as synchronous channels for voice traffic. Data applications running over Bluetooth, such as HTTP, FTP and real audio, need transport layer protocols such as TCP and UDP to send packets over the wireless links. In this paper we study several schemes designed to improve the performance of asynchronous data traffic over a Bluetooth piconet that supports multiple active slaves. We propose and compare a number of SAR policies and MAC scheduling algorithms with a view towards enhancing the performance of transport layer sessions. We investigate the effect of different FEC and ARQ schemes at the baseband level, using a two-state Markov channel model for the Bluetooth RF link. We also study how the presence of circuit-switched voice impacts the performance of data traffic.
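The two-state Markov channel model used for the RF link (often called a Gilbert-Elliott model) is easy to reproduce: the channel alternates between a good and a bad state with fixed per-bit transition probabilities, and each state has its own bit error rate. The transition probabilities and error rates below are illustrative values, not those calibrated in the paper.

```python
import random

def gilbert_elliott(n_bits, p_gb, p_bg, ber_good, ber_bad, seed=0):
    """Simulate per-bit errors on a two-state Markov (Gilbert-Elliott) channel.

    p_gb: P(good -> bad) per bit; p_bg: P(bad -> good) per bit.
    Returns a list of booleans, True where the bit is received in error.
    """
    rng = random.Random(seed)
    bad = False
    errors = []
    for _ in range(n_bits):
        # Advance the channel state, then draw the bit error for this state.
        bad = (rng.random() >= p_bg) if bad else (rng.random() < p_gb)
        errors.append(rng.random() < (ber_bad if bad else ber_good))
    return errors

errs = gilbert_elliott(200_000, p_gb=0.01, p_bg=0.10,
                       ber_good=0.001, ber_bad=0.5)
# Stationary error rate is about pi_bad*ber_bad + pi_good*ber_good,
# with pi_bad = p_gb / (p_gb + p_bg), here roughly 0.046.
print(sum(errs) / len(errs))
```

Unlike an independent-error model at the same average rate, this model clusters errors into bursts while the channel lingers in the bad state, which is what makes FEC and ARQ trade-offs interesting.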

HCF: a starvation-free practical algorithm for maximizing throughput in input-queued switches

HPSR. 2005 Workshop on High Performance Switching and Routing, 2005.

Design Synthesis Methodology for Dimensional Management of Assembly Process with Compliant non-Ideal Parts

Detecting/preventing information leakage on the memory bus due to malicious hardware

2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010), 2010

Quantifying and coping with parametric variations in 3D-stacked microarchitectures

Proceedings of the 47th Design Automation Conference on - DAC '10, 2010

Variability in device characteristics, i.e., parametric variations, is an important problem for shrinking process technologies. These variations manifest themselves as variations in performance and power consumption, reduced reliability in the manufactured chips, and low yield levels. Their implications for performance and yield are particularly profound in 3D architectures: a defect on even a single layer can render the entire stack useless. In this paper, we show that instead of suffering increased yield losses, we can actually exploit 3D technology to reduce yield losses by intelligently devising the architectures. We take advantage of layer-to-layer variations to reduce yield losses by splitting critical components among multiple layers. Our results indicate that the proposed method achieves a 30.6% lower yield loss rate compared to the same pipeline implemented in a 2D architecture.
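The yield argument can be sketched with textbook yield algebra; this is an illustration of the general principle, not the paper's model or its numbers. Stacking multiplies per-layer yields together, but splitting a critical component into interchangeable slices with a spare lets a stack tolerate one bad slice.

```python
from math import comb

def stack_yield(layer_yield, layers):
    """A stack works only if every layer works (defect on one layer kills it)."""
    return layer_yield ** layers

def sliced_yield(slice_yield, slices_needed, spares):
    """Split a component into slices plus spares; the part works if at least
    `slices_needed` slices are defect-free (binomial tail probability)."""
    n = slices_needed + spares
    return sum(comb(n, k) * slice_yield**k * (1 - slice_yield) ** (n - k)
               for k in range(slices_needed, n + 1))

print(stack_yield(0.9, 4))        # about 0.656: naive 4-layer stack
print(sliced_yield(0.9, 4, 1))    # about 0.919: need 4 of 5 slices good
```

The comparison shows the direction of the paper's argument: redundancy placed across layers can more than recover the compounding yield loss that stacking introduces.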

Evaluating the Imagine Stream Architecture

ACM SIGARCH Computer Architecture News, 2004

This paper describes an experimental evaluation of the prototype Imagine stream processor. Imagine [8] is a stream processor that employs a two-level register hierarchy with 9.7 Kbytes of local register file capacity and 128 Kbytes of stream register file (SRF) capacity to capture producer-consumer locality in stream applications. Parallelism is exploited using an array of 48 floating-point arithmetic units organized as eight SIMD clusters with a 6-wide VLIW per cluster. We evaluate the performance of each aspect of the Imagine architecture using a set of synthetic micro-benchmarks, key media processing kernels, and full applications. The micro-benchmarks show that the prototype hardware can attain 7.96 GFLOPS or 25.4 GOPS of arithmetic performance, 12.7 Gbytes/s of SRF bandwidth, 1.58 Gbytes/s of memory system bandwidth, and accept up to 2 million stream processor instructions per second from a host processor. On a set of media processing kernels, Imagine sustained an average of 43% of peak arithmetic performance. An evaluation of full applications provides a breakdown of where execution time is spent. Over full applications, Imagine achieves 39.4% of peak performance; of the remainder, on average 36.4% of time is lost due to load imbalance between arithmetic units in the VLIW clusters and limited instruction-level parallelism within kernel inner loops, 10.6% is due to kernel startup and shutdown overhead caused by short stream lengths, 7.6% is due to memory stalls, and the rest is due to insufficient host processor bandwidth. Further analysis in the paper presents the impact of host instruction bandwidth on application performance, particularly on smaller datasets.
In summary, the experimental measurements described in this paper demonstrate the high performance and efficiency of stream processing: operating at 200 MHz, Imagine sustains 4.81 GFLOPS on QR decomposition while dissipating 7.42 Watts.
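The abstract's time breakdown leaves the host-bandwidth share implicit; assuming the percentages partition total execution time, it falls out by subtraction, and the quoted QR numbers give the energy efficiency directly. A quick check of the arithmetic:

```python
useful = 39.4                      # % of peak achieved over full applications
losses = {
    "load imbalance / limited ILP": 36.4,
    "kernel startup & shutdown":    10.6,
    "memory stalls":                 7.6,
}
# "The rest" attributed to insufficient host processor bandwidth.
host_bw_loss = 100.0 - useful - sum(losses.values())
print(f"host processor bandwidth: {host_bw_loss:.1f}%")

gflops, watts = 4.81, 7.42         # QR decomposition at 200 MHz
print(f"{gflops / watts:.2f} GFLOPS/W")
```

So under that partition assumption the host-bandwidth share is about 6% of execution time, and the QR figures work out to roughly 0.65 GFLOPS per watt.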

Evaluating the effects of cache redundancy on profit

2008 41st IEEE/ACM International Symposium on Microarchitecture, 2008

Stream Scheduling: A Framework to Manage Bulk Operations in a Memory Hierarchy

16th International Conference on Parallel Architecture and Compilation Techniques (PACT 2007), 2007

Microarchitectures for Managing Chip Revenues under Process Variations

IEEE Computer Architecture Letters, 2008
