Eyeriss Project

Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks
IEEE ISSCC 2016


Welcome to the Eyeriss Project website!


We will be giving a two-day short course on Designing Efficient Deep Learning Systems on July 17-18, 2023, on the MIT campus (with a virtual option). To find out more, please visit MIT Professional Education.


Recent News


Abstract

Eyeriss is an energy-efficient deep convolutional neural network (CNN) accelerator that supports state-of-the-art CNNs, which have many layers, millions of filter weights, and varying shapes (filter sizes, number of filters, and number of channels). The test chip features a spatial array of 168 processing elements (PEs) fed by a reconfigurable multicast on-chip network that handles many shapes and minimizes data movement by exploiting data reuse. Data gating and compression are used to reduce energy consumption. The chip has been fully integrated with the Caffe deep learning framework. The video below demonstrates a real-time 1000-class image classification task using a pre-trained AlexNet running on our Eyeriss-Caffe system. The chip runs the convolutional layers of AlexNet at 35 fps with 278 mW power consumption, which is 10 times more energy efficient than mobile GPUs.
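As a quick sanity check on these numbers, the quoted throughput and power can be converted into energy per frame. The short Python sketch below is illustrative arithmetic only, using the figures stated above (35 fps, 278 mW); the variable names and the mobile-GPU comparison point are our own assumptions, not measurements from the chip or the paper.

# Back-of-the-envelope energy-per-frame arithmetic based on the numbers quoted
# above (AlexNet convolutional layers at 35 fps, 278 mW). Illustrative only;
# see the ISSCC 2016 paper for measured results.

POWER_W = 0.278        # chip power while running the AlexNet conv layers (278 mW)
FRAME_RATE_FPS = 35.0  # frames per second for the AlexNet conv layers

energy_per_frame_j = POWER_W / FRAME_RATE_FPS  # joules spent per frame
print(f"Eyeriss energy per frame: {energy_per_frame_j * 1e3:.2f} mJ")  # ~7.94 mJ

# Hypothetical comparison: a mobile GPU that is 10x less energy efficient
# (as stated above) would spend roughly this much energy per frame.
gpu_energy_per_frame_j = 10 * energy_per_frame_j
print(f"Assumed mobile-GPU energy per frame: {gpu_energy_per_frame_j * 1e3:.1f} mJ")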


Video



Downloads


BibTeX


@inproceedings{isscc_2016_chen_eyeriss,
    author      = {Chen, Yu-Hsin and Krishna, Tushar and Emer, Joel and Sze, Vivienne},
    title       = {{Eyeriss: An Energy-Efficient Reconfigurable Accelerator for Deep Convolutional Neural Networks}},
    booktitle   = {{IEEE International Solid-State Circuits Conference, ISSCC 2016, Digest of Technical Papers}},
    year        = {2016},
    pages       = {262--263},
}

* Indicates authors who contributed equally to the work



Acknowledgement

This work is funded by the DARPA YFA grant N66001-14-1-4039, MIT Center for Integrated Circuits & Systems, and gifts from Intel and Nvidia.