GPGPU 2020 @ PPoPP
13th Workshop on General Purpose Processing Using GPU (GPGPU 2020) @ PPoPP 2020
February 23rd, San Diego, CA, USA
The goal of this workshop is to provide a forum to discuss new and emerging general-purpose graphics processing architectures, programming environments, and platforms, as well as evaluate applications that have been able to harness the horsepower provided by these platforms. Papers are being sought on many aspects of GPUs or accelerators, including (but not limited to):
- GPU Applications
- GPU Programming Environments
- GPU Runtime Systems
- GPU Compilation
- GPU Architectures
- Multi-GPU Systems
- GPU Power/Efficiency
- GPU Reliability
- GPU Benchmarking/Measurements
- Heterogeneous Architectures/Platforms
- Non-von Neumann Architectures
- Domain-specific Architectures
- GPU Security
- Machine/Deep Learning
- Graphics
Workshop Program (02/23 - Sunday)
The GPGPU workshop will be held in the San Marino room of the San Diego Mission Bay Resort, San Diego, CA, USA.
The proceedings are available in the ACM Digital Library: Link
| 07:00 AM - 09:00 AM | Breakfast |
|---|---|
| 09:00 AM - 09:10 AM | Opening Remarks |
| 09:10 AM - 10:00 AM | Keynote - I: Memory System Hardware/Software Co-design for Scalable and Energy-efficient Neural Network Acceleration Jishen Zhao, UC San Diego [Link] |
| 10:00 AM - 10:30 AM | Break |
| 10:30 AM - 10:50 AM | The Minos Computing Library: Efficient Parallel Programming for Extremely Heterogeneous Systems Roberto Gioiosa (Pacific Northwest National Laboratory), Burcu Mutlu (Pacific Northwest National Laboratory), Seyong Lee (Oak Ridge National Laboratory), Jeffrey Vetter (Oak Ridge National Laboratory), Giulio Picierro (University of Rome Tor Vergata), Marco Cesati (University of Rome Tor Vergata) [Link] |
| 10:50 AM - 11:10 AM | Unveiling Kernel Concurrency in Multiresolution Filters on GPUs with an Image Processing DSL Bo Qiao (Friedrich-Alexander-Universität Erlangen-Nürnberg), Oliver Reiche (Siemens Healthcare GmbH), Jürgen Teich (Friedrich-Alexander-Universität Erlangen-Nürnberg), Frank Hannig (Friedrich-Alexander-Universität Erlangen-Nürnberg) [Link] |
| 11:10 AM - 11:30 AM | High-Level Hardware Feature Extraction for GPU Performance Prediction of Stencils Toomas Remmelg (University of Edinburgh), Bastian Hagedorn (University of Münster), Lu Li (University of Edinburgh), Michel Steuwer (University of Glasgow), Sergei Gorlatch (University of Münster), Christophe Dubach (University of Edinburgh) [Link] |
| 11:30 AM - 11:50 AM | GPGPU Performance Estimation for Frequency Scaling Using Cross-Benchmarking Qiang Wang (Hong Kong Baptist University), Chengjian Liu (Shenzhen Technology University, College of Big Data and Internet), Xiaowen Chu (Hong Kong Baptist University) [Link] |
| 12:00 PM - 01:00 PM | Lunch |
| 01:30 PM - 02:20 PM | Keynote - II: The Path to Multi-GPU Computing David Kaeli, Northeastern University [Link] |
| 02:30 PM - 03:00 PM | Break |
| 03:00 PM - 03:20 PM | Automatic Generation of Specialized Convolutions for Mobile GPUs Naums Mogers (University of Edinburgh), Valentin Radu (University of Edinburgh), Lu Li (University of Edinburgh), Jack Turner (University of Edinburgh), Michael O’Boyle (University of Edinburgh), Christophe Dubach (University of Edinburgh) [Link] |
| 03:20 PM - 03:40 PM | Custom Code Generation for a Graph DSL Bikash Gogoi (Indian Institute of Technology Madras), Unnikrishnan Cheramangalath (Indian Institute of Technology Palakkad), Rupesh Nasre (Indian Institute of Technology Madras) [Link] |
| 03:40 PM - 04:00 PM | Automated Test Generation for OpenCL Kernels using Fuzzing and Constraint Solving Chao Peng (University of Edinburgh), Ajitha Rajan (University of Edinburgh) [Link] |
| 04:00 PM - 04:10 PM | Closing Remarks |
Keynotes
Speaker: Jishen Zhao, UC San Diego
Title: Memory System Hardware/Software Co-design for Scalable and Energy-efficient Neural Network Acceleration
Abstract: Neural networks (NNs) have been adopted in a wide range of application domains, such as image classification, speech recognition, object detection, and computer vision. However, accelerating NNs – especially deep neural networks (DNNs) – can be energy- and time-consuming because of frequent data movement between processor and memory. Furthermore, DNNs typically involve massive fine-grained operations with diverse computation and memory access characteristics, and exploiting high parallelism across such diverse operations is challenging. In this talk, I will describe our effort on software/hardware memory system co-design to achieve scalable and energy-efficient NN acceleration. I will start by exploring hardware and runtime system co-design that exploits heterogeneous processing-in-memory to accelerate DNN training. Then, I will elaborate on a scalable and flexible memory fabric design that supports large-scale DNN models. Finally, I will present our study on secure memory design for DNN attestation.
Bio: Jishen Zhao is an Assistant Professor in the Computer Science and Engineering Department at the University of California, San Diego. Her research spans and stretches the boundary between computer architecture and system software, with a particular emphasis on memory systems, domain-specific acceleration, and system reliability. Her research is driven by both emerging technologies (e.g., nonvolatile memories, 3D-stacked memory) and modern applications (e.g., smart homes, autonomous vehicles, deep learning, and big-data analytics). Before joining UCSD, she was an Assistant Professor at UC Santa Cruz and, prior to that, a research scientist at HP Labs. She received an NSF CAREER Award in 2017.
Speaker: David Kaeli, Northeastern University
Title: The Path to Multi-GPU Computing
Abstract: Today, compute GPUs have become a primary enabler for accelerating a wide range of workloads, from medical imaging to cryptanalysis and from molecular dynamics to deep learning. This talk will begin by revisiting how GPUs moved beyond serving as graphics devices and quickly became mainstream accelerators. It will then fast forward to where we are today, faced with applications that can easily exhaust the resources of a single GPU, requiring us to find better ways to effectively exploit the resources of multiple GPUs.
Bio: David Kaeli is a College of Engineering Distinguished Professor of Electrical and Computer Engineering at Northeastern University, where he directs the Northeastern University Computer Architecture Research Laboratory (NUCAR). He received a BS and PhD in Electrical Engineering from Rutgers University, and an MS in Computer Engineering from Syracuse University. Prior to joining Northeastern in 1993, Kaeli spent 12 years at IBM, the last 7 at T.J. Watson Research Center, Yorktown Heights, NY. He has been a visiting faculty fellow at the University of Edinburgh, University of Ghent, Technical University of Munich and Barcelona Tech. His current research topics include hardware security, graphics processors, virtualization, heterogeneous computing, and multi-layer reliability. He is an IEEE Fellow and a Distinguished Scientist of the ACM.
Important Dates
- Papers due (FIRM Deadline): December 24, 2019 (AoE)
- Notification: January 20, 2020
- Final paper due: February 03, 2020
- Workshop Date: February 23, 2020
Submission Guidelines
- Full paper submissions must be in PDF format for US letter-size paper and must not exceed 10 pages (all-inclusive) in the standard ACM two-column conference format (review mode, with page numbers; either 9pt or 10pt font may be used). The review process will be double-blind. Templates for the ACM format (Microsoft Word and LaTeX) are available here.
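The ACM two-column conference format is typically produced with the `acmart` LaTeX class; as a minimal sketch (the exact class options should be checked against the current `acmart` documentation and the workshop's template link), a double-blind submission might start from:

```latex
% Minimal sketch of a submission skeleton using the acmart class.
% Options shown are assumptions: sigconf selects the two-column
% conference format, anonymous suppresses author identities for
% double-blind review, and 9pt/10pt selects the body font size.
\documentclass[sigconf,review,anonymous,9pt]{acmart}

\begin{document}

\title{Paper Title}
% Author blocks are hidden from reviewers in anonymous mode.
\author{Anonymous Author(s)}
\affiliation{\institution{Anonymous Institution}\country{}}

\begin{abstract}
Abstract text goes here.
\end{abstract}

\maketitle

\section{Introduction}
Body text goes here.

\end{document}
```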
- Submission Site: GPGPU 2020
Workshop Organizers
Program Committee
| Akhil Arunkumar | AMD |
|---|---|
| Amir Yazdanbakhsh | Google Research |
| Anthony Gutierrez | AMD Research |
| Bin Ren | William & Mary |
| Biswabandan Panda | IIT Kanpur |
| Bo Wu | Colorado School of Mines |
| Daniel Wong | University of California, Riverside |
| David Kaeli | Northeastern University |
| Elaheh Sadredini | University of Virginia |
| Gunjae Koo | Hongik University |
| Huiyang Zhou | North Carolina State University |
| Hyeran Jeon | San Jose State University |
| Jieming Yin | AMD Research |
| Jin Wang | NVIDIA |
| Karthik Vadambacheri Manian | The Ohio State University |
| Meena Arunachalam | Intel |
| Mehmet E. Belviranli | Colorado School of Mines |
| Michael Gowanlock | Northern Arizona University |
| Michael LeBeane | AMD Research |
| Nael Abu-Ghazaleh | University of California, Riverside |
| Nandita Vijaykumar | Carnegie Mellon University |
| Newsha Ardalani | Baidu Research |
| Philip Garcia | Arm |
| Rachata Ausavarungnirun | TGGS, KMUTNB |
| Sonia Lopez Alarcon | Rochester Institute of Technology |
| Xia Zhao | Ghent University |
| Yifan Sun | Northeastern University |
| Zeid Samoail | Arm |
Proceedings
All accepted papers will be published in the ACM Online Conference Proceedings Series.
Travel Grant
The workshop presenters are eligible to apply for the PAC Fund.
History and Impact
David Kaeli (Northeastern) and John Cavazos (Delaware) started this GPGPU workshop series, which was first held in 2007 at Northeastern University. In 2008, the workshop was co-located with ASPLOS, and it remained with ASPLOS for the next six years. From 2015 to 2018, the GPGPU workshop was co-located with PPoPP, GPGPU 2019 was held with ASPLOS 2019, and GPGPU 2020 returns to PPoPP. The average citation count (per Google Scholar) for a GPGPU workshop paper is currently 37.5, and 8 influential papers have received 100+ citations.