The Fifth Workshop on the LLVM Compiler Infrastructure in HPC

Agenda

Room: D175

Time Speaker Title
9:00 Hal Finkel Welcome
9:15 Bert Maher Keynote: Glow: An Optimizing Compiler for High-Performance Machine Learning
10:00 Coffee Break
10:30 Guray Ozen OpenMP GPU Offload in Flang and LLVM
11:00 David Truby Pointers Inside Lambda Closure Objects in OpenMP Target Offload Regions
11:30 Joel Denny Clacc: Translating OpenACC to OpenMP in Clang
12:00 Francesco Petrogalli LLVM and the automatic vectorization of loops invoking math routines: -fsimdmath
12:30 Lunch
2:00 Matt Masten Function/Kernel Vectorization via Loop Vectorizer
2:30 Michael Kruse User-Directed Loop-Transformations in Clang
3:00 Coffee Break
3:30 Gábor Dániel Balogh OP2-Clang: A Source-to-Source Translator Using Clang/LLVM LibTooling
4:00 Fabian Schlebusch PInT: Pattern Instrumentation Tool for Analyzing and Classifying HPC Applications
4:30 Beau Johnston AIWC: OpenCL-based Architecture Independent Workload Characterization
Lightning Talks:
5:00 Dorit Nuzman Compiler Optimization for Heterogeneous Locality and Homogeneous Parallelism in OpenCL and LLVM
5:07 Lechen Yu A Study of OpenMP Device Offloading in LLVM: Correctness and Consistency
5:14 Victor Lomuller Challenges of C++ Heterogeneous Programming using SYCL Implementation Experience: the Four Horsemen of the Apocalypse
5:21 Final Discussion
5:30 The End
Sunday, November 11th, 6pm - 9pm: LLVM Social and Flang Meetup
Note: This is the evening before the workshop!
Aloft Dallas Downtown, 1033 Young St, Dallas, TX 75202
Please RSVP to gklimowicz@nvidia.com (so that we can get a rough idea of headcount).
Tuesday, November 13th, 12:15pm - 1:15pm: BoF: LLVM in HPC: What's New?
Location: D171

Abstract

LLVM, winner of the 2012 ACM Software System Award, has become an integral part of the software-development ecosystem for optimizing compilers, dynamic-language execution engines, source-code analysis and transformation tools, debuggers and linkers, and a whole host of programming-language and toolchain-related components. Now heavily used in both academia and industry, where it allows for rapid development of production-quality tools, LLVM is increasingly used in work targeted at high-performance computing. Research in, and implementation of, program analysis, compilation, execution, and profiling has clearly benefited from the availability of a high-quality, freely-available infrastructure on which to build. This workshop will focus on recent developments, from both academia and industry, that build on LLVM to advance the state of the art in high-performance computing.

In cooperation with: IEEE TCHPC

Held in conjunction with SC18: The International Conference for High Performance Computing, Networking, Storage, and Analysis

Format

This workshop will feature contributed papers and invited talks focusing on recent developments, from both academia and industry, that build on LLVM to advance the state of the art in high-performance computing.

Topics of interest include, but are not limited to:

The workshop will hold a lightning-talk session. Please contribute to making this session both vibrant and informative! An abstract and one-page summary are required for consideration.

Deadlines

Please see the SC18 home page for registration deadlines and other information associated with the parent event.

Submissions

Please submit papers using the SC18 Submissions system by selecting the "SC18 Workshop: LLVM-HPC2018 Full Papers" form. Papers must be in IEEE conference format (templates are available). Papers should be no more than 12 pages (including references and figures) and must be at least 8 pages long. Please also note IEEE's Article-Posting Policy.

To submit a lightning talk, please use the "SC18 Workshop: LLVM-HPC2018 Lightning Talks" form.

Proceedings

The proceedings will be archived in IEEE Xplore through TCHPC. Lightning-talk summaries will not be included in the proceedings.

Organizers

Program Committee

Name Affiliation
Alexis Perry Los Alamos National Laboratory
Cameron McInally Cray
Chandler Carruth Google
Erik Schnetter Perimeter Institute for Theoretical Physics
Frank Winter Thomas Jefferson National Accelerator Facility
James Brodman Intel
Jeff Hammond Intel
Jim Cownie Intel
Keno Fischer Julia Computing, Inc.
Michael Wong Codeplay
Nadav Rotem Facebook
Pat McCormick Los Alamos National Laboratory
Ralf Karrenberg NVIDIA
Sameer Shende University of Oregon
Sunita Chandrasekaran University of Delaware
Teresa Johnson Google
Tobias Grosser ETH Zürich
Torsten Hoefler ETH Zürich

Contact Information

Hal Finkel (hfinkel@anl.gov)