Parallel Programming :: Fall 2019

Stanford CS149, Fall 2019

PARALLEL COMPUTING

From smartphones to multi-core CPUs and GPUs to the world's largest supercomputers and websites, parallel processing is ubiquitous in modern computing. The goal of this course is to provide a deep understanding of the fundamental principles and engineering trade-offs involved in designing modern parallel computing systems, as well as to teach the parallel programming techniques needed to use these machines effectively. Because writing good parallel programs requires an understanding of key machine performance characteristics, the course covers both parallel hardware and software design.

Basic Info

Tues/Thurs 3:00-4:20pm

Gates B3

See the course info page for details on course policies and logistics.

Fall 2019 Schedule

Sep 24 Course Introduction + Why Parallelism? (motivations for parallel chip designs, challenges of parallelizing code)
Sep 26 A Modern Multi-Core Processor (forms of parallelism: multicore, SIMD, threading; understanding latency and bandwidth)
Oct 1 Parallel Programming Models and their Corresponding HW/SW Implementations (ways of thinking about parallel programs and their corresponding hardware implementations, ISPC programming)
Oct 3 Parallel Programming Basics (the thought process of parallelizing a program in the data-parallel and shared address space models)
Oct 8 Program Optimization 1: Work Distribution and Scheduling (achieving good work distribution while minimizing overhead, scheduling Cilk programs with work stealing)
Oct 10 Program Optimization 2: Locality and Communication (message passing, async vs. blocking sends/receives, pipelining, increasing arithmetic intensity, avoiding contention)
Oct 15 GPU Architecture and CUDA Programming (CUDA programming abstractions and how they are implemented on modern GPUs)
Oct 17 Data-Parallel Thinking (data-parallel primitives: map, reduce, scan, prefix sum, groupByKey; see the example sketch after the schedule)
Oct 22 Distributed Computing using Spark (producer-consumer locality, the RDD abstraction, Spark implementation and scheduling)
Oct 24 Snooping-Based Cache Coherence (definition of memory coherence, invalidation-based coherence using MSI and MESI, false sharing)
Oct 29 Directory-Based Coherence + Implementing Synchronization (directory-based coherence, machine-level atomic operations, implementing locks, implementing barriers)
Oct 31 Memory Consistency (consistency vs. coherence, relaxed consistency models and their motivation, acquire/release semantics)
Nov 5 Midterm Exam
Nov 7 Fine-Grained Synchronization and Lock-Free Programming (fine-grained synchronization via locks; basics of lock-free programming: single-reader/writer queues, lock-free stacks, the ABA problem, hazard pointers)
Nov 12 Transactional Memory (motivation for transactions, design space of transactional memory implementations, lazy-optimistic HTM)
Nov 14 Heterogeneous Parallelism and Hardware Specialization (energy-efficient computing, motivation for heterogeneous processing, fixed-function processing, FPGAs, mobile SoCs)
Nov 19 Domain-Specific Programming Systems (motivation for DSLs, case study of the Halide image processing DSL)
Nov 21 Programming for Hardware Specialization (performance programming for FPGAs and CGRAs using Spatial)
Dec 3 Parallel Graph Processing Frameworks + How DRAM Works (GraphLab, Ligra, and GraphChi; streaming graph processing; graph compression)
Dec 5 Efficiently Evaluating DNNs + Course Wrap-Up (scheduling convolution layers, exploiting precision and sparsity, DNN accelerators such as GPU Tensor Cores and the TPU)
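
To give a feel for the data-parallel primitives covered in the Oct 17 lecture, here is a minimal sketch of map, reduce, and exclusive scan (prefix sum) expressed with standard C++17 parallel algorithms. This is an illustration only, not course material: the lectures use ISPC- and CUDA-style examples, and the variable names below are made up for this sketch.

```cpp
// Sketch of data-parallel primitives (map, reduce, exclusive scan) using
// C++17 parallel algorithms. Hypothetical example for illustration only.
#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> x = {3, 1, 7, 0, 4, 1, 6, 3};
    std::vector<int> y(x.size()), prefix(x.size());

    // map: apply a function to every element (independent work items)
    std::transform(std::execution::par, x.begin(), x.end(), y.begin(),
                   [](int v) { return 2 * v; });

    // reduce: combine all elements with an associative operator
    int total = std::reduce(std::execution::par, y.begin(), y.end(), 0);

    // exclusive scan (prefix sum): element i receives the sum of y[0..i-1]
    std::exclusive_scan(std::execution::par, y.begin(), y.end(),
                        prefix.begin(), 0);

    std::cout << "total = " << total << "\n";
    std::cout << "last prefix = " << prefix.back() << "\n";
    return 0;
}
```

This is only one possible expression of these primitives; the lectures discuss how the same operations are implemented efficiently on SIMD units, GPUs, and distributed systems such as Spark.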

Programming Assignments

Written Assignments