Homepage of Christopher Ré (Chris Re)
Christopher Ré
Email: chrismre at cs.stanford.edu
Our Lab's Github | Blog | Twitter
Department of Computer Science
Stanford University
353 Jane Stanford Way
Stanford, CA 94305-9025
- I'm a professor in the Stanford AI Lab (SAIL), the Center for Research on Foundation Models (CRFM), and the Machine Learning Group (bio). Our lab works on the foundations of the next generation of AI systems.
- On the AI side, I am fascinated by how we can learn from increasingly weak forms of supervision, by the basis of new architectures, by the role of data, and by the mathematical foundations of such techniques.
- On the systems side, I am broadly interested in how machine learning is changing how we build software and hardware. I'm particularly excited when we can blend AI and systems, e.g., Snorkel, Overton (YouTube), or Together (a toy weak-supervision sketch follows these bullets).
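Since the bullets above only name the weak-supervision theme, here is a toy sketch of the pattern behind Snorkel: programmatic labeling functions whose noisy, partial votes are combined into training labels. Every name here (`lf_keyword`, `weak_label`, the spam task) is hypothetical, and the majority vote stands in for Snorkel's learned label model; this is an illustration of the idea, not Snorkel's actual API.

```python
ABSTAIN = -1  # labeling functions may abstain instead of voting

# Hypothetical labeling functions for a toy spam task (1 = spam, 0 = not).
def lf_keyword(text):
    return 1 if "free money" in text.lower() else ABSTAIN

def lf_long(text):
    return 0 if len(text.split()) > 5 else ABSTAIN

def lf_shouting(text):
    return 1 if text.isupper() else ABSTAIN

def weak_label(texts, lfs):
    """Combine noisy labeling functions by majority vote. Snorkel instead
    learns each LF's accuracy and correlations with a generative label model."""
    labels = []
    for t in texts:
        votes = [v for v in (lf(t) for lf in lfs) if v != ABSTAIN]
        labels.append(max(set(votes), key=votes.count) if votes else ABSTAIN)
    return labels

print(weak_label(["FREE MONEY NOW", "see you at the meeting tomorrow, ok?"],
                 [lf_keyword, lf_long, lf_shouting]))  # -> [1, 0]
```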
Our work is inspired by the observation that data is central to these systems, and so data management principles (re-imagined) play a starring role. This sounds like Silicon Valley nonsense, but oddly enough, these ideas get used, thanks to amazing students and collaborations with Google Ads, YouTube, Apple, and more.
While we're very proud of our research ideas and their impact, the lab's real goal is to help students become professors, entrepreneurs, and researchers. To that end, over a dozen members of our group have gone on to professorships of their own. With students and collaborators, I've been fortunate enough to cofound a number of companies and a venture firm. For transparency, I try to list the companies I advise or invest in here and our research sponsors here. My students run the ML Sys Podcast.
- NeurIPS 2023 keynote (pptx|pdf|video) about building blocks for foundation models. GitHub for SysAI building blocks.
- We're interested in improving the foundations of foundation models.
* We released ThunderKittens (quick blog post), our opinionated take on building AI kernels.
* Blog post on sequence length and more; see the blog for details.
* FlashAttention is an IO-aware algorithm for exact attention. It is now widely used, including in MLPerf; see the MLPerf Story on Tri! Tri has since released FlashAttention-2 (Tri's Version 2). A toy sketch of the tiling idea follows this list.
* We continue to work on long sequences. See an explainer of a simplified version of S4 (S4 Explainer Blog): it can be computed as a convolution or as an RNN, using simple ideas from signal processing (a toy sketch of this duality also follows this list). S4 was SOTA on Long Range Arena and the first to solve Path-X. An update on this line of work.
* We've been working on Hyena, which uses ideas from signal processing, and on its applications in HyenaDNA and now Evo, led by the Arc Institute and Brian Hie. Evo was selected for the cover of Science.
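To make the FlashAttention bullet above concrete, here is a minimal NumPy sketch of its core trick: streaming over key/value tiles with an online softmax so the full attention matrix is never materialized. This is illustrative only, not the fused CUDA kernel; the function name `tiled_attention` and the `block` size are our own, masking and dropout are omitted, and the real algorithm also tiles over queries and keeps tiles in on-chip SRAM.

```python
import numpy as np

def tiled_attention(Q, K, V, block=64):
    """Attention computed tile-by-tile with an online softmax, in the
    spirit of FlashAttention's IO-aware algorithm (NumPy sketch)."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)          # running (unnormalized) output
    m = np.full(n, -np.inf)       # running row-wise max of scores
    l = np.zeros(n)               # running softmax normalizer
    for j in range(0, K.shape[0], block):     # stream over K/V tiles
        S = (Q @ K[j:j+block].T) * scale      # scores for this tile only
        m_new = np.maximum(m, S.max(axis=1))
        alpha = np.exp(m - m_new)             # rescale old accumulators
        P = np.exp(S - m_new[:, None])
        l = l * alpha + P.sum(axis=1)
        O = O * alpha[:, None] + P @ V[j:j+block]
        m = m_new
    return O / l[:, None]

# sanity check against the naive implementation that materializes all scores
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
assert np.allclose(tiled_attention(Q, K, V), (P / P.sum(axis=1, keepdims=True)) @ V)
```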
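Similarly, the "convolution and an RNN" claim about S4 comes down to a standard state-space duality, sketched below on a toy linear SSM. The names `ssm_rnn` and `ssm_conv` and the random parameters are hypothetical; real S4 uses a specially structured state matrix and computes the convolution kernel fast with FFTs, neither of which is shown here.

```python
import numpy as np

def ssm_rnn(A, B, C, u):
    """Unroll the linear state-space model as an RNN:
    x[k] = A x[k-1] + B u[k],  y[k] = C x[k]."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k
        ys.append(C @ x)
    return np.array(ys)

def ssm_conv(A, B, C, u):
    """The same model as a causal convolution with the kernel
    K = (CB, CAB, CA^2B, ...)."""
    L = len(u)
    K = np.array([C @ np.linalg.matrix_power(A, k) @ B for k in range(L)])
    return np.convolve(u, K)[:L]

# the two views agree on a random (scaled-down, hence stable) toy system
rng = np.random.default_rng(0)
n, L = 4, 32
A = 0.3 * rng.standard_normal((n, n))
B, C = rng.standard_normal(n), rng.standard_normal(n)
u = rng.standard_normal(L)
assert np.allclose(ssm_rnn(A, B, C, u), ssm_conv(A, B, C, u))
```

The RNN view gives O(1)-state autoregressive inference, while the convolution view lets training parallelize over the sequence; that trade-off is the point of the duality.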
- Some talks and resources
* Some resources for a budding community in Data-Centric AI and a blog post about it.
* SIGMOD keynote on data-centric AI, declarative ML, and foundation models in data: slides (YouTube)
* SIGMOD panel on Service, Science and Startups changing research
* Software 2.0 Overview at HAI
* Thanks, NeurIPS! Our Test-of-Time Award talk for Hogwild! is on YouTube.
* A quick video overview of our work on hidden stratification.
* A narrated version of Overton (pptx|YouTube), our high-level framework for machine learning built at Apple, and the paper.
* MLSys 2020 keynote talk (pdf|pptx) or WWW BIG. More articles are on the new group website; also see GitHub.
A messy, incomplete log of old updates is here.