Christopher Manning, Stanford NLP
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Associate Director, Stanford Institute for Human-Centered Artificial Intelligence (HAI)
Stanford NLP Group, Stanford AI Lab, HAI, Linguistics and Computer Science, Stanford University
What's New?
- I was elected to the American Academy of Arts and Sciences (2025).
- I was elected to the National Academy of Engineering (NAE) for the development and dissemination of natural language processing methods (2025).
- I gave the opening keynote at the new Conference on Language Modeling (COLM) on Meaning and Intelligence in Language Models: From Philosophy to Agents in a World (2024).
- GloVe: Global Vectors for Word Representation by Jeffrey Pennington, Richard Socher, and Christopher Manning won the 10-year Test of Time Award at ACL 2024 (2024).
- I received the 2024 IEEE John von Neumann Medal for advances in computational representation and analysis of natural language [TWIML podcast] (2024).
Bio
Christopher Manning is the inaugural Thomas M. Siebel Professor in Machine Learning in the Departments of Linguistics and Computer Science at Stanford University, a Founder and Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and was Director of the Stanford Artificial Intelligence Laboratory (SAIL) from 2018 to 2025. From 2010, Manning pioneered Natural Language Understanding and Inference using Deep Learning, with impactful research on sentiment analysis, paraphrase detection, the GloVe model of word vectors, attention, neural machine translation, question answering, self-supervised model pre-training, tree-recursive neural networks, machine reasoning, summarization, and dependency parsing, work for which he has received two ACL Test of Time Awards and the IEEE John von Neumann Medal (2024). He earlier led the development of empirical, probabilistic approaches to NLP, computational linguistics, and language understanding, defining and building theories and systems for natural language inference, syntactic parsing, machine translation, and multilingual language processing, work for which he won ACL, Coling, EMNLP, and CHI Best Paper Awards. In NLP education, Manning coauthored foundational textbooks on statistical NLP (Manning and Schütze 1999) and information retrieval (Manning, Raghavan, and Schütze, 2008), and his online CS224N Natural Language Processing with Deep Learning course videos have been watched by hundreds of thousands of people. In linguistics, Manning is a principal developer of Stanford Dependencies and Universal Dependencies, and has authored monographs on ergativity and complex predicates. He is the founder of the Stanford NLP group (@stanfordnlp) and was an early proponent of open source software in NLP with Stanford CoreNLP and Stanza. He is an ACM Fellow, an AAAI Fellow, and an ACL Fellow, and was President of the ACL in 2015. Manning earned a B.A. (Hons) from The Australian National University, a Ph.D. from Stanford in 1994, and an Honorary Doctorate from U. Amsterdam in 2023. He held faculty positions at Carnegie Mellon University and the University of Sydney before returning to Stanford.
Contact
M | Dept of Computer Science, Gates Building 3A, 353 Jane Stanford Way, Stanford CA 94305-9030, USA
---|---
E | manning@cs.stanford.edu
T | @chrmanning
W | +1 (650) 723-7683
F | +1 (650) 725-1449
R | Gates 348
O | Contact Suzanne
A | Suzanne Lessard, Gates 232, +1 (650) 723-6319, slessard@stanford.edu
Brief CV
- I'm Australian 🇦🇺 (“I come from a land of wide open spaces …”)
- BA (Hons) Australian National University 1989 (majors in mathematics, computer science and linguistics)
- PhD Stanford Linguistics 1994
- Asst Professor, Carnegie Mellon University Computational Linguistics Program 1994–96
- Lecturer B, University of Sydney Dept of Linguistics 1996–99
- Asst Professor, Stanford University Depts of Computer Science and Linguistics 1999–2006
- Assoc Professor, Stanford University Depts of Linguistics and Computer Science 2006–2012
- Professor, Stanford University Depts of Linguistics and Computer Science 2012–
- President of the Association for Computational Linguistics 2015
- Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science 2017–
- Honorary Doctorate, University of Amsterdam 2023
Papers
Here is my old publications list. However, I've become lazy, so you're more likely to find recent stuff on Google Scholar, Semantic Scholar, or the NLP Group publications page.
Books
Manning, Raghavan, and Schütze, Introduction to Information Retrieval (Cambridge University Press, 2008).
Manning and Schütze, Foundations of Statistical Natural Language Processing (MIT Press, 1999).
Andrews and Manning, Complex Predicates and Information Spreading in LFG (CSLI Publications, 1999).
Manning, Ergativity: Argument Structure and Grammatical Relations (CSLI Publications, 1996).
Talks/Videos
Some of my talks are available online.
Students
I have a page listing all my Ph.D. graduates. You can find all my current students on the Stanford NLP Group People page.
Research Projects
The general area of my research is robust but linguistically sophisticated natural language understanding and generation, and opportunities to use it in real-world domains. Particular current topics include deep learning for NLP, compositionality, question answering, large pre-trained language models, knowledge and reasoning, Universal Dependencies, and low-resource languages. To find out more about what I do, it's best to look at my papers.
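For a concrete taste of what Universal Dependencies analyses look like in practice, here is a minimal sketch using our Stanza library (standard Stanza usage; the sentence and the printed fields are just illustrative):

```python
# Minimal sketch: Universal Dependencies parsing with Stanza.
import stanza

stanza.download("en")  # one-time download of the English models
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

doc = nlp("The quick brown fox jumps over the lazy dog.")
for sent in doc.sentences:
    for word in sent.words:
        # word.head is a 1-based index into sent.words; 0 means the root
        head = sent.words[word.head - 1].text if word.head > 0 else "ROOT"
        print(f"{word.text}\t{word.lemma}\t{word.upos}\t{word.deprel}\t<- {head}")
```

Each word comes back with its lemma, universal POS tag, and its dependency relation to its head word.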
- Unadmitted students: I don't do admissions. You need to apply to a program in the usual manner; see the graduate admissions pages for Linguistics (https://linguistics.stanford.edu/degree-programs/graduate-admissions) and for Computer Science.
- PhD students in CS/Linguistics or allied fields: please contact me directly about research opportunities.
- Masters students: I often employ a couple of masters students. Most appealing are people with a background in NLP, and time to devote to an RAship. It helps your case to have done well in CS 224N: NLP.
- Undergraduate students in CS/Linguistics or allied fields: please contact me directly.
Courses
Online videos! You can find complete videos for several NLP courses that I have (co-)taught online:
- CS224N: Natural Language Processing with Deep Learning is available on YouTube, with accompanying slides. There have been five editions so far: (i) 2017 playlist and slides, (ii) 2019 playlist and slides, (iii) 2021 playlist and slides, (iv) 2023 update playlist and slides, and (v) 2024 playlist and slides.
- Natural Language Processing (a.k.a. the 2012 Coursera NLP-class, one of the earliest MOOCs) by Dan Jurafsky and Christopher Manning on YouTube [slides]. If you don't have much background in AI, ML, or NLP, you should start with this class. A more modern take on a broader range of content appears in CS124/Linguist 180: From Languages to Information [slides], primarily by Dan Jurafsky, but I still make some cameo appearances for Information Retrieval. We haven't found the energy to keep doing the MOOC.
- Natural Language Processing (a.k.a. CS224N Spring 2008) by Christopher Manning [slides]. This is an aging version of my traditional probabilistic NLP course. It looks like you can only watch these videos with Flash. ☹️
In Autumn 2022 and 2024, I taught Linguistics 200: Foundations of Linguistic Theory. This is a class for Linguistics Ph.D. students, aimed at giving them a richer, broader appreciation of the development of linguistic thinking.
Nearly every year since 2000, I teach CS 224N / Ling 284: Natural Language Processing with Deep Learning.
From 2003 through 2019, I taught CS 276: Information Retrieval and Web Search, in later years with Pandu Nayak. Earlier versions of this course included two years of two-quarter sequences (CS276A/B) on information retrieval and text information classification and extraction, broadly construed ("IR++"): Fall quarter course website, Winter quarter course website. Early versions of this course were co-taught by me, Prabhakar Raghavan, and Hinrich Schütze.
In Winter 2024, Dan Jurafsky and I led a seminar CS 324H: History of Natural Language Processing.
In Fall 2016, I taught Linguistics 278: Programming for linguists (and any other digital humanities or text-oriented social science students who think it might be a good match), mainly using Jupyter notebooks.
I co-taught tutorials on Deep Learning for NLP at ACL 2012 with Yoshua Bengio and Richard Socher, and at NAACL 2013 with Richard Socher. Slides, references, and videos are available.
In June 2011, I taught a tutorial, Natural Language Processing Tools for the Digital Humanities, at Digital Humanities 2011 at Stanford.
In Fall 2007, I taught Ling 289: Quantitative and Probabilistic Explanation in Linguistics, MW 2:15–3:45 in 160-318. I previously taught it in Winter 2002 (née Ling 236) and Winter 2005 (as Ling 235).
In the summer of 2007, I taught at the LSA Linguistic Institute: Statistical Parsing and Computational Linguistics in Industry.
In Fall 1999 and Winter 2001, I taught CS 121: Artificial Intelligence. The textbook was S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach.
I ran the NLP Reading Group from 1999–2002. The NLP Reading Group is now student-organized.
Other stuff
LaTeX: When I had more time (i.e., when I was a grad student), I used to spend some of it writing (La)TeX macros. [Actually, that's a lie; I still spend some time doing it...]
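(For a flavor of what I mean, here's a toy sketch, purely illustrative and not one of my actual macros: a command that stacks a translation gloss under a word.)

```latex
\documentclass{article}
% Toy example: \gloss{word}{gloss} typesets a word with its gloss beneath it.
\newcommand{\gloss}[2]{%
  \begin{tabular}[t]{@{}c@{}}#1\\ \footnotesize\emph{#2}\end{tabular}}
\begin{document}
\gloss{perro}{dog} \gloss{grande}{big}
\end{document}
```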
We've got two kids: Joel [linkedin, github] and Casey [linkedin, github]. Here are my (aging) opinions on books for kids.
http://nlp.stanford.edu/~manning/
Christopher Manning, manning@cs.stanford.edu. Hand-rolled HTML. Last modified: 2025-03-30.