Raymond J. Mooney's Home Page
Research | Teaching | Personal | Contact | Note to Grad-Student Applicants
Research Interests and Publications
As a member of the Artificial Intelligence Laboratory, I lead the Machine Learning Research Group, which has explored a variety of areas; my current focus is on natural language processing / computational linguistics.
Recent Publications:
Here are my publications for 2025, 2024, 2023, 2022, and 2021.
Research Areas (click on an area for related publications):
- Natural Language Learning: Learning for syntactic and semantic parsing, lexicon acquisition, information extraction, language generation, machine translation, and word sense disambiguation.
- Connecting Language and Perception: Grounding natural language understanding in computer vision and robotics.
- Natural Language for Software Engineering: Using NLP to help software development by generating code from language or generating NL comments from code.
- Explainable AI: AI systems that can explain their reasoning to their human users.
- Statistical Relational Learning: Learning methods that combine the strengths of predicate logic and probability.
- Information Extraction: Identifying specific pieces of structured data in web pages or natural-language documents.
- Transfer Learning: Using previously acquired knowledge to aid learning on new related problems.
- Active Learning: Automated selection of good training examples for supervised or semi-supervised learning.
- Abductive Reasoning: Producing the best explanation for observed evidence using both logical and probabilistic reasoning.
- Text Categorization and Clustering: Supervised and unsupervised classification of documents and web pages.
- Text Data Mining: Discovering knowledge from text using information extraction and rule induction.
- Record Linkage & Duplicate Detection: Identifying textually similar but distinct database records that refer to the same entity.
- Bioinformatics: Learning to extract knowledge from biomedical literature.
- Semi-Supervised Learning: Learning from a mixture of labeled and unlabeled data.
- Ensemble Learning: Learning effective committees of hypotheses.
- Learning for Recommender Systems: Content-based and collaborative recommending.
- Inductive Logic Programming: Learning Prolog programs (rules in first-order predicate logic) from examples.
- Neuro-Symbolic Learning: Comparing and combining neural and symbolic learning methods.
- Knowledge-Base and Theory Refinement: Automatically modifying rule bases and Bayesian networks to fit empirical data.
For a complete list of areas and publications, see the UT Machine Learning Research Group home page. Also see my profile on Google Scholar.
Current Research Group Meetings:
- NLL: Natural Language Learning
- CLAMP: Connecting Language and Perception
- NL4SE: Natural Language for Software Engineering
Additional Affiliations:
- I am also a member of the UT Computational Linguistics Lab.
- I am also affiliated with the UT Department of Statistics and Data Science.
- I am also affiliated with the UT Center for Computational Biology and Bioinformatics.
- I was President of the International Machine Learning Society from 2008 to 2011.
- I was program co-chair (with Yolanda Gil) of the Twenty-First National Conference on Artificial Intelligence (AAAI-06) in Boston, July 16-20, 2006.
- I was general chair of the joint Human Language Technology Conference / Empirical Methods in Natural Language Processing Conference for 2005 (HLT/EMNLP-05).
- I was elected a Fellow of the American Association for Artificial Intelligence (AAAI) in 2005 "For significant contributions to machine learning, particularly explanation-based learning, theory refinement, and learning for natural-language processing."
- I was elected a Fellow of the Association for Computing Machinery (ACM) in 2010 "For contributions to machine learning and natural language processing."
- I was elected a Fellow of the Association for Computational Linguistics (ACL) in 2014 "For significant contributions to machine learning for semantic parsing, language generation, and multimodal integration."
Vita:
See my complete vita (in PDF).
Research Talks:
See a video of my UT issue-oriented talk (1/24/25), Has Machine Learning Theory Aided Experimental Progress?
See an interview with me on the Hidden Layers Podcast.
See my invited talk at the EMNLP 2023 Big Picture Workshop (12/7/23), The Vision Thing: Finding and Pursuing Your Research Passion.
See a video of my UT issue-oriented talk (1/20/23), The New Era of Big Science AI: How can academics adapt to the new reality?
See a video of my NLPCC'22 keynote talk on Answering Why Questions about Narrative Text.
See a video of my SIGDIAL'21 keynote talk on Robot Dialog: Perceptually Grounded Communication with Lifelong Learning.
See a video of my talk on Deep Learning for Automating Software Documentation Maintenance.
See a video of my invited talk on "The Deep Learning Revolution: Progress, Promise and Profligate Promotion" at Computing in the 21st Century 2017.
See videos of my invited talks on grounded language learning at Cornell Tech (2017), the NIPS 2015 Multimodal Machine Learning Workshop, and AAAI-2013.
Also see my research talks on Deep Natural Language Semantics and Generating Natural-Language Video Descriptions Using Text-Mined Knowledge, as well as PowerPoint presentations for some of my older talks.
Course Information
Fall 2025
Spring 2025
Spring 2018
Fall 2010
Spring 2009
Fall 2007
Personal History
I grew up in the 1960s and 70s in the small town of O'Fallon, Illinois, where, starting in 1967, I attended St. Clare grade school and, starting in 1975, O'Fallon Township High School. See a scanned version of a paper entitled "High-level Artificial Intelligence: An Imminent Possibility with an Enormous Potential for Good" that I wrote (on a typewriter!) for a high-school English class when I was only 17. My enthusiasm for AI started early and has not waned; however, my expectations about AI's rate of progress and its positive social implications have matured and (hopefully) become more realistic.
In the fall of 1979, I went to the University of Illinois in Champaign-Urbana, where I obtained all of my degrees. In December 1987, I completed my Ph.D. thesis under the direction of Prof. Gerald DeJong and then joined the faculty of the Department of Computer Science at the University of Texas at Austin.
See more information on my academic genealogy, which traces my professorial lineage back through Danish linguists to German theologians.
I have become particularly well known for a certain strongly stated comment, which can be embedded into the following vector: (0.62384789, 0.232328242, 0.2394182754, 0.9234583745, 0.9034527345, 0.2348534598743, 0.789045724387, 0.34750893274895, 0.23475809273485723, 0.23452374958, 0.094358923475823475, 0.908452352348905, 0.024375823785, 0.980459238409582345) (click to decode).
Contact Information
Office:
3.806 GDC, (512) 471-9558
Email address:
Postal address:
Department of Computer Science
The University of Texas at Austin
2317 Speedway, Stop D9500
Austin, Texas 78712-1757
U.S.A.
Home address:
4707 Eby Lane
Austin, Texas 78731-4507
U.S.A.
Note to Potential Grad-Student and Internship Applicants
Unfortunately, I am unable to personally respond to email requests regarding applications to our graduate program or other solicitations for positions in my research lab. I encourage potential graduate-student applicants to see the department information on applying to our graduate program. I am no longer recruiting new Ph.D. students, and I am afraid I currently have no funding or capacity to advise summer internships.