Yoon Kim
I am an assistant professor at MIT (EECS/CSAIL). I obtained my PhD in computer science from Harvard University, where I was advised by Alexander Rush.
Research
I work on natural language processing and machine learning. Current interests include:
- Efficient training and deployment of large-scale models
- Understanding the capabilities and limitations of language models
- Symbolic mechanisms for controlling and augmenting neural networks
Group
Postdocs
Hadeel Al-Negheimish
PhD Students
Lucas Torroba Hennigen
Tiwa Eisape (co-advised with Roger Levy)
Han Guo (co-advised with Eric Xing)
Ani Nrusimha
Abbas Zeitoun
Linlu Qiu
Zhaofeng Wu
Songlin Yang
Isha Puri (co-advised with Marzyeh Ghassemi)
Former Members
Bailin Wang (Postdoc → Apple)
Recent Papers [all publications]
On the Duality between Gradient Transformations and Adapters
Lucas Torroba Hennigen, Hunter Lang, Han Guo, Yoon Kim
ICML 2025 [paper]
The Semantic Hub Hypothesis: Language Models Share Semantic Representations Across Languages and Modalities
Zhaofeng Wu, Xinyan Velocity Yu, Dani Yogatama, Jiasen Lu, Yoon Kim
ICLR 2025 [paper]
Parallelizing Linear Transformers with the Delta Rule over Sequence Length
Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, Yoon Kim
NeurIPS 2024 [paper, slides, blog]
Learning to Decode Collaboratively with Multiple Language Models
Shannon Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, David Sontag
ACL 2024 [paper, code]
What Do Language Models Hear? Probing for Auditory Representations in Language Models
Jerry Ngo, Yoon Kim
ACL 2024 [paper]
Gated Linear Attention Transformers with Hardware-Efficient Training
Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim
ICML 2024 [paper, slides, code]
In-Context Language Learning: Architectures and Algorithms
Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas
ICML 2024 [paper, code]
Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
NAACL 2024 [paper, code]
LQ-LoRA: Low-rank Plus Quantized Matrix Decomposition for Efficient Language Model Finetuning
Han Guo, Philip Greengard, Eric P. Xing, Yoon Kim
ICLR 2024 [paper, code]