Joel Jang

2024

Latent Action Pretraining from Videos

Seonghyeon Ye*, Joel Jang*, Byeongguk Jeon, Sejune Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo

Preprint

Semiparametric Token-Sequence Co-Supervision

Hyunji Lee*, Doyoung Kim*, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon Oh, Minjoon Seo

ACL 2024

LangBridge: Multilingual Reasoning Without Multilingual Supervision

Dongkeun Yoon, Joel Jang, Sungdong Kim, Seungone Kim, Sheikh Shafayat, Minjoon Seo

ACL 2024

Exploring the Practicality of Generative Retrieval on Dynamic Corpora

Chaeeun Kim*, Soyoung Yoon*, Hyunji Lee, Joel Jang, Sohee Yang, Minjoon Seo

EMNLP 2024

How Well Do Large Language Models Truly Ground?

Hyunji Lee*, Sejune Joo*, Chaeeun Kim, Joel Jang, Doyoung Kim, Kyoung-Woon Oh, Minjoon Seo

NAACL 2024

Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging

Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu

NeurIPS 2024 AFM Workshop

Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

Seungone Kim*, Jamin Shin*, Yejin Cho*, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo

ICLR 2024

Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis

Sohee Yang, Jonghyeon Kim, Joel Jang, Seonghyeon Ye, Hyunji Lee, Minjoon Seo

TACL 2024

2023

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning

Seungone Kim*, Sejune Joo*, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo

EMNLP 2023

Retrieval of Soft Prompt Enhances Zero-shot Task Generalization

Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo

EMNLP 2023 Findings

Knowledge Unlearning for Mitigating Privacy Risks in Language Models

Joel Jang, Dongkeun Yoon, Sohee Yang, Sungmin Cha, Moontae Lee, Lajanugen Logeswaran, Minjoon Seo

ACL 2023

Gradient Ascent Post-training Enhances Language Model Generalization

Dongkeun Yoon*, Joel Jang*, Sungdong Kim, Minjoon Seo

ACL 2023 (short)

Prompt Injection: Parameterization of Fixed Inputs

Eunbi Choi, Yongrae Jo, Joel Jang, Minjoon Seo

ACL 2023 Findings

Exploring the Benefits of Training Expert Language Models over Instruction Tuning

Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo

ICML 2023

Guess the Instruction! Making Language Models Stronger Zero-Shot Learners

Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo

ICLR 2023

2022

Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts

Joel Jang*, Seonghyeon Ye*, Minjoon Seo

NeurIPS 2022 Workshop on Transfer Learning for NLP (TL4NLP)

TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models

Joel Jang*, Seonghyeon Ye*, Changho Lee, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Minjoon Seo

EMNLP 2022

Towards Continual Knowledge Learning of Language Models

Joel Jang, Seonghyeon Ye, Sohee Yang, Joongbo Shin, Janghoon Han, Gyeonghun Kim, Stanley Jungkyu Choi, Minjoon Seo

ICLR 2022

2021

Sequential Targeting: A Continual Learning Approach for Data Imbalance in Text Classification

Joel Jang, Yoonjeon Kim, Kyoungho Choi, Sungho Suh

Expert Systems with Applications (2021)

(* indicates equal contribution)