Matt Jordan @ UT

Publications/Projects

DataComp-LM: In search of the next generation of training sets for language models. Li et al. Preprint. [PDF] [Code]
OpenLM: a minimal but performative language modeling (LM) repository. Gururangan et al., 2023. Open Source Project. [Code]
Conditional Generative Models are Sufficient to Sample from Any Causal Effect Estimand. Md Musfiqur Rahman*, Matt Jordan*, Murat Kocaoglu. ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling. [PDF] [Code]
Lovász Theta Contrastive Learning. Georgios Smyrnis, Matt Jordan, Ananya Uppal, Giannis Daras, Alexandros G. Dimakis. NeurIPS 2022 Workshop: Self-Supervised Learning - Theory and Practice. [PDF] [Code]
Zonotope Domains for Lagrangian Neural Network Verification. Matt Jordan*, Jonathan Hayase*, Alexandros G. Dimakis, Sewoong Oh. Advances in Neural Information Processing Systems (NeurIPS) 2022. [PDF] [Code]
Inverse Problems Leveraging Pre-trained Contrastive Representations. Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis. Advances in Neural Information Processing Systems (NeurIPS) 2021. [PDF] [Code]
Provable Lipschitz Certification for Generative Models. Matt Jordan, Alexandros G. Dimakis. International Conference on Machine Learning (ICML) 2021. [PDF] [Code]
Quarantines as a Targeted Immunization Strategy. Jessica Hoffmann*, Matt Jordan*, Constantine Caramanis. Preprint, arXiv:2008.08262. [PDF]
Exactly Computing the Local Lipschitz Constant of ReLU Networks. Matt Jordan, Alexandros G. Dimakis. Advances in Neural Information Processing Systems (NeurIPS) 2020. [PDF] [Code]
Provable Certificates for Adversarial Examples: Fitting a Ball in a Union of Polytopes. Matt Jordan, Justin Lewis, Alexandros G. Dimakis. Advances in Neural Information Processing Systems (NeurIPS) 2019. [PDF] [Code] [Talk]
Quantifying Perceptual Distortion of Adversarial Examples. Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis. Preprint, arXiv:1902.08265. [PDF]