Plex: Towards Reliability using Pretrained Large Model Extensions

Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models

Zhihe Lu

arXiv (Cornell University), 2023

Enabling Calibration In The Zero-Shot Inference of Large Vision-Language Models

Pranav Raj

arXiv (Cornell University), 2023

Task Residual for Tuning Vision-Language Models

Zhihe Lu

arXiv (Cornell University), 2022

ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models

Jyoti Aneja

arXiv (Cornell University), 2022

A Survey of Vision-Language Pre-Trained Models

Yifan Du

Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence

To what extent do DNN-based image classification models make unreliable inferences?

Xiangyu Zhang

Empirical Software Engineering, 2021

Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images

Jamiu A. Idowu

2023

Analyzing the Domain Robustness of Pretrained Language Models, Layer by Layer

Min-Yen Kan

2021

Is Robustness the Cost of Accuracy? A Comprehensive Study on the Robustness of 18 Deep Image Classification Models

Mingxiao Feng

A Comprehensive Evaluation Framework for Deep Model Robustness

Xinghai Gao

2021

ConfidenceBench: A Confidence Calibration Benchmark for Large Language Models

Matt ffrench-Constant

ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations

Mark Fayez Ibrahim

arXiv (Cornell University), 2022

Discrepancies among pre-trained deep neural networks: a new threat to model zoo reliability

Pongpatapee Peerapatanapokin

Proceedings of the 30th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering

ECO: Ensembling Context Optimization for Vision-Language Models

Marco Bertini

arXiv (Cornell University), 2023

Robust Learning Through Cross-Task Consistency

Zhangjie Cao

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020

Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks

Helena Moniz

2021

REBUS: A Robust Evaluation Benchmark of Understanding Symbols

Arjun Panickssery

arXiv (Cornell University), 2024

Making Pre-trained Language Models End-to-end Few-shot Learners with Contrastive Prompt Tuning

Ziyun Xu

arXiv (Cornell University), 2022

GluonCV and GluonNLP: Deep Learning in Computer Vision and Natural Language Processing

Sheng Zha

Journal of Machine Learning Research, 2020

LAION-5B: An open large-scale dataset for training next generation image-text models

Romain Beaumont

arXiv (Cornell University), 2022

Developing reliability metrics and validation tools for datasets with deep linguistic information

Sergio De Castro

2011

Assessing the Robustness of Visual Question Answering

Marcel Worring

2019

Learning from the uncertain

Harald Sack

Proceedings of the 15th International Conference on Knowledge Technologies and Data-driven Business

Explore the Potential Performance of Vision-and-Language Navigation Model: a Snapshot Ensemble Method

Derry Wijaya

2021

Examining Large Pre-Trained Language Models for Machine Translation: What You Don't Know About Them

Serge Gladkoff

2022

Toward Improving the Robustness of Deep Learning Models via Model Transformation

Zan Wang

Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering

Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters

Yunzhi Zhuge

arXiv (Cornell University), 2024

WenLan: Bridging Vision and Language by Large-Scale Multi-Modal Pre-Training

Jingyuan Wen

arXiv (Cornell University), 2021

Data-centric Reliability Evaluation of Individual Predictions

Abolfazl Asudeh

arXiv (Cornell University), 2022

Semantically Distributed Robust Optimization for Vision-and-Language Inference

Pratyay Banerjee

2021

CatLIP: CLIP-level Visual Recognition Accuracy with 2.7x Faster Pre-training on Web-scale Image-Text Data

Mohammad Hossein Sekhavat

arXiv (Cornell University), 2024

Bag of Experts Architectures for Model Reuse in Conversational Language Understanding

Alex Marin

Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)

cViL: Cross-Lingual Training of Vision-Language Models using Knowledge Distillation

Devansh Gautam

arXiv (Cornell University), 2022

Exploring Uncertainty Measures for Image-caption Embedding-and-retrieval Task

Kuniaki Uehara

ACM Transactions on Multimedia Computing, Communications, and Applications, 2021

Measuring and Mitigating Local Instability in Deep Neural Networks

Subhrangshu Nandi

arXiv (Cornell University), 2023
