Exploring Uncertainty Measures for Image-caption Embedding-and-retrieval Task
Related papers
Uncertainty Quantification in Multimodal Ensembles of Deep Learners
2020
Uncertainty quantification in deep learning is an active area of research that examines two primary types of uncertainty: epistemic uncertainty and aleatoric uncertainty. Epistemic uncertainty is caused by not having enough data to learn adequately. This creates volatility in the parameters and predictions, which manifests as uncertainty. High epistemic uncertainty can indicate that the model's prediction is based on a pattern with which it is not familiar. Aleatoric uncertainty measures the uncertainty due to noise in the data. Two additional active areas of research are multimodal learning and malware analysis. Multimodal learning takes into consideration distinct expressions of features, such as different representations (e.g., audio and visual data) or different sampling techniques. Multimodal learning has recently been used in malware analysis to combine multiple types of features. In this work, we present and analyze a novel technique to measure epistemic uncertainty fr...
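The abstract above distinguishes epistemic from aleatoric uncertainty in an ensemble setting. The sketch below is not the paper's estimator; it only illustrates a common generic approach, assuming epistemic uncertainty is proxied by disagreement (variance) across ensemble members and aleatoric uncertainty by the average entropy of each member's prediction. All data and function names are illustrative.

```python
import numpy as np

# Hedged sketch: epistemic uncertainty as disagreement among ensemble members,
# aleatoric uncertainty as average per-member predictive entropy.

def epistemic_uncertainty(member_probs: np.ndarray) -> np.ndarray:
    """member_probs: (n_members, n_samples, n_classes) softmax outputs.
    Returns one scalar per sample: mean per-class variance across members."""
    return member_probs.var(axis=0).mean(axis=-1)

def aleatoric_uncertainty(member_probs: np.ndarray) -> np.ndarray:
    """Expected entropy of each member's prediction (data-noise proxy)."""
    eps = 1e-12
    entropy = -(member_probs * np.log(member_probs + eps)).sum(axis=-1)
    return entropy.mean(axis=0)

# Toy usage with random "ensemble" outputs: 5 members, 3 samples, 4 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3, 4))
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
print(epistemic_uncertainty(probs), aleatoric_uncertainty(probs))
```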
Learning-based Composite Metrics for Improved Caption Evaluation
Proceedings of ACL 2018, Student Research Workshop, 2018
The evaluation of image caption quality is a challenging task, which requires the assessment of two main aspects in a caption: adequacy and fluency. These quality aspects can be judged using a combination of several linguistic features. However, most of the current image captioning metrics focus only on specific linguistic facets, such as the lexical or semantic, and fail to meet a satisfactory level of correlation with human judgements at the sentence-level. We propose a learning-based framework to incorporate the scores of a set of lexical and semantic metrics as features, to capture the adequacy and fluency of captions at different linguistic levels. Our experimental results demonstrate that composite metrics draw upon the strengths of standalone measures to yield improved correlation and accuracy.
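To make the "scores as features" idea concrete, here is a minimal sketch, not the paper's actual model: a logistic-regression combiner over per-caption scores from standalone metrics. The feature columns (BLEU, METEOR, a semantic similarity score) and the toy labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: combine standalone caption-metric scores into one learned score.
# Each row: [bleu, meteor, semantic_sim]; label 1 = human-judged "good" caption.
X = np.array([[0.72, 0.55, 0.81],
              [0.10, 0.21, 0.33],
              [0.65, 0.60, 0.70],
              [0.05, 0.15, 0.25]])
y = np.array([1, 0, 1, 0])

combiner = LogisticRegression().fit(X, y)

# The predicted probability acts as the composite quality score for a new caption.
new_caption_features = np.array([[0.50, 0.45, 0.66]])
print(combiner.predict_proba(new_caption_features)[:, 1])
```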
Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval
2020
Uncertainty quantification in image retrieval is crucial for downstream decisions, yet it remains a challenging and largely unexplored problem. Current methods for estimating uncertainties are poorly calibrated, computationally expensive, or based on heuristics. We present a new method that views image embeddings as stochastic features rather than deterministic features. Our two main contributions are (1) a likelihood that matches the triplet constraint and that evaluates the probability of an anchor being closer to a positive than a negative; and (2) a prior over the feature space that justifies the conventional l2 normalization. To ensure computational efficiency, we derive a variational approximation of the posterior, called the Bayesian triplet loss, that produces state-of-the-art uncertainty estimates and matches the predictive performance of current state-of-the-art methods.
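The core quantity in contribution (1) is the probability that an anchor embedding lies closer to a positive than to a negative when embeddings are stochastic. The paper derives a closed-form variational treatment; the sketch below only estimates that probability by naive Monte Carlo under diagonal Gaussian embeddings, with all means, variances, and sample counts chosen for illustration.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of P(d(a, p) < d(a, n)) when each
# embedding is a diagonal Gaussian (mean, variance) rather than a point.
rng = np.random.default_rng(0)

def p_closer(mu_a, var_a, mu_p, var_p, mu_n, var_n, n_samples=10_000):
    a = rng.normal(mu_a, np.sqrt(var_a), size=(n_samples, len(mu_a)))
    p = rng.normal(mu_p, np.sqrt(var_p), size=(n_samples, len(mu_p)))
    n = rng.normal(mu_n, np.sqrt(var_n), size=(n_samples, len(mu_n)))
    d_ap = ((a - p) ** 2).sum(axis=1)   # squared distance anchor-positive
    d_an = ((a - n) ** 2).sum(axis=1)   # squared distance anchor-negative
    return (d_ap < d_an).mean()

# Toy 2-D stochastic embeddings (values are illustrative).
print(p_closer(mu_a=np.array([0.0, 0.0]), var_a=np.array([0.1, 0.1]),
               mu_p=np.array([0.2, 0.1]), var_p=np.array([0.1, 0.1]),
               mu_n=np.array([1.5, 1.2]), var_n=np.array([0.2, 0.2])))
```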
Retrieval Augmentation to Improve Robustness and Interpretability of Deep Neural Networks
2021
Deep neural network models have achieved state-of-the-art results in various tasks related to vision and/or language. Despite the use of large training data, most models are trained by iterating over single input-output pairs, discarding the remaining examples for the current prediction. In this work, we actively exploit the training data to improve the robustness and interpretability of deep neural networks, using the information from nearest training examples to aid the prediction both during training and testing. Specifically, the proposed approach uses the target of the nearest input example to initialize the memory state of an LSTM model or to guide attention mechanisms. We apply this approach to image captioning and sentiment analysis, conducting experiments with both image and text retrieval. Results show the effectiveness of the proposed models for the two tasks, on the widely used Flickr8k and IMDB datasets, respectively. Our code is publicly available.
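The retrieval step underlying this approach can be sketched independently of the decoder: find the training example nearest to the query input and hand its target to the model (e.g., to initialize an LSTM memory state or to guide attention). The sketch below covers only that lookup, with Euclidean distance and toy features assumed for illustration; the captioning model itself is omitted.

```python
import numpy as np

# Hedged sketch of the retrieval-augmentation lookup: return the target of the
# nearest training example for a given query feature vector.
def nearest_target(query_feat, train_feats, train_targets):
    """query_feat: (d,), train_feats: (n, d), train_targets: list of length n."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return train_targets[int(dists.argmin())]

# Toy image features and captions (illustrative values).
train_feats = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
train_caps = ["a dog runs on grass", "a boat on a lake", "a child on a swing"]
print(nearest_target(np.array([0.85, 0.15]), train_feats, train_caps))
```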
LCEval: Learned Composite Metric for Caption Evaluation
International Journal of Computer Vision, 2019
Automatic evaluation metrics hold a fundamental importance in the development and fine-grained analysis of captioning systems. While current evaluation metrics tend to achieve an acceptable correlation with human judgements at the system level, they fail to do so at the caption level. In this work, we propose a neural network-based learned metric to improve caption-level caption evaluation. To get a deeper insight into the parameters which impact a learned metric's performance, this paper investigates the relationship between different linguistic features and the caption-level correlation of the learned metrics. We also compare metrics trained with different training examples to measure the variations in their evaluation. Moreover, we perform a robustness analysis, which highlights the sensitivity of learned and handcrafted metrics to various sentence perturbations. Our empirical analysis shows that our proposed metric not only outperforms the existing metrics in terms of caption-level correlation but also shows a strong system-level correlation against human assessments.
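Caption-level correlation, the quantity this paper optimizes for, is simply a rank correlation between a metric's per-caption scores and human ratings. The sketch below shows that evaluation step with Kendall's tau; the scores are made-up placeholders, not values from the paper.

```python
from scipy.stats import kendalltau

# Hedged sketch: caption-level correlation between metric scores and human ratings.
human_scores  = [4.0, 2.0, 5.0, 1.0, 3.0]    # e.g., averaged annotator ratings
metric_scores = [0.61, 0.35, 0.74, 0.30, 0.52]

tau, p_value = kendalltau(human_scores, metric_scores)
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```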
Plex: Towards Reliability using Pretrained Large Model Extensions
Cornell University - arXiv, 2022
A recent trend in artificial intelligence is the use of pretrained models for language and vision tasks, which have achieved extraordinary performance but also puzzling failures. Probing these models' abilities in diverse ways is therefore critical to the field. In this paper, we explore the reliability of models, where we define a reliable model as one that not only achieves strong predictive performance but also performs well consistently over many decision-making tasks involving uncertainty (e.g., selective prediction, open set recognition), robust generalization (e.g., accuracy and proper scoring rules such as log-likelihood on in- and out-of-distribution datasets), and adaptation (e.g., active learning, few-shot uncertainty). We devise 10 types of tasks over 40 datasets in order to evaluate different aspects of reliability on both vision and language domains. To improve reliability, we develop ViT-Plex and T5-Plex, pretrained large model extensions (Plex) for vision and language modalities, respectively. Plex greatly improves the state-of-the-art across reliability tasks and simplifies the traditional protocol, as it improves out-of-the-box performance and does not require designing scores or tuning the model for each task. We demonstrate scaling effects over model sizes up to 1B parameters and pretraining dataset sizes up to 4B examples. We also demonstrate Plex's capabilities on challenging tasks including zero-shot open set recognition, active learning, and uncertainty in conversational language understanding.
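One of the reliability tasks named above, selective prediction, is easy to state concretely: the model abstains whenever its confidence falls below a threshold, and accuracy is reported on the retained examples at each coverage level. The sketch below is a generic illustration of that evaluation, not Plex's protocol; the confidence values and correctness labels are assumed toy data.

```python
import numpy as np

# Hedged sketch of selective prediction: report coverage and accuracy on the
# examples whose confidence clears a given threshold.
def selective_accuracy(confidences, correct, threshold):
    keep = confidences >= threshold
    coverage = keep.mean()
    accuracy = correct[keep].mean() if keep.any() else float("nan")
    return coverage, accuracy

confidences = np.array([0.95, 0.60, 0.80, 0.55, 0.99, 0.70])
correct     = np.array([1,    0,    1,    0,    1,    1   ])

for t in (0.5, 0.75, 0.9):
    cov, acc = selective_accuracy(confidences, correct, t)
    print(f"threshold={t:.2f}  coverage={cov:.2f}  accuracy={acc:.2f}")
```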
Data Augmentation to Stabilize Image Caption Generation Models in Deep Learning
International Journal of Advanced Computer Science and Applications
Automatic image caption generation is a challenging AI problem, since it requires techniques from several computer science domains such as computer vision and natural language processing. Deep learning techniques have demonstrated outstanding results in many different applications. Data augmentation, which increases the amount and variety of training data available to learning models without the burden of collecting new data, is a promising direction in deep learning. Generating a textual description for a given image is a challenging task for computers. Nowadays, deep learning plays a significant role in processing visual data with the help of Convolutional Neural Networks (CNNs). In this study, CNNs are employed to train prediction models that help in automatic image caption generation. The proposed method utilizes data augmentation to overcome the instability of well-known image caption generation models. The Flickr8k dataset is used in the experimental work of this study, and the BLEU score is applied to evaluate the reliability of the proposed method. The results clearly show the stability of the outcomes generated by the proposed method when compared to others.
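A typical augmentation pipeline of the kind this study relies on can be sketched with standard image transforms; the exact augmentations used in the paper are not reproduced here, and the flip/rotation/color-jitter settings below are assumptions. In practice each photo would be augmented while its reference captions are kept unchanged, yielding several (image, caption) training pairs.

```python
from PIL import Image
import numpy as np
from torchvision import transforms

# Hedged sketch: a generic augmentation pipeline for the CNN encoder's inputs.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

# Toy image standing in for a Flickr8k photo.
image = Image.fromarray(np.uint8(np.random.rand(224, 224, 3) * 255))
augmented_views = [augment(image) for _ in range(4)]
print(len(augmented_views), augmented_views[0].size)
```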
NNEval: Neural Network Based Evaluation Metric for Image Captioning
Lecture Notes in Computer Science, 2018
The automatic evaluation of image descriptions is an intricate task, and it is highly important in the development and fine-grained analysis of captioning systems. Existing metrics to automatically evaluate image captioning systems fail to achieve a satisfactory level of correlation with human judgements at the sentence level. Moreover, these metrics, unlike humans, tend to focus on specific aspects of quality, such as n-gram overlap or semantic meaning. In this paper, we present the first learning-based metric to evaluate image captions. Our proposed framework enables us to incorporate both lexical and semantic information into a single learned metric. This results in an evaluator that takes into account various linguistic features to assess the caption quality. The experiments we performed to assess the proposed metric show improvements upon the state of the art in terms of correlation with human judgements and demonstrate its superior robustness to distractions.
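As a rough illustration of a neural, learning-based caption metric of this kind (not NNEval's actual architecture or features), a small feed-forward network can be trained on lexical and semantic scores to separate human from machine captions, with its output probability serving as the metric value. The feature layout, network size, and toy data below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hedged sketch: a tiny neural network over lexical/semantic features whose
# output probability is read as a caption-quality score.
X = np.array([[0.80, 0.70, 0.85],   # columns: n-gram overlap, METEOR-like, semantic sim
              [0.20, 0.30, 0.40],
              [0.75, 0.65, 0.80],
              [0.15, 0.25, 0.35]])
y = np.array([1, 0, 1, 0])          # 1 = human caption, 0 = machine caption

metric_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                           random_state=0).fit(X, y)
print(metric_net.predict_proba(np.array([[0.55, 0.50, 0.60]]))[:, 1])
```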
Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images
2023
As AI models are increasingly deployed in critical applications, ensuring the consistent performance of models when exposed to unusual situations, such as out-of-distribution (OOD) or perturbed data, is important. Therefore, this paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with such data. Our approach includes three experiments. First, we used the pretrained models to classify OOD images generated via DALL-E to assess their performance. Second, we built an ensemble from the models' predictions using probabilistic averaging for consensus, due to its advantages over plurality or majority voting. The ensemble's uncertainty was quantified using average probabilities, variance, and entropy metrics. Our results showed that while ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images. Third, we tested model robustness by adding perturbations (filters, rotations, etc.) to new epistemic images from DALL-E or real-world captures. ResNet-50 was chosen for this experiment as it was the best-performing model. While it classified 4 out of 5 unperturbed images correctly, it misclassified all of them after perturbation, indicating a significant vulnerability. These misclassifications, which are obvious to human observers, highlight the limitations of AI models. Using saliency maps, we identified the regions of the images that the model considered important for its decisions.
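The ensemble step described above (probabilistic averaging, then variance and entropy as uncertainty summaries) is straightforward to write down. The sketch below is a generic illustration of those three quantities for a single image; the probability values are toy numbers, not results from the paper.

```python
import numpy as np

# Hedged sketch: average the member models' class probabilities, then summarize
# uncertainty with the variance across members and the entropy of the average.
def ensemble_uncertainty(member_probs):
    """member_probs: (n_models, n_classes) softmax outputs for one image."""
    mean_probs = member_probs.mean(axis=0)        # probabilistic averaging
    variance = member_probs.var(axis=0).mean()    # disagreement between models
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum()
    return mean_probs, variance, entropy

# Five models (e.g., ResNet-50, VGG16, ...) scoring one OOD image over 3 classes.
probs = np.array([[0.70, 0.20, 0.10],
                  [0.60, 0.30, 0.10],
                  [0.80, 0.10, 0.10],
                  [0.50, 0.40, 0.10],
                  [0.65, 0.25, 0.10]])
mean_probs, var, ent = ensemble_uncertainty(probs)
print(mean_probs.argmax(), var, ent)
```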
Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions
2020
Deep learning models achieve high predictive accuracy across a broad spectrum of tasks, but rigorously quantifying their predictive uncertainty remains challenging. Usable estimates of predictive uncertainty should (1) cover the true prediction targets with high probability, and (2) discriminate between high- and low-confidence prediction instances. Existing methods for uncertainty quantification are based predominantly on Bayesian neural networks; these may fall short of (1) and (2) -- i.e., Bayesian credible intervals do not guarantee frequentist coverage, and approximate posterior inference undermines discriminative accuracy. In this paper, we develop the discriminative jackknife (DJ), a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals. The DJ satisfies (1) and (2), is applicable to a wide range of deep learning models, is easy to implement, and can be ...
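The jackknife idea at the heart of this abstract can be illustrated with a plain leave-one-out construction: form leave-one-out residuals and use their quantile to widen a predictive interval. The paper avoids the explicit retraining shown below by approximating leave-one-out models with influence functions; this sketch uses a simple linear model and exhaustive retraining purely for illustration, with all data generated synthetically.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hedged sketch: leave-one-out (jackknife-style) predictive interval for a new point.
def jackknife_interval(X, y, x_new, alpha=0.1):
    residuals, loo_preds = [], []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        model = LinearRegression().fit(X[mask], y[mask])          # retrain without point i
        residuals.append(abs(y[i] - model.predict(X[i:i + 1])[0]))  # held-out residual
        loo_preds.append(model.predict(x_new.reshape(1, -1))[0])
    q = np.quantile(residuals, 1 - alpha)
    center = np.mean(loo_preds)
    return center - q, center + q

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = X @ np.array([1.5, -0.5]) + rng.normal(scale=0.3, size=40)
print(jackknife_interval(X, y, x_new=np.array([0.5, 0.5])))
```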