An Attention Matrix for Every Decision: Faithfulness-based Arbitration Among Multiple Attention-Based Interpretations of Transformers in Text Classification

Related papers:

Martin Tutek. "Toward Practical Usage of the Attention Mechanism as a Tool for Interpretability." IEEE Access.
Rishabh Bhardwaj. "More Identifiable yet Equally Performant Transformers for Text Classification." arXiv, 2021.
Sanjiv Das. "On the Lack of Robust Interpretability of Neural Text Classifiers." 2021.
Gaurav Tomar. "Attention Interpretability Across NLP Tasks." arXiv, 2019.
Abir Rahali. "End-to-End Transformer-Based Models in Textual-Based NLP." AI.
Vladimir Mikulik. "Tracr: Compiled Transformers as a Laboratory for Interpretability." arXiv, 2023.
Christopher Crick. "Interpreting Convolutional Networks Trained on Textual Data." 2021.
Sanchari Sen. "AxFormer: Accuracy-driven Approximation of Transformers for Faster, Smaller and more Accurate NLP Models." arXiv, 2020.
Jan Veldsink. "Looking Deeper into Deep Learning Model: Attribution-based Explanations of TextCNN." arXiv, 2018.
Sanchit Sinha. "Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing." 2021.
Luca Bacco. "Extractive Summarization for Explainable Sentiment Analysis using Transformers." 2021.
Joshua Bensemann. "AbductionRules: Training Transformers to Explain Unexpected Inputs." Findings of the Association for Computational Linguistics: ACL 2022.
Biplob Biswas. "TransICD: Transformer Based Code-Wise Attention Model for Explainable ICD Coding." Artificial Intelligence in Medicine, 2021.
Shuoran Jiang. "BaSFormer: A Balanced Sparsity Regularized Attention Network for Transformer." 2023.
Hanqi Yan. "Hierarchical Interpretation of Neural Text Classification." 2022.
Andres Carvallo. "Stress Test Evaluation of Transformer-based Models in Natural Language Understanding Tasks." arXiv, 2020.
Phong Le. "DoLFIn: Distributions over Latent Features for Interpretability." Proceedings of the 28th International Conference on Computational Linguistics, 2020.
Huy Vu. "Empirical Evaluation of Pre-trained Transformers for Human-Level NLP: The Role of Sample Size and Dimensionality." Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021.
Mohammed Saadat Khaleel. "IDC: Quantitative Evaluation Benchmark of Interpretation Methods for Deep Text Classification Models." 2021.
Sonia-Teodora Cibu. "Commonsense Statements Identification and Explanation with Transformer-based Encoders." Proceedings of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, 2020.
Duc Hung Nguyen. "A Study of the Plausibility of Attention between RNN Encoders in Natural Language Inference." 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), 2021.
Nikita Makarov. "Investigation of Transformer-based Latent Attention Models for Neural Machine Translation." 2020.
Sune Darkner. "Revisiting Transformer-based Models for Long Document Classification." arXiv, 2022.
"Comparative Analysis of Transformer Based Language Models." Computer Science & Information Technology (CS & IT) Computer Science Conference Proceedings (CSCP).
Rohan Kumar Yadav. "Enhancing Attention’s Explanation Using Interpretable Tsetlin Machine." Algorithms.
Anthony Gillioz. "Overview of the Transformer-based Models for NLP Tasks." Proceedings of the 2020 Federated Conference on Computer Science and Information Systems, 2020.
Sofía Yaneth Marquina Serrano. "Is Attention Interpretable?" Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
Herman Sugiharto. "RCMHA: Relative Convolutional Multi-Head Attention for Natural Language Modelling." arXiv, 2023.
Josiah Poon. "Local Interpretations for Explainable Natural Language Processing: A Survey." arXiv, 2021.
IRJET Journal. "Exploring the Role of Transformers in NLP: From BERT to GPT-3." IRJET, 2023.
Jesus Villalba. "Hierarchical Transformers for Long Document Classification." 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2019.
Maria Trusca. "Explaining a neural attention model for aspect-based sentiment classification using diagnostic classification." Proceedings of the 36th Annual ACM Symposium on Applied Computing, 2021.
Xueda Liu. "An Introductory Survey on Attention Mechanisms in NLP Problems."