BERTops: Studying BERT Representations under a Topological Lens

BERT Probe: A python package for probing attention based robustness evaluation of BERT models

Mahnoor Shahid

Software Impacts

FireBERT: Hardening BERT-based classifiers against adversarial attack

Gunnar Mein

2020

A Primer in BERTology: What We Know About How BERT Works

Olga Kovaleva

Transactions of the Association for Computational Linguistics, 2020

Lessons Learned from Applying off-the-shelf BERT: There is no Silver Bullet

Victor Makarenkov

2020

White-Box Attacks on Hate-speech BERT Classifiers in German with Explicit and Implicit Character Level Defense

Mahnoor Shahid

arXiv, 2021

BERT's output layer recognizes all hidden layers? Some Intriguing Phenomena and a simple way to boost BERT

Wei-tsung Kao

arXiv, 2020

BBAEG: Towards BERT-based Biomedical Adversarial Example Generation for Text Classification

Ishani Mondal

Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

An Interpretability Illusion for BERT

Fernanda Viégas

2021

BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks

Julien Kloetzer

Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

TAG: Gradient Attack on Transformer-based Language Models

Caiwen Ding

Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

CNN-Trans-Enc: A CNN-Enhanced Transformer-Encoder On Top Of Static BERT representations for Document Classification

Charaf Eddine Benarab

arXiv, 2022

Thieves on Sesame Street! Model Extraction of BERT-based APIs

Gaurav Tomar

2020

ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples

Changki Lee

Proceedings of the 13th International Workshop on Semantic Evaluation

TiltedBERT: Resource Adjustable Version of BERT

Mohammad Sharifkhani

2022

LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?

Emily Öhman

2020

AILAB-Udine@SMM4H 22: Limits of Transformers and BERT Ensembles

Emmanuele Chersoni

arXiv, 2022

Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples

Hezekiah Branch

2022

Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages

Federico Ranaldi

arXiv, 2023

Arabic Synonym BERT-based Adversarial Examples for Text Classification

Esma Wali

arXiv, 2024

Diagnosing BERT with Retrieval Heuristics

Arthur Câmara

Lecture Notes in Computer Science, 2020

Fusing Label Embedding into BERT: An Efficient Improvement for Text Classification

Yijin Xiong

2021

CyBERT: Cybersecurity Claim Classification by Fine-Tuning the BERT Language Model

Juan Andres Pacheco Lopez

Journal of Cybersecurity and Privacy, 2021

Augmenting BERT Carefully with Underrepresented Linguistic Features

Jekaterina Novikova

arXiv, 2020

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Naman Goyal

arXiv, 2019

Con-Detect: Detecting Adversarially Perturbed Natural Language Inputs to Deep Classifiers Through Holistic Analysis

Junaid Qadir

2022

Rethinking of BERT Sentence Embedding for Text Classification

Mona Farouk

Research Square, 2024

UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information

Harish Tayyar Madabushi

2020

SuperShaper: Task-Agnostic Super Pre-training of BERT Models with Variable Hidden Dimensions

Vinod Ganesan

arXiv, 2021

TextGrad: Advancing Robustness Evaluation in NLP by Gradient-Driven Optimization

Jinghan Jia

arXiv, 2022

Enhancing Model Robustness by Incorporating Adversarial Knowledge into Semantic Representation

Tianyu Du

ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021

ConvBERT: Improving BERT with Span-based Dynamic Convolution

Daquan Zhou

arXiv, 2020

Adversarial Reprogramming of Sequence Classification Neural Networks

Shlomo Dubnov

arXiv, 2018

DocBERT: BERT for Document Classification

C Mih

Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models’ Transferability

Wei-tsung Kao

Findings of the Association for Computational Linguistics: EMNLP 2021, 2021

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks

Melison Dylan
