Federated Freeze BERT for text classification

Lessons Learned from Applying off-the-shelf BERT: There is no Silver Bullet

Victor Makarenkov

2020

BERTweet: A pre-trained language model for English Tweets

Vo Thanh Vu

Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020

Rethinking of BERT Sentence Embedding for Text Classification

Mona Farouk

Research Square (Research Square), 2024

SuperShaper: Task-Agnostic Super Pre-training of BERT Models with Variable Hidden Dimensions

Vinod Ganesan

arXiv, 2021

SocBERT: A Pretrained Model for Social Media Text

Abeed Sarker

2023

RoBERTa: A Robustly Optimized BERT Pretraining Approach

Naman Goyal

arXiv, 2019

Domain Effect Investigation for BERT Models Fine-Tuned on Different Text Categorization Tasks

Ferhat Bozkurt

Arabian Journal for Science and Engineering, 2023

DocBERT: BERT for Document Classification

C Mih

A Sentence-Level Hierarchical BERT Model for Document Classification with Limited Labelled Data

Jinghui Lu

Discovery Science, 2021

SunBear at WNUT-2020 Task 2: Improving BERT-Based Noisy Text Classification with Knowledge of the Data domain

Linh Bảo

2020

Fusing Label Embedding into BERT: An Efficient Improvement for Text Classification

Yijin Xiong

2021

Multi-pretraining for Large-scale Text Classification

YeaChan Kim

Findings of the Association for Computational Linguistics: EMNLP 2020, 2020

ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples

Changki Lee

Proceedings of the 13th International Workshop on Semantic Evaluation, 2019

Benchmarking Differential Privacy and Federated Learning for BERT Models

Priyam Basu

2021

Practical Text Classification With Large Pre-Trained Language Models

Raul Puri

arXiv, 2018

Large Scale Language Modeling: Converging on 40GB of Text in Four Hours

Raul Puri

2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 2018

Towards Transfer Learning Techniques—BERT, DistilBERT, BERTimbau, and DistilBERTimbau for Automatic Text Classification from Different Languages: A Case Study

Takeo Akabane

Sensors

Comparative Analysis of Pre-trained Models for Natural Language Processing

Priyadharshini Ravichandran

International Journal of Research and Analytical Reviews, 2023

On the Role of Text Preprocessing in BERT Embedding-based DNNs for Classifying Informal Texts

Lindung Parningotan Manik

International Journal of Advanced Computer Science and Applications (IJACSA), 2022

Improving language models by retrieving from trillions of tokens

Roman Ring

arXiv, 2021

DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference

Vladimir Araujo

Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP

FedPETuning: When Federated Learning Meets the Parameter-Efficient Tuning Methods of Pre-trained Language Models

Lizhen Qu

Findings of the Association for Computational Linguistics: ACL 2023, 2023

Attention is Not Always What You Need: Towards Efficient Classification of Domain-Specific Text

Nazim Madhavji

arXiv, 2023

Training Large-Vocabulary Neural Language Models by Private Federated Learning for Resource-Constrained Devices

Anmol Walia

2022

A Feasibility Study to implement Next Word Prediction Model using Federated Learning on Raspberry Pi

Shivani Tomar

2021

Improving the BERT model for long text sequences in question answering domain

Mareeswari Venkatachala

International Journal of Advances in Applied Sciences (IJAAS), 2023

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks

Melison Dylan

Privacy-Preserving Text Classification on BERT Embeddings with Homomorphic Encryption

Garam Lee

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

A Federated Learning approach for text classification using NLP

Md Humaion Kabir Mehedi

Pacific-Rim Symposium on Image and Video Technology, 2022

On Losses for Modern Language Models

Stéphane Aroca-Ouellette, Frank Rudzicz

Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information

Harish Tayyar Madabushi

2020

How to Train BERT with an Academic Budget

Moshe Berchansky

Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Sench Galiedon

HUBERT Untangles BERT to Improve Transfer across NLP Tasks

Paul Smolensky

arXiv, 2019
