Federated Freeze BERT for text classification
Related papers
Lessons Learned from Applying off-the-shelf BERT: There is no Silver Bullet
2020
BERTweet: A pre-trained language model for English Tweets
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, 2020
Rethinking of BERT Sentence Embedding for Text Classification
Research Square, 2024
SuperShaper: Task-Agnostic Super Pre-training of BERT Models with Variable Hidden Dimensions
arXiv, 2021
SocBERT: A Pretrained Model for Social Media Text
2023
RoBERTa: A Robustly Optimized BERT Pretraining Approach
arXiv, 2019
Domain Effect Investigation for Bert Models Fine-Tuned on Different Text Categorization Tasks
Arabian Journal for Science and Engineering, 2023
DocBERT: BERT for Document Classification
A Sentence-Level Hierarchical BERT Model for Document Classification with Limited Labelled Data
Discovery Science, 2021
Fusing Label Embedding into BERT: An Efficient Improvement for Text Classification
2021
Multi-pretraining for Large-scale Text Classification
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020
ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for out-of-domain samples
Proceedings of the 13th International Workshop on Semantic Evaluation, 2019
Benchmarking Differential Privacy and Federated Learning for BERT Models
2021
Practical Text Classification With Large Pre-Trained Language Models
arXiv, 2018
Large Scale Language Modeling: Converging on 40GB of Text in Four Hours
2018 30th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 2018
Comparative Analysis of Pre-trained Models for Natural Language Processing
International Journal of Research and Analytical Reviews, 2023
On the Role of Text Preprocessing in BERT Embedding-based DNNs for Classifying Informal Texts
International Journal of Advanced Computer Science and Applications (IJACSA), 2022
Improving language models by retrieving from trillions of tokens
arXiv, 2021
DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP, 2022
Attention is Not Always What You Need: Towards Efficient Classification of Domain-Specific Text
arXiv, 2023
A Feasibility Study to implement Next Word Prediction Model using Federated Learning on Raspberry Pi
2021
Improving the BERT model for long text sequences in question answering domain
Mareeswari Venkatachala
International Journal of Advances in Applied Sciences (IJAAS), 2023
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks
Privacy-Preserving Text Classification on BERT Embeddings with Homomorphic Encryption
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
A Federated Learning approach for text classification using NLP
Pacific-Rim Symposium on Image and Video Technology, 2022
On Losses for Modern Language Models
Stéphane Aroca-Ouellette, Frank Rudzicz
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020
UoB at SemEval-2020 Task 12: Boosting BERT with Corpus Level Information
2020
How to Train BERT with an Academic Budget
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
HUBERT Untangles BERT to Improve Transfer across NLP Tasks
arXiv, 2019