Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning
Related papers
Active Learning for BERT: An Empirical Study
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2019
Distilling Task-Specific Knowledge from BERT into Simple Neural Networks
arXiv, 2019
Improving Question Answering Performance Using Knowledge Distillation and Active Learning
arXiv, 2021
SuperShaper: Task-Agnostic Super Pre-training of BERT Models with Variable Hidden Dimensions
arXiv, 2021
Phrase-level Active Learning for Neural Machine Translation
2021
Federated Freeze BERT for text classification
Journal of Big Data, 2024
RoBERTa: A Robustly Optimized BERT Pretraining Approach
arXiv, 2019
TiltedBERT: Resource Adjustable Version of BERT
2022
DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference
Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP, 2022
Bangla-BERT: Transformer-Based Efficient Model for Transfer Learning and Language Understanding
IEEE Access, 2022
Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020
Multi-Task Active Learning for Neural Semantic Role Labeling on Low Resource Conversational Corpus
Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, 2018
Deep Active Learning for Named Entity Recognition
Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017
Domain Effect Investigation for Bert Models Fine-Tuned on Different Text Categorization Tasks
Arabian Journal for Science and Engineering, 2023
Factorization-Aware Training of Transformers for Natural Language Understanding on the Edge
Interspeech, 2021
Active Learning for Reducing Labeling Effort in Text Classification Tasks
Communications in Computer and Information Science, 2022
Bag of Experts Architectures for Model Reuse in Conversational Language Understanding
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)
Lessons Learned from Applying off-the-shelf BERT: There is no Silver Bullet
2020
Active learning for deep semantic parsing
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018
Combining active and semi-supervised learning for spoken language understanding
Speech Communication, 2005
Enriched Pre-trained Transformers for Joint Slot Filling and Intent Detection
arXiv, 2020
Accelerating Natural Language Understanding in Task-Oriented Dialog
Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, 2020