Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning

Related papers

Active Learning for BERT: An Empirical Study. Liat Ein-Dor. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Investigating the Effectiveness of Representations Based on Pretrained Transformer-based Language Models in Active Learning for Labelling Text Datasets. Jinghui Lu. arXiv, 2020.

Combining Active Learning and Task Adaptation with BERT for Cost-Effective Annotation of Social Media Datasets. Jens Lemmens. Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin. Proceedings of NAACL-HLT, 2019.

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks. Raphael Tang. arXiv, 2019.

Improving Question Answering Performance Using Knowledge Distillation and Active Learning. Shahin Amiriparian. arXiv, 2021.

SuperShaper: Task-Agnostic Super Pre-training of BERT Models with Variable Hidden Dimensions. Vinod Ganesan. arXiv, 2021.

Phrase-level Active Learning for Neural Machine Translation. Junjie Hu. 2021.

Federated Freeze BERT for text classification. Mona Farouk. Journal of Big Data, 2024.

Framework for Deep Learning-Based Language Models using Multi-task Learning in Natural Language Understanding: A Systematic Literature Review and Future Directions. Mrinal Bachute. IEEE Access.

RoBERTa: A Robustly Optimized BERT Pretraining Approach. Naman Goyal. arXiv, 2019.

TiltedBERT: Resource Adjustable Version of BERT. Mohammad Sharifkhani. 2022.

Query Strategies, Assemble! Active Learning with Expert Advice for Low-resource Natural Language Processing. Luisa Coheur. 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE).

DACT-BERT: Differentiable Adaptive Computation Time for an Efficient BERT Inference. Vladimir Araujo. Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP.

Bangla-BERT: Transformer-Based Efficient Model for Transfer Learning and Language Understanding. Abdullah As Sami. IEEE Access.

Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks. Suchin Gururangan. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020.

Multi-Task Active Learning for Neural Semantic Role Labeling on Low Resource Conversational Corpus. Fariz Ikhwantri. Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP, 2018.

Deep Active Learning for Named Entity Recognition. Zachary Lipton. Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017.

TextPro-AL: An Active Learning Platform for Flexible and Efficient Production of Training Data for NLP Tasks. Manuela Speranza. 2016.

Domain Effect Investigation for Bert Models Fine-Tuned on Different Text Categorization Tasks. Ferhat Bozkurt. Arabian Journal for Science and Engineering, 2023.

Factorization-Aware Training of Transformers for Natural Language Understanding on the Edge. Clement Chung. Interspeech, 2021.

Active Learning for Reducing Labeling Effort in Text Classification Tasks. Pieter Jacobs. Communications in Computer and Information Science, 2022.

Bag of Experts Architectures for Model Reuse in Conversational Language Understanding. Alex Marin. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers).

Lessons Learned from Applying off-the-shelf BERT: There is no Silver Bullet. Victor Makarenkov. 2020.

To Softmax, or not to Softmax: that is the question when applying Active Learning for Transformer Models. Silvio Magino. arXiv, 2022.

BERTAC: Enhancing Transformer-based Language Models with Adversarially Pretrained Convolutional Neural Networks. Julien Kloetzer. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021.

Active learning for deep semantic parsing. Dominique Estival. Proceedings of ACL, 2018.

Combining active and semi-supervised learning for spoken language understanding. Dilek Hakkani-Tür. Speech Communication, 2005.

Enriched Pre-trained Transformers for Joint Slot Filling and Intent Detection. Ivan Koychev. arXiv, 2020.

Improving Named Entity Recognition in Telephone Conversations via Effective Active Learning with Human in the Loop. Xue-Yong Fu. arXiv, 2022.

Accelerating Natural Language Understanding in Task-Oriented Dialog. Ojas Ahuja. Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI, 2020.
