On the Language-specificity of Multilingual BERT and the Impact of Fine-tuning
Related papers
LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-specific BERT?
Proceedings of the Fourteenth Workshop on Semantic Evaluation, 2020
Exploring Linguistic Properties of Monolingual BERTs with Typological Classification among Languages
arXiv, 2023
On the Prunability of Attention Heads in Multilingual BERT
arXiv, 2021
Morphosyntactic probing of multilingual BERT models
Natural Language Engineering, 2022
Multilingual BERT Post-Pretraining Alignment
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
On the ability of monolingual models to learn language-agnostic representations
arXiv, 2021
Distilling the Knowledge of Romanian BERTs Using Multiple Teachers
arXiv, 2021
Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank
Findings of the Association for Computational Linguistics: EMNLP 2020, 2020
Investigating Post-pretraining Representation Alignment for Cross-Lingual Question Answering
Proceedings of the 3rd Workshop on Machine Reading for Question Answering, 2021
Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021
mGPT: Few-Shot Learners Go Multilingual
arXiv, 2022
Exploring Fine-tuning Techniques for Pre-trained Cross-lingual Models via Continual Learning
arXiv, 2020
Domain Effect Investigation for Bert Models Fine-Tuned on Different Text Categorization Tasks
Arabian Journal for Science and Engineering, 2023
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
arXiv, 2020
BERT Based Multilingual Machine Comprehension in English and Hindi
arXiv, 2020
On the Effect of Pretraining Corpora on In-context Learning by a Large-scale Language Model
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
Assessing the Syntactic Capabilities of Transformer-based Multilingual Language Models
2021
Adversarial Neural Networks for Cross-lingual Sequence Tagging
arXiv, 2018
What Does BERT Learn about the Structure of Language?
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
Does BERT really agree? Fine-grained Analysis of Lexical Dependence on a Syntactic Task
Findings of the Association for Computational Linguistics: ACL 2022, 2022
An Interpretability Illusion for BERT
2021
HUBERT Untangles BERT to Improve Transfer across NLP Tasks
arXiv, 2019
M-BERT: Injecting Multimodal Information in the BERT Structure
arXiv, 2019
RoBERTa: A Robustly Optimized BERT Pretraining Approach
arXiv, 2019
An Analysis of Social Biases Present in BERT Variants Across Multiple Languages
arXiv, 2022
Augmenting BERT Carefully with Underrepresented Linguistic Features
arXiv, 2020