Ethan Chau - Academia.edu


Papers by Ethan Chau

Specializing Multilingual Language Models: An Empirical Study

Proceedings of the 1st Workshop on Multilingual Representation Learning, 2021

Parsing with Multilingual BERT, a Small Corpus, and a Small Treebank

Findings of the Association for Computational Linguistics: EMNLP 2020

Pretrained multilingual contextual representations have shown great success, but due to the limits of their pretraining data, their benefits do not apply equally to all language varieties. This presents a challenge for language varieties unfamiliar to these models, whose labeled and unlabeled data is too limited to train a monolingual model effectively. We propose the use of additional language-specific pretraining and vocabulary augmentation to adapt multilingual models to low-resource settings. Using dependency parsing of four diverse low-resource language varieties as a case study, we show that these methods significantly improve performance over baselines, especially in the lowest-resource cases, and demonstrate the importance of the relationship between such models' pretraining data and target language varieties.
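The abstract names two adaptation steps: augmenting the multilingual model's vocabulary with target-language tokens and continuing language-specific pretraining on unlabeled target-language text. The sketch below illustrates what such a pipeline could look like with the Hugging Face Transformers library; it is not the authors' released code, and the model name, token list, file path, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the paper's implementation) of the two adaptation steps the
# abstract describes: vocabulary augmentation followed by continued masked-language-model
# pretraining on target-language text. Names and paths below are placeholders.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Vocabulary augmentation: add frequent target-language wordpieces that the multilingual
# vocabulary lacks, then resize the embedding matrix so the new tokens get trainable vectors.
new_tokens = ["example_subword_1", "example_subword_2"]  # hypothetical entries
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))

# Language-specific pretraining: continue MLM training on unlabeled target-language text.
# "target_lang.txt" is a placeholder path to a plain-text corpus.
raw = load_dataset("text", data_files={"train": "target_lang.txt"})
tokenized = raw.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="adapted-mbert", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```

After adaptation, the resulting checkpoint would serve as the encoder for a downstream task such as the dependency parsing case study described in the abstract.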
