Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust


Abstract

This paper presents the results of the WMT22 Metrics Shared Task. Participants submitting automatic MT evaluation metrics were asked to score the outputs of the translation systems competing in the WMT22 News Translation Task on four different domains: news, social, ecommerce, and chat. All metrics were evaluated on how well they correlate with human ratings at the system and segment level. As in last year’s edition, we acquired our own human ratings through expert-based evaluation via Multidimensional Quality Metrics (MQM). This setup had several advantages: (i) expert-based evaluation is more reliable, and (ii) we extended the pool of translations with five additional translations based on MBR decoding or rescoring, which are challenging for current metrics. In addition, we initiated a challenge set subtask, in which participants created contrastive test suites for evaluating metrics’ ability to capture and penalise specific types of translation errors. Finally, we present an extensive analysis of how well metrics perform on three language pairs: English to German, English to Russian, and Chinese to English. The results demonstrate the superiority of neural-based learned metrics and show once again that overlap metrics such as BLEU, spBLEU, and chrF correlate poorly with human ratings. The results also reveal that neural-based metrics are remarkably robust across different domains and challenges.

Anthology ID:

2022.wmt-1.2

Volume:

Proceedings of the Seventh Conference on Machine Translation (WMT)

Month:

December

Year:

2022

Address:

Abu Dhabi, United Arab Emirates (Hybrid)

Editors:

Philipp Koehn, Loïc Barrault, Ondřej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Tom Kocmi, André Martins, Makoto Morishita, Christof Monz, Masaaki Nagata, Toshiaki Nakazawa, Matteo Negri, Aurélie Névéol, Mariana Neves, Martin Popel, Marco Turchi, Marcos Zampieri

Venue:

WMT

SIG:

SIGMT

Publisher:

Association for Computational Linguistics

Pages:

46–68

URL:

https://aclanthology.org/2022.wmt-1.2

Cite (ACL):

Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022. Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.

Cite (Informal):

Results of WMT22 Metrics Shared Task: Stop Using BLEU – Neural Metrics Are Better and More Robust (Freitag et al., WMT 2022)

PDF:

https://aclanthology.org/2022.wmt-1.2.pdf