Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain
Abstract
This paper presents the results of the WMT21 Metrics Shared Task. Participants were asked to score the outputs of the translation systems competing in the WMT21 News Translation Task with automatic metrics on two different domains: news and TED talks. All metrics were evaluated on how well they correlate with human ratings at the system and segment level. In contrast to previous years’ editions, this year we acquired our own human ratings based on expert-based human evaluation via Multidimensional Quality Metrics (MQM). This setup had several advantages: (i) expert-based evaluation has been shown to be more reliable, (ii) we were able to evaluate all metrics on two different domains using translations of the same MT systems, (iii) we added 5 additional translations coming from the same system during system development. In addition, we designed three challenge sets that evaluate the robustness of all automatic metrics. We present an extensive analysis of how well metrics perform on three language pairs: English to German, English to Russian, and Chinese to English. We further show the impact of different reference translations on reference-based metrics and compare our expert-based MQM annotation with the DA scores acquired by WMT.
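The central evaluation described in the abstract is the correlation between each automatic metric and the MQM human ratings, computed at both the system and the segment level. The snippet below is a minimal sketch of such a computation, not the shared task's official scripts; the arrays, score values, and the convention that MQM penalties are negated so that higher means better are all illustrative assumptions.

```python
# Illustrative sketch: correlating a metric's scores with MQM human ratings.
# NOT the official WMT21 Metrics Task evaluation code; data below is made up.
import numpy as np
from scipy.stats import pearsonr, kendalltau

# Hypothetical scores: rows are segments, columns are MT systems.
metric_scores = np.array([[0.71, 0.64, 0.80],
                          [0.55, 0.60, 0.75],
                          [0.62, 0.58, 0.78]])
# MQM penalties negated so that higher = better (an assumed convention here).
mqm_scores = np.array([[-1.0, -2.0, -0.5],
                       [-3.0, -2.5, -1.0],
                       [-2.0, -2.0, -0.5]])

# System level: average each column over segments, then correlate across systems.
sys_metric = metric_scores.mean(axis=0)
sys_mqm = mqm_scores.mean(axis=0)
print("system-level Pearson r:", pearsonr(sys_metric, sys_mqm)[0])

# Segment level: rank correlation over all (segment, system) score pairs.
print("segment-level Kendall tau:",
      kendalltau(metric_scores.ravel(), mqm_scores.ravel())[0])
```

In this sketch, Pearson correlation captures how well the metric ranks whole systems, while Kendall's tau captures agreement with human judgements on individual translations; the shared task reports analogous system- and segment-level statistics.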
Anthology ID:
2021.wmt-1.73
Volume:
Proceedings of the Sixth Conference on Machine Translation
Month:
November
Year:
2021
Address:
Online
Editors:
Loic Barrault, Ondrej Bojar, Fethi Bougares, Rajen Chatterjee, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Alexander Fraser, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Paco Guzman, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Tom Kocmi, Andre Martins, Makoto Morishita, Christof Monz
Venue:
WMT
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
733–774
Language:
URL:
https://aclanthology.org/2021.wmt-1.73
DOI:
Bibkey:
Cite (ACL):
Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ondřej Bojar. 2021. Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain. In Proceedings of the Sixth Conference on Machine Translation, pages 733–774, Online. Association for Computational Linguistics.
Cite (Informal):
Results of the WMT21 Metrics Shared Task: Evaluating Metrics with Expert-based Human Evaluations on TED and News Domain (Freitag et al., WMT 2021)
PDF:
https://aclanthology.org/2021.wmt-1.73.pdf