When Does Translation Require Context? A Data-driven, Multilingual Exploration
ACL · Sep 15, 2021 · Best Resource Paper
Although proper handling of discourse significantly contributes to the
quality of machine translation (MT), these improvements are not adequately
measured in common translation quality metrics. Recent work in context-aware
MT attempts to target a small set of discourse phenomena during evaluation,
but does not do so in a fully systematic way. In this paper, we develop the
Multilingual Discourse-Aware (MuDA) benchmark, a series of taggers that
identify and evaluate model performance on discourse phenomena in any given
dataset. The choice of phenomena is inspired by a novel methodology to
systematically identify translations requiring context. We confirm the
difficulty of previously studied phenomena while uncovering others that were
previously unaddressed. We find that common context-aware MT models make only
marginal improvements over context-agnostic models, suggesting that they do not
handle these ambiguities effectively. We release code and data for 14
language pairs to encourage the MT community to focus on accurately capturing
discourse phenomena.
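
For illustration only, below is a rough sketch of what a phenomenon tagger in the spirit of this benchmark might look like. It is not the released MuDA code: the language pair (English-French), the pronoun lists, and the function name are assumptions made for the example, which simply flags gendered French target pronouns whose English source pronoun is ambiguous without surrounding context.

```python
# Hypothetical sketch (not the released MuDA API): a minimal "tagger" that flags
# target-side pronouns whose English source pronoun is ambiguous without discourse
# context, e.g. "it" -> "il"/"elle" in French.

AMBIGUOUS_EN_PRONOUNS = {"it", "they", "them"}          # underspecified in English
FR_GENDERED_PRONOUNS = {"il", "elle", "ils", "elles"}   # gender must come from context


def tag_ambiguous_pronouns(src: str, tgt: str) -> list[tuple[int, str]]:
    """Return (token index, token) pairs in the target that likely required context."""
    src_tokens = {tok.strip(".,!?").lower() for tok in src.split()}
    tagged = []
    for i, tok in enumerate(tgt.split()):
        clean = tok.strip(".,!?").lower()
        # Flag a gendered target pronoun only if the source used an ambiguous one,
        # i.e. the gender cannot be recovered from the current sentence alone.
        if clean in FR_GENDERED_PRONOUNS and AMBIGUOUS_EN_PRONOUNS & src_tokens:
            tagged.append((i, tok))
    return tagged


if __name__ == "__main__":
    src = "It is very fast."
    tgt = "Elle est très rapide."
    print(tag_ambiguous_pronouns(src, tgt))  # [(0, 'Elle')]
```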