InfoLM: A New Metric to Evaluate Summarization & Data2Text Generation
AAAI · Dec 2, 2021 · Outstanding Student Paper
Assessing the quality of natural language generation systems through human
annotation is very expensive. Additionally, human annotation campaigns are
time-consuming and involve non-reusable human labour. In practice, researchers
rely on automatic metrics as a proxy for quality. In the last decade, many
string-based metrics (e.g., BLEU) have been introduced. However, such metrics
usually rely on exact matches and thus do not robustly handle synonyms. In
this paper, we introduce InfoLM, a family of untrained metrics that can be
viewed as a string-based metric that addresses the aforementioned flaws thanks
to a pre-trained masked language model. This family of metrics also makes use
of information measures, allowing InfoLM to be adapted to various evaluation
criteria. Using direct assessment, we demonstrate that InfoLM achieves
statistically significant improvements and over 10 points of correlation gains
in many configurations on both summarization and data2text generation.
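To give a concrete feel for the idea, here is a minimal sketch of an InfoLM-style score built on a Hugging Face masked language model: each token of a sentence is masked in turn, the model's predictive distributions at the masked positions are averaged into a single distribution over the vocabulary, and an information measure compares the candidate's distribution to the reference's. The model name, the helper functions, the uniform (rather than importance-weighted) averaging, and the choice of KL divergence are illustrative assumptions, not the paper's exact formulation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Any pre-trained masked LM works; this particular checkpoint is an assumption.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def aggregated_mlm_distribution(sentence: str) -> torch.Tensor:
    """Mask each token in turn, collect the MLM's predictive distribution at
    the masked position, and average them into one distribution over the
    vocabulary (uniform weights here, for simplicity)."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    dists = []
    # Skip the special tokens at the first and last positions ([CLS]/[SEP]).
    for pos in range(1, input_ids.size(0) - 1):
        masked = input_ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        dists.append(torch.softmax(logits[0, pos], dim=-1))
    return torch.stack(dists).mean(dim=0)


def infolm_like_score(candidate: str, reference: str) -> float:
    """KL divergence between the two aggregated distributions; lower means the
    candidate is closer to the reference. Other information measures can be
    swapped in here to target different evaluation criteria."""
    p = aggregated_mlm_distribution(reference)
    q = aggregated_mlm_distribution(candidate)
    eps = 1e-12  # numerical floor to avoid log(0)
    return float(torch.sum(p * (torch.log(p + eps) - torch.log(q + eps))))


if __name__ == "__main__":
    print(infolm_like_score("A cat was sitting on the mat.",
                            "The cat sat on the mat."))
```

Because the comparison happens between distributions produced by the masked language model rather than between surface strings, paraphrases and synonyms of the reference can still receive high scores, and swapping the divergence is what lets the metric family be tuned to different evaluation criteria.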