Position: Measure Dataset Diversity, Don't Just Claim It
ICML 2024
Abstract
Machine learning (ML) datasets, often perceived as neutral, inherently
encapsulate abstract and disputed social constructs. Dataset curators
frequently employ value-laden terms such as diversity, bias, and quality to
characterize datasets. Despite their prevalence, these terms lack clear
definitions and validation. Our research explores the implications of this
issue by analyzing "diversity" across 135 image and text datasets. Drawing from
the social sciences, we apply principles from measurement theory to identify
considerations and offer recommendations for conceptualizing, operationalizing,
and evaluating diversity in datasets. Our findings have broader implications
for ML research, advocating for a more nuanced and precise approach to handling
value-laden properties in dataset construction.