Scaling Data-Constrained Language Models
NeurIPS · May 25, 2023 · Outstanding Main Track Runner-Up Paper
The current trend of scaling language models involves increasing both
parameter count and training dataset size. Extrapolating this trend suggests
that training dataset size may soon be limited by the amount of text data
available on the internet. Motivated by this limit, we investigate scaling
language models in data-constrained regimes. Specifically, we run a large set
of experiments varying the extent of data repetition and compute budget,
ranging up to 900 billion training tokens and 9 billion parameter models. We
find that with constrained data for a fixed compute budget, training with up to
4 epochs of repeated data yields negligible changes to loss compared to having
unique data. However, with more repetition, the value of adding compute
eventually decays to zero. We propose and empirically validate a scaling law
for compute optimality that accounts for the decreasing value of repeated
tokens and excess parameters. Finally, we experiment with approaches mitigating
data scarcity, including augmenting the training dataset with code data or
removing commonly used filters. Models and datasets from our 400 training runs
are freely available at https://github.com/huggingface/datablations.
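
As a rough illustration of what a scaling law with "decaying value of repeated tokens and excess parameters" can look like, below is a minimal Python sketch. It assumes a Chinchilla-style parametric loss in which repetitions beyond the first epoch contribute an exponentially decaying amount of effective data, and applies the same decay form to parameters. All constants, function names, and the `unique_params` argument are illustrative placeholders for this sketch, not the fitted values or exact parameterization from the paper.

```python
import numpy as np

# Placeholder constants for the sketch (NOT the paper's fitted values).
A, B, E = 400.0, 2000.0, 1.7      # loss-curve coefficients and irreducible loss
ALPHA, BETA = 0.34, 0.28          # exponents on effective parameters / tokens
R_D_STAR, R_N_STAR = 15.0, 5.0    # decay constants for repeated data / excess parameters

def effective_count(total, unique, r_star):
    """Map a total (possibly repeated) count to an effective unique count.

    Repetitions beyond the first pass add value that decays exponentially,
    so the effective count saturates at unique * (1 + r_star).
    """
    repetitions = total / unique - 1.0  # epochs beyond the first
    return unique + unique * r_star * (1.0 - np.exp(-repetitions / r_star))

def predicted_loss(params, tokens, unique_params, unique_tokens):
    """Parametric loss with effective (decayed) parameter and token counts."""
    n_eff = effective_count(params, unique_params, R_N_STAR)
    d_eff = effective_count(tokens, unique_tokens, R_D_STAR)
    return E + A / n_eff**ALPHA + B / d_eff**BETA

# Example: 4 epochs over 100B unique tokens retains most of its effective value
# (~0.93 of 400B), while 40 epochs is sharply diminished (~0.37 of 4T).
print(effective_count(400e9, 100e9, R_D_STAR) / 400e9)
print(effective_count(4000e9, 100e9, R_D_STAR) / 4000e9)
```

The qualitative behavior matches the abstract's findings: a few epochs of repetition cost little, while heavy repetition yields rapidly diminishing returns from additional compute.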