Causal Estimation of Memorisation Profiles
Understanding memorisation in language models has practical and societal
implications, e.g., for studying models' training dynamics or for preventing
copyright infringement. Prior work defines memorisation as the causal effect of
training with an instance on the model's ability to predict that instance. This
definition relies on a counterfactual: observing what would have happened had
the model not seen that instance.
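In symbols (the notation here is ours, not the paper's): write $f_\theta(x)$
for a model's performance on instance $x$, e.g., its log-likelihood under
parameters $\theta$, and $\mathcal{D}$ for the training data; memorisation is
then the contrast
\[
  \operatorname{mem}(x) \;=\;
  \mathbb{E}\bigl[f_\theta(x) \mid x \in \mathcal{D}\bigr]
  \;-\;
  \mathbb{E}\bigl[f_\theta(x) \mid x \notin \mathcal{D}\bigr],
\]
where the second term is the unobservable counterfactual that has to be
estimated.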
Existing methods struggle to provide computationally efficient and accurate
estimates of this counterfactual. Further, they often estimate memorisation for
a model architecture rather than for a specific model instance. This paper
fills an important gap in the literature, proposing a new, principled, and
efficient method to estimate memorisation based on the
difference-in-differences design from econometrics.
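To give the flavour of the design (a minimal sketch under our own assumptions,
not the paper's implementation: we assume per-instance log-likelihoods measured
at checkpoints just before and after a training step t, and the function name
and grouping are ours), the estimator contrasts the change on instances trained
at step t with the change on instances not trained around t:

    import numpy as np

    def did_memorisation(treated_pre, treated_post, control_pre, control_post):
        """Difference-in-differences estimate of memorisation at step t.

        treated_*: log-likelihoods of instances trained on at step t,
        measured just before (pre) and just after (post) step t.
        control_*: log-likelihoods, at the same two checkpoints, of
        instances not trained on around step t.
        """
        treated_change = np.mean(treated_post) - np.mean(treated_pre)
        control_change = np.mean(control_post) - np.mean(control_pre)
        # The control group's change estimates how much the treated
        # instances would have improved anyway (parallel-trends
        # assumption); subtracting it isolates the causal effect of
        # being trained on.
        return treated_change - control_change

    # Toy numbers: both groups improve by 0.5 from training on other data;
    # the treated instances gain an extra 1.0 from being seen directly.
    print(did_memorisation(treated_pre=[-3.0], treated_post=[-1.5],
                           control_pre=[-3.0], control_post=[-2.5]))  # 1.0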
Using this method, we characterise a model's memorisation profile, i.e., its
memorisation trends across training, by observing its behaviour on only a small
set of instances throughout training.
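Under the same assumptions as the sketch above, a memorisation profile is
simply that estimate traced over checkpoints. In the toy below (all names and
numbers are fabricated for illustration), log-likelihoods share a common upward
trend, and the instances "seen" at checkpoint 5 get a lasting boost, so the
profile spikes there and is flat elsewhere:

    import numpy as np

    rng = np.random.default_rng(0)
    n_ckpt, n_inst = 10, 50
    trend = np.cumsum(np.full((n_ckpt, 1), 0.3), axis=0)     # improvement shared by all data
    control = trend + rng.normal(0, 0.05, (n_ckpt, n_inst))  # never trained on
    treated = trend + rng.normal(0, 0.05, (n_ckpt, n_inst))
    treated[5:] += 1.0                                       # lasting boost after being seen at checkpoint 5

    # Memorisation profile: DiD between consecutive checkpoints.
    profile = [(treated[t].mean() - treated[t - 1].mean())
               - (control[t].mean() - control[t - 1].mean())
               for t in range(1, n_ckpt)]
    print(np.round(profile, 2))  # ~1.0 at checkpoint 5, ~0.0 elsewhere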
In experiments with the Pythia model suite, we find that memorisation (i) is stronger and more
persistent in larger models, (ii) is determined by data order and learning
rate, and (iii) has stable trends across model sizes, thus making memorisation
in larger models predictable from smaller ones.