Privacy Auditing with One (1) Training Run
NeurIPS 2023
Abstract
We propose a scheme for auditing differentially private machine learning
systems with a single training run. Our scheme exploits the parallelism of
being able to add or remove multiple training examples independently. We
analyze this
using the connection between differential privacy and statistical
generalization, which avoids the cost of group privacy. Our auditing scheme
requires minimal assumptions about the algorithm and can be applied in the
black-box or white-box setting.
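To make the one-run idea concrete, here is a minimal, hypothetical sketch (not the paper's exact procedure or estimator): each of m "canary" examples is independently included in the training set with probability 1/2, the model is trained once, and the auditor guesses each canary's membership from a per-canary score (e.g. its loss). An accuracy well above chance across the m independent guesses is evidence of privacy leakage. The `toy_score` function below stands in for a real trained model and is purely illustrative.

```python
import random

def one_run_audit(m, score_fn, seed=0):
    """Simulate auditing with a single 'training run': flip an independent
    coin per canary, score each canary, and guess inclusion by comparing
    its score to the median score. Returns the fraction of correct guesses;
    values well above 0.5 suggest privacy leakage."""
    rng = random.Random(seed)
    included = [rng.random() < 0.5 for _ in range(m)]
    scores = [score_fn(i, inc) for i, inc in enumerate(included)]
    # Guess "included" when the score exceeds the median score.
    threshold = sorted(scores)[m // 2]
    guesses = [s > threshold for s in scores]
    correct = sum(g == inc for g, inc in zip(guesses, included))
    return correct / m

# Toy stand-in for a trained model's score: included canaries get slightly
# higher (noisy) scores, mimicking a model that memorizes its training data.
def toy_score(i, inc, rng=random.Random(1)):
    return rng.gauss(1.0 if inc else 0.0, 1.0)

accuracy = one_run_audit(1000, toy_score)
```

Because the m inclusion coins are independent, the m guesses provide parallel evidence from a single run; the paper's analysis via the privacy-generalization connection is what turns such an accuracy into a rigorous lower bound on the privacy parameter, avoiding the group-privacy cost of treating the canaries jointly.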