Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining
ICML · Dec 13, 2022 · Best Paper
The performance of differentially private machine learning can be boosted
significantly by leveraging the transfer learning capabilities of non-private
models pretrained on large public datasets. We critically review this approach.
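The paradigm under review pretrains non-privately on public Web data and then fine-tunes on sensitive data with a differentially private optimizer such as DP-SGD. As a minimal sketch of the private step (the function name and parameters below are illustrative, not from the paper), each example's gradient is clipped to a fixed norm and calibrated Gaussian noise is added to the average:

```python
import numpy as np

def dp_sgd_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One noisy DP-SGD gradient estimate: clip each per-example
    gradient to clip_norm, average, and add Gaussian noise whose
    scale is noise_multiplier * clip_norm / batch_size."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)
```

In the setup the paper critiques, this noisy update is applied only to the fine-tuning data, while the pretraining corpus is treated as "public" and incurs no privacy accounting at all.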
We primarily question whether the use of large Web-scraped datasets should be
viewed as differential-privacy-preserving. We caution that publicizing these
models pretrained on Web data as "private" could lead to harm and erode the
public's trust in differential privacy as a meaningful definition of privacy.
Beyond the privacy considerations of using public data, we further question
the utility of this paradigm. We scrutinize whether existing machine learning
benchmarks are appropriate for measuring the ability of pretrained models to
generalize to sensitive domains, which may be poorly represented in public Web
data. Finally, we note that pretraining has been especially impactful for the
largest available models -- models too large for end users to run on their own
devices. Thus, deploying such models today could be a net loss for privacy, as
it would require outsourcing (private) data to a third party with far greater
compute resources.
We conclude by discussing potential paths forward for the field of private
learning, as public pretraining becomes more popular and powerful.