Private hypothesis selection

Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu

Research output: Contribution to journal › Conference article › peer-review

33 Scopus citations

Abstract

We provide a differentially private algorithm for hypothesis selection. Given samples from an unknown probability distribution P and a set of m probability distributions H, the goal is to output, in an ε-differentially private manner, a distribution from H whose total variation distance to P is comparable to that of the best such distribution (which we denote by α). The sample complexity of our basic algorithm is O((log m)/α² + (log m)/(αε)), representing a minimal cost for privacy when compared to the non-private algorithm. We can also handle infinite hypothesis classes H by relaxing to (ε, δ)-differential privacy. We apply our hypothesis selection algorithm to give learning algorithms for a number of natural distribution classes, including Gaussians, product distributions, sums of independent random variables, piecewise polynomials, and mixture classes. Our hypothesis selection procedure allows us to generically convert a cover for a class to a learning algorithm, complementing known learning lower bounds which are in terms of the packing number of the class. As the covering and packing numbers are often closely related, for constant α, our algorithms achieve the optimal sample complexity for many classes of interest. Finally, we describe an application to private distribution-free PAC learning.
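
To make the abstract's setting concrete, the sketch below (an editor's illustration, not the authors' algorithm) shows one simple way to privatize hypothesis selection for discrete distributions over a finite domain: each pair of hypotheses is compared on its Scheffé set, and a hypothesis is then chosen with the exponential mechanism using its number of pairwise wins as the score. The function names and the caller-supplied sensitivity bound are assumptions of this sketch; the paper obtains the stated O((log m)/α² + (log m)/(αε)) sample complexity with a more carefully calibrated score and analysis than a naive win count.

```python
import numpy as np


def scheffe_wins(samples, hypotheses):
    """Count how many pairwise Scheffé comparisons each hypothesis wins.

    `hypotheses` is an (m, k) array of probability vectors over the finite
    domain {0, ..., k-1}; `samples` is a 1-D integer array drawn from the
    unknown distribution P.
    """
    m, k = hypotheses.shape
    emp = np.bincount(samples, minlength=k) / len(samples)  # empirical pmf
    wins = np.zeros(m)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            # Scheffé set A = {x : h_i(x) > h_j(x)} realizes the TV distance
            # between h_i and h_j.
            A = hypotheses[i] > hypotheses[j]
            # h_i wins if its mass on A is at least as close to the
            # empirical mass on A as h_j's mass is.
            if abs(hypotheses[i, A].sum() - emp[A].sum()) <= abs(
                hypotheses[j, A].sum() - emp[A].sum()
            ):
                wins[i] += 1
    return wins


def private_hypothesis_selection(samples, hypotheses, epsilon, sensitivity, rng=None):
    """Pick a hypothesis index via the exponential mechanism over win counts.

    `sensitivity` is a caller-supplied bound on how much one sample can change
    any single win count; it is an assumption of this sketch, not the refined
    score/sensitivity analysis used in the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    wins = scheffe_wins(samples, hypotheses)
    # Exponential mechanism: Pr[output i] ∝ exp(ε * wins_i / (2 * sensitivity)).
    logits = epsilon * wins / (2.0 * sensitivity)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(len(hypotheses), p=probs)


if __name__ == "__main__":
    # Toy usage: three hypotheses over a 4-element domain, data drawn from the first.
    rng = np.random.default_rng(0)
    H = np.array([
        [0.70, 0.10, 0.10, 0.10],
        [0.25, 0.25, 0.25, 0.25],
        [0.10, 0.10, 0.10, 0.70],
    ])
    data = rng.choice(4, size=2000, p=H[0])
    idx = private_hypothesis_selection(data, H, epsilon=1.0, sensitivity=len(H) - 1, rng=rng)
    print("selected hypothesis:", idx)
```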

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 32
State: Published - 2019
Event: 33rd Annual Conference on Neural Information Processing Systems, NeurIPS 2019 - Vancouver, Canada
Duration: Dec 8 2019 – Dec 14 2019

Bibliographical note

Funding Information:
The authors would like to thank Shay Moran for bringing to their attention the application to PAC learning mentioned in the supplement, Jonathan Ullman for asking questions which motivated Remark 1, and Clément Canonne for assistance in reducing the constant factor in the approximation guarantee. This work was done while the authors were all affiliated with the Simons Institute for the Theory of Computing. MB was supported by a Google Research Fellowship, as part of the Simons-Berkeley Research Fellowship program. GK was supported by a Microsoft Research Fellowship, as part of the Simons-Berkeley Research Fellowship program, and the work was also partially done while visiting Microsoft Research, Redmond. TS was supported by a Patrick J. McGovern Research Fellowship, as part of the Simons-Berkeley Research Fellowship program. ZSW was supported in part by a Google Faculty Research Award, a J.P. Morgan Faculty Award, and a Facebook Research Award.

Publisher Copyright:
© 2019 Neural information processing systems foundation. All rights reserved.
