The Reusable Holdout: Preserving Validity in Adaptive Data Analysis. Dwork, C., Feldman, V., Hardt, M., Pitassi, T., Reingold, O., & Roth, A. (2015). Science, 349(6248):636–638.
[Editor's summary: Testing hypotheses privately]

Large data sets offer a vast scope for testing already-formulated ideas and exploring new ones. Unfortunately, researchers who attempt to do both on the same data set run the risk of making false discoveries, even when testing and exploration are carried out on distinct subsets of data. Based on ideas drawn from differential privacy, Dwork et al. now provide a theoretical solution. Ideas are tested against aggregate information, whereas individual data set components remain confidential. Preserving that privacy also preserves statistical inference validity.

[Abstract]

Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses.
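To make the mechanism concrete, below is a minimal Python sketch of a Thresholdout-style validator in the spirit of the algorithm the paper introduces. It is illustrative only: the function name make_thresholdout, the default threshold and noise scale, the choice of Laplace noise, and the omission of the paper's budget accounting are assumptions of this sketch, not the authors' implementation.

import numpy as np

def make_thresholdout(holdout, threshold=0.04, sigma=0.01, rng=None):
    # Returns a validator that answers adaptively chosen queries against the
    # holdout set while limiting how much information about it leaks out.
    # (Illustrative sketch; the paper's budget accounting is omitted.)
    rng = rng if rng is not None else np.random.default_rng()
    # Noisy copy of the threshold; refreshed each time the holdout is actually used.
    noisy_t = threshold + rng.laplace(scale=2 * sigma)

    def validate(phi, training):
        # phi maps a single data point to [0, 1]; we want a reusable estimate of its mean.
        nonlocal noisy_t
        train_mean = float(np.mean([phi(x) for x in training]))
        holdout_mean = float(np.mean([phi(x) for x in holdout]))
        # If training and holdout agree to within the noisy threshold, answer with
        # the training estimate: nothing specific to the holdout is revealed.
        if abs(train_mean - holdout_mean) <= noisy_t + rng.laplace(scale=4 * sigma):
            return train_mean
        # Otherwise the analyst appears to be overfitting: return a noised holdout
        # estimate and refresh the noisy threshold.
        noisy_t = threshold + rng.laplace(scale=2 * sigma)
        return holdout_mean + rng.laplace(scale=sigma)

    return validate

A caller computes its statistic on the training split, passes it to validate, and treats the returned value as the holdout estimate; because the holdout is only ever consulted through the noisy comparison above, the same holdout set can answer many adaptively chosen queries before its validity degrades.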
@article{dworkReusableHoldoutPreserving2015,
  title = {The Reusable Holdout: Preserving Validity in Adaptive Data Analysis},
  author = {Dwork, Cynthia and Feldman, Vitaly and Hardt, Moritz and Pitassi, Toniann and Reingold, Omer and Roth, Aaron},
  date = {2015-08},
  journaltitle = {Science},
  volume = {349},
  pages = {636--638},
  issn = {1095-9203},
  doi = {10.1126/science.aaa9375},
  url = {https://doi.org/10.1126/science.aaa9375},
  abstract = {[Editor's summary: Testing hypotheses privately]

Large data sets offer a vast scope for testing already-formulated ideas and exploring new ones. Unfortunately, researchers who attempt to do both on the same data set run the risk of making false discoveries, even when testing and exploration are carried out on distinct subsets of data. Based on ideas drawn from differential privacy, Dwork et al. now provide a theoretical solution. Ideas are tested against aggregate information, whereas individual data set components remain confidential. Preserving that privacy also preserves statistical inference validity. 

[Abstract]

Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses.},
  keywords = {algorithms,free-software,overfitting,pseudo-random,resampling,statistics,validation},
  number = {6248}
}
