On the Renyi Differential Privacy of the Shuffle Model. Girgis, A. M., Data, D., Diggavi, S., Suresh, A. T., & Kairouz, P. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS '21), pages 2321–2341, New York, NY, USA, 2021. Association for Computing Machinery.
Paper: https://doi.org/10.1145/3460120.3484794
arXiv: https://arxiv.org/abs/2105.05180
Abstract: The central question studied in this paper is Rényi Differential Privacy (RDP) guarantees for general discrete local randomizers in the shuffle privacy model. In the shuffle model, each of the n clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client responses without association to each client. The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and we develop new analysis techniques for deriving our results which could be of independent interest. In applications, such an RDP guarantee is most useful when we use it for composing several private interactions. We numerically demonstrate that, for important regimes, with composition our bound yields an improvement in privacy guarantee by a factor of $8\times$ over the state-of-the-art approximate Differential Privacy (DP) guarantee (with standard composition) for shuffle models. Moreover, combining with Poisson subsampling, our result leads to at least $10\times$ improvement over subsampled approximate DP with standard composition.
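To make the setting concrete, below is a minimal illustrative sketch (not taken from the paper) of the shuffle privacy model described in the abstract: each client applies a local eps0-LDP randomizer, here the standard k-ary randomized response chosen purely as an example, and the shuffler forwards a uniformly random permutation of the reports. The rdp_to_dp helper only applies the standard, well-known RDP facts (RDP guarantees at a fixed order alpha add under composition, and eps(delta) = eps_RDP + log(1/delta)/(alpha-1)); the paper's actual RDP bound for the shuffled output is not reproduced here, and the per-round RDP value used at the end is a hypothetical placeholder.

import numpy as np

def krr_randomize(x, k, eps0, rng):
    # k-ary randomized response: a standard eps0-LDP local randomizer on {0, ..., k-1}.
    p_truth = np.exp(eps0) / (np.exp(eps0) + k - 1)
    if rng.random() < p_truth:
        return x
    other = rng.integers(k - 1)          # pick one of the other k-1 symbols uniformly
    return other if other < x else other + 1

def shuffle_model(data, k, eps0, seed=0):
    # Each of the n clients randomizes locally; the shuffler outputs a uniformly random
    # permutation of the reports, so the server sees responses with no client association.
    rng = np.random.default_rng(seed)
    reports = np.array([krr_randomize(x, k, eps0, rng) for x in data])
    rng.shuffle(reports)
    return reports

def rdp_to_dp(rdp_eps_per_round, num_rounds, alpha, delta):
    # RDP composes additively at a fixed order alpha, then converts to (eps, delta)-DP via
    # eps = eps_RDP + log(1/delta) / (alpha - 1)  (Mironov, 2017).
    total_rdp = num_rounds * rdp_eps_per_round
    return total_rdp + np.log(1.0 / delta) / (alpha - 1)

if __name__ == "__main__":
    # Toy usage: n clients with values in a k-ary domain, each report eps0-LDP before shuffling.
    n, k, eps0 = 10_000, 10, 1.0
    data = np.random.default_rng(1).integers(k, size=n)
    print(shuffle_model(data, k, eps0)[:20])
    # Hypothetical per-round RDP value, shown only to illustrate the composition/conversion arithmetic.
    print(rdp_to_dp(rdp_eps_per_round=0.01, num_rounds=100, alpha=8, delta=1e-6))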
@inproceedings{10.1145/3460120.3484794,
author = {Girgis, Antonious M. and Data, Deepesh and Diggavi, Suhas and Suresh, Ananda Theertha and Kairouz, Peter},
title = {On the Renyi Differential Privacy of the Shuffle Model},
year = {2021},
isbn = {9781450384544},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3460120.3484794},
doi = {10.1145/3460120.3484794},
abstract = {The central question studied in this paper is R\'{e}nyi Differential Privacy (RDP) guarantees for general discrete local randomizers in the shuffle privacy model. In the shuffle model, each of the n clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client responses without association to each client. The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and we develop new analysis techniques for deriving our results which could be of independent interest. In applications, such an RDP guarantee is most useful when we use it for composing several private interactions. We numerically demonstrate that, for important regimes, with composition our bound yields an improvement in privacy guarantee by a factor of $8\times$ over the state-of-the-art approximate Differential Privacy (DP) guarantee (with standard composition) for shuffle models. Moreover, combining with Poisson subsampling, our result leads to at least $10\times$ improvement over subsampled approximate DP with standard composition.},
booktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},
pages = {2321–2341},
numpages = {21},
keywords = {distributed learning, renyi divergence, privacy amplification via shuffling, privacy composition, differential privacy},
location = {Virtual Event, Republic of Korea},
series = {CCS '21},
url_arxiv = {https://arxiv.org/abs/2105.05180},
tags = {conf,PDL,DML},
type = {4},
}
{"_id":"HmZuMZSB6MwcYMLaf","bibbaseid":"girgis-data-diggavi-suresh-kairouz-ontherenyidifferentialprivacyoftheshufflemodel-2021","author_short":["Girgis, A. M.","Data, D.","Diggavi, S.","Suresh, A. T.","Kairouz, P."],"bibdata":{"bibtype":"inproceedings","type":"4","author":[{"propositions":[],"lastnames":["Girgis"],"firstnames":["Antonious","M."],"suffixes":[]},{"propositions":[],"lastnames":["Data"],"firstnames":["Deepesh"],"suffixes":[]},{"propositions":[],"lastnames":["Diggavi"],"firstnames":["Suhas"],"suffixes":[]},{"propositions":[],"lastnames":["Suresh"],"firstnames":["Ananda","Theertha"],"suffixes":[]},{"propositions":[],"lastnames":["Kairouz"],"firstnames":["Peter"],"suffixes":[]}],"title":"On the Renyi Differential Privacy of the Shuffle Model","year":"2021","isbn":"9781450384544","publisher":"Association for Computing Machinery","address":"New York, NY, USA","url":"https://doi.org/10.1145/3460120.3484794","doi":"10.1145/3460120.3484794","abstract":"The central question studied in this paper is Rényi Differential Privacy (RDP) guarantees for general discrete local randomizers in the shuffle privacy model. In the shuffle model, each of the n clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client responses without association to each client. The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and we develop new analysis techniques for deriving our results which could be of independent interest. In applications, such an RDP guarantee is most useful when we use it for composing several private interactions. We numerically demonstrate that, for important regimes, with composition our bound yields an improvement in privacy guarantee by a factor of $8times$ over the state-of-the-art approximate Differential Privacy (DP) guarantee (with standard composition) for shuffle models. Moreover, combining with Poisson subsampling, our result leads to at least $10times$ improvement over subsampled approximate DP with standard composition.","booktitle":"Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security","pages":"2321–2341","numpages":"21","keywords":"distributed learning, renyi divergence, privacy amplification via shuffling, privacy composition, differential privacy","location":"Virtual Event, Republic of Korea","series":"CCS '21","url_arxiv":"https://arxiv.org/abs/2105.05180","tags":"conf,PDL,DML","bibtex":"@inproceedings{10.1145/3460120.3484794,\nauthor = {Girgis, Antonious M. and Data, Deepesh and Diggavi, Suhas and Suresh, Ananda Theertha and Kairouz, Peter},\ntitle = {On the Renyi Differential Privacy of the Shuffle Model},\nyear = {2021},\nisbn = {9781450384544},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {https://doi.org/10.1145/3460120.3484794},\ndoi = {10.1145/3460120.3484794},\nabstract = {The central question studied in this paper is R\\'{e}nyi Differential Privacy (RDP) guarantees for general discrete local randomizers in the shuffle privacy model. In the shuffle model, each of the n clients randomizes its response using a local differentially private (LDP) mechanism and the untrusted server only receives a random permutation (shuffle) of the client responses without association to each client. 
The principal result in this paper is the first direct RDP bounds for general discrete local randomization in the shuffle privacy model, and we develop new analysis techniques for deriving our results which could be of independent interest. In applications, such an RDP guarantee is most useful when we use it for composing several private interactions. We numerically demonstrate that, for important regimes, with composition our bound yields an improvement in privacy guarantee by a factor of $8times$ over the state-of-the-art approximate Differential Privacy (DP) guarantee (with standard composition) for shuffle models. Moreover, combining with Poisson subsampling, our result leads to at least $10times$ improvement over subsampled approximate DP with standard composition.},\nbooktitle = {Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security},\npages = {2321–2341},\nnumpages = {21},\nkeywords = {distributed learning, renyi divergence, privacy amplification via shuffling, privacy composition, differential privacy},\nlocation = {Virtual Event, Republic of Korea},\nseries = {CCS '21},\n url_arxiv = {https://arxiv.org/abs/2105.05180},\n tags = {conf,PDL,DML},\n type = {4},\n}\n\n","author_short":["Girgis, A. M.","Data, D.","Diggavi, S.","Suresh, A. T.","Kairouz, P."],"key":"10.1145/3460120.3484794","id":"10.1145/3460120.3484794","bibbaseid":"girgis-data-diggavi-suresh-kairouz-ontherenyidifferentialprivacyoftheshufflemodel-2021","role":"author","urls":{"Paper":"https://doi.org/10.1145/3460120.3484794"," arxiv":"https://arxiv.org/abs/2105.05180"},"keyword":["distributed learning","renyi divergence","privacy amplification via shuffling","privacy composition","differential privacy"],"metadata":{"authorlinks":{}},"downloads":6,"html":""},"bibtype":"inproceedings","biburl":"https://bibbase.org/network/files/e2kjGxYgtBo8SWSbC","dataSources":["hicKnsKYNEFXC4CgH","jxCYzXXYRqw2fiEXQ","wCByFFrQMyRwfzrJ6","yuqM5ah4HMsTyDrMa","YaM87hGQiepg5qijZ","n9wmfkt5w8CPqCepg","soj2cS6PgG8NPmWGr","FaDBDiyFAJY5pL28h","ycfdiwWPzC2rE6H77"],"keywords":["distributed learning","renyi divergence","privacy amplification via shuffling","privacy composition","differential privacy"],"search_terms":["renyi","differential","privacy","shuffle","model","girgis","data","diggavi","suresh","kairouz"],"title":"On the Renyi Differential Privacy of the Shuffle Model","year":2021,"downloads":6}