Multi-Message Shuffled Privacy in Federated Learning. Girgis, A. M. & Diggavi, S. IEEE Journal on Selected Areas in Information Theory, 5:12-27, 2024.
Abstract: We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. DME has wide applications in both federated learning and analytics. We propose communication-efficient and differentially private algorithms for DME of bounded $\ell_2$-norm and $\ell_\infty$-norm vectors. We analyze our proposed DME schemes, showing that they achieve order-optimal privacy-communication-performance trade-offs. Our algorithms assign unequal privacy budgets to different resolutions of the vector (obtained through binary expansion) and combine this with coordinate sampling. These results are applied directly to obtain guarantees for private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.
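To make the mechanism sketched in the abstract concrete, here is a minimal, illustrative Python sketch of the general idea: binary-expand each coordinate, privatize each bit level with randomized response under an unequal per-level budget, and subsample coordinates. This is an assumption-laden LDP toy, not the paper's MMS mechanism: the function names (`private_mean`, `eps_total`), the geometric budget split, and the domain $[0,1]^d$ (i.e., $\ell_\infty$-bounded vectors) are all choices made here for illustration; the paper's actual scheme and optimal allocation differ.

```python
# Illustrative sketch only: LDP mean estimation of vectors in [0, 1]^d via
# binary expansion, per-bit randomized response with unequal budgets, and
# coordinate sampling. Not the paper's exact MMS mechanism.
import numpy as np

rng = np.random.default_rng(0)

def binary_expansion(x, num_bits):
    """Write each entry of x in [0, 1) as sum_j b_j * 2^{-(j+1)} (truncated)."""
    bits = np.empty((x.size, num_bits), dtype=np.int8)
    frac = x.astype(float).copy()
    for j in range(num_bits):
        frac *= 2.0
        bits[:, j] = frac >= 1.0
        frac -= bits[:, j]
    return bits

def randomized_response(bits, eps):
    """Keep each bit w.p. e^eps / (e^eps + 1), else flip it (eps-LDP per bit)."""
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def private_mean(X, eps_total=4.0, num_bits=3, k=None):
    """Each of n users privatizes k sampled coordinates of its row of X;
    the server debiases each bit level and recombines the expansion."""
    n, d = X.shape
    k = k or d  # number of coordinates each user reports
    # Hypothetical unequal split: budget proportional to the bit's weight,
    # so more significant bits are reported more accurately.
    w = 2.0 ** -np.arange(num_bits)
    eps = eps_total * w / w.sum()
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)

    sums = np.zeros((d, num_bits))
    counts = np.zeros(d)
    for i in range(n):
        coords = rng.choice(d, size=k, replace=False)  # coordinate sampling
        bits = binary_expansion(X[i, coords], num_bits)
        for j in range(num_bits):
            sums[coords, j] += randomized_response(bits[:, [j]], eps[j]).ravel()
        counts[coords] += 1

    counts = np.maximum(counts, 1)
    freq = sums / counts[:, None]                     # noisy bit frequencies
    freq = (freq - (1 - p_keep)) / (2 * p_keep - 1)   # debias each bit level
    return freq @ (2.0 ** -(np.arange(num_bits) + 1))  # recombine expansion

X = rng.random((2000, 8))
err = np.abs(private_mean(X, eps_total=4.0, num_bits=3, k=4) - X.mean(axis=0))
print(err.max())
```

The debiasing step uses that a randomized-response bit has expectation $(2p-1)b + (1-p)$ with $p = e^{\epsilon}/(e^{\epsilon}+1)$, so inverting this affine map gives an unbiased bit-frequency estimate; truncating to `num_bits` levels leaves a deterministic error of at most $2^{-\text{num\_bits}}$ per coordinate.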
@ARTICLE{10436711,
  author    = {Girgis, Antonious M. and Diggavi, Suhas},
  journal   = {IEEE Journal on Selected Areas in Information Theory},
  title     = {Multi-Message Shuffled Privacy in Federated Learning},
  year      = {2024},
  volume    = {5},
  pages     = {12-27},
  abstract  = {We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. The DME has wide applications in both federated learning and analytics. We propose a communication-efficient and differentially private algorithm for DME of bounded $\ell _{2}$ -norm and $\ell _{\infty }$ -norm vectors. We analyze our proposed DME schemes showing that our algorithms have order-optimal privacy-communication-performance trade-offs. Our algorithms are designed by giving unequal privacy assignments at different resolutions of the vector (through binary expansion) and appropriately combining it with coordinate sampling. These results are directly applied to give guarantees on private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.},
  keywords  = {Privacy;Servers;Differential privacy;Federated learning;Numerical models;Optimization;Estimation;Differential privacy;mean estimation;federated learning;shuffled model},
  doi       = {10.1109/JSAIT.2024.3366225},
  issn      = {2641-8770},
  type      = {2},
  tags      = {journal,PFL},
  url_arxiv = {https://doi.org/10.48550/arXiv.2302.11152},
}
{"_id":"DGf375gwdGLq9YxkH","bibbaseid":"girgis-diggavi-multimessageshuffledprivacyinfederatedlearning-2024","author_short":["Girgis, A. M.","Diggavi, S."],"bibdata":{"bibtype":"article","type":"2","author":[{"propositions":[],"lastnames":["Girgis"],"firstnames":["Antonious","M."],"suffixes":[]},{"propositions":[],"lastnames":["Diggavi"],"firstnames":["Suhas"],"suffixes":[]}],"journal":"IEEE Journal on Selected Areas in Information Theory","title":"Multi-Message Shuffled Privacy in Federated Learning","year":"2024","volume":"5","number":"","pages":"12-27","abstract":"We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. The DME has wide applications in both federated learning and analytics. We propose a communication-efficient and differentially private algorithm for DME of bounded $\\ell _{2}$ -norm and $\\ell _{∞ }$ -norm vectors. We analyze our proposed DME schemes showing that our algorithms have order-optimal privacy-communication-performance trade-offs. Our algorithms are designed by giving unequal privacy assignments at different resolutions of the vector (through binary expansion) and appropriately combining it with coordinate sampling. These results are directly applied to give guarantees on private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.","keywords":"Privacy;Servers;Differential privacy;Federated learning;Numerical models;Optimization;Estimation;Differential privacy;mean estimation;federated learning;shuffled model","doi":"10.1109/JSAIT.2024.3366225","issn":"2641-8770","month":"","tags":"journal,PFL","url_arxiv":"https://doi.org/10.48550/arXiv.2302.11152","bibtex":"@ARTICLE{10436711,\n author={Girgis, Antonious M. and Diggavi, Suhas},\n journal={IEEE Journal on Selected Areas in Information Theory}, \n title={Multi-Message Shuffled Privacy in Federated Learning}, \n year={2024},\n volume={5},\n number={},\n pages={12-27},\n abstract={We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. The DME has wide applications in both federated learning and analytics. We propose a communication-efficient and differentially private algorithm for DME of bounded $\\ell _{2}$ -norm and $\\ell _{\\infty }$ -norm vectors. We analyze our proposed DME schemes showing that our algorithms have order-optimal privacy-communication-performance trade-offs. Our algorithms are designed by giving unequal privacy assignments at different resolutions of the vector (through binary expansion) and appropriately combining it with coordinate sampling. These results are directly applied to give guarantees on private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.},\n keywords={Privacy;Servers;Differential privacy;Federated learning;Numerical models;Optimization;Estimation;Differential privacy;mean estimation;federated learning;shuffled model},\n doi={10.1109/JSAIT.2024.3366225},\n ISSN={2641-8770},\n month={},\n type = {2},\n tags = {journal,PFL},\n url_arxiv = {https://doi.org/10.48550/arXiv.2302.11152},\n}\n\n","author_short":["Girgis, A. 
M.","Diggavi, S."],"key":"10436711","id":"10436711","bibbaseid":"girgis-diggavi-multimessageshuffledprivacyinfederatedlearning-2024","role":"author","urls":{" arxiv":"https://doi.org/10.48550/arXiv.2302.11152"},"keyword":["Privacy;Servers;Differential privacy;Federated learning;Numerical models;Optimization;Estimation;Differential privacy;mean estimation;federated learning;shuffled model"],"metadata":{"authorlinks":{}},"downloads":21,"html":""},"bibtype":"article","biburl":"https://bibbase.org/network/files/e2kjGxYgtBo8SWSbC","dataSources":["FaDBDiyFAJY5pL28h","ycfdiwWPzC2rE6H77"],"keywords":["privacy;servers;differential privacy;federated learning;numerical models;optimization;estimation;differential privacy;mean estimation;federated learning;shuffled model"],"search_terms":["multi","message","shuffled","privacy","federated","learning","girgis","diggavi"],"title":"Multi-Message Shuffled Privacy in Federated Learning","year":2024,"downloads":21}