Multi-Message Shuffled Privacy in Federated Learning. Girgis, A. M. & Diggavi, S. IEEE Journal on Selected Areas in Information Theory, 5:12–27, 2024.
We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. DME has wide applications in both federated learning and analytics. We propose a communication-efficient and differentially private algorithm for DME of bounded $\ell_{2}$-norm and $\ell_{\infty}$-norm vectors. We analyze our proposed DME schemes, showing that our algorithms have order-optimal privacy-communication-performance trade-offs. Our algorithms are designed by giving unequal privacy assignments to different resolutions of the vector (through its binary expansion) and appropriately combining this with coordinate sampling. These results directly yield guarantees for private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.
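To make the mechanism sketched in the abstract concrete, here is a minimal Python illustration of its two named ingredients: a binary expansion of each coordinate with an unequal privacy budget per resolution level, combined with coordinate sampling. This is a toy local-model sketch under simplifying assumptions (entries in [0, 1], per-bit randomized response, no shuffler), not the paper's actual MMS scheme; every name and parameter below is hypothetical.

import numpy as np

def private_dme_report(x, eps_levels, sample_frac=0.5, rng=None):
    """Toy sketch: privatize a vector x with entries in [0, 1].

    (i)   Coordinate sampling: each coordinate is reported w.p. sample_frac.
    (ii)  Binary expansion: x_i is split into bits of weight 2^{-(j+1)}.
    (iii) Unequal privacy: bit level j gets its own budget eps_levels[j],
          spent via randomized response (flip w.p. 1/(e^eps + 1)).
    """
    rng = np.random.default_rng() if rng is None else rng
    d, k = len(x), len(eps_levels)
    mask = rng.random(d) < sample_frac          # (i) coordinate sampling
    bits = np.zeros((k, d), dtype=int)          # (ii) greedy binary expansion
    r = np.asarray(x, dtype=float).copy()
    for j in range(k):
        bits[j] = (r >= 2.0 ** -(j + 1)).astype(int)
        r -= bits[j] * 2.0 ** -(j + 1)
    est = np.zeros(d)
    for j, eps in enumerate(eps_levels):        # (iii) per-level randomized response
        p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
        flip = rng.random(d) > p_keep
        noisy = np.where(flip, 1 - bits[j], bits[j])
        # debias: unbiased bit estimate, up to the 2^{-k} truncation of the expansion
        debiased = (noisy - (1 - p_keep)) / (2 * p_keep - 1)
        est += debiased * 2.0 ** -(j + 1)
    # zero out unsampled coordinates and rescale to keep the report unbiased
    # (a real communication-efficient scheme would transmit only sampled coordinates)
    return np.where(mask, est, 0.0) / sample_frac

# usage: the server averages many clients' reports to estimate the mean
rng = np.random.default_rng(0)
clients = rng.random((1000, 8))                 # 1000 clients, d = 8
eps_levels = [4.0, 2.0, 1.0]                    # more budget for coarser bits
reports = np.stack([private_dme_report(c, eps_levels, rng=rng) for c in clients])
print("true mean:", np.round(clients.mean(axis=0), 3))
print("estimate :", np.round(reports.mean(axis=0), 3))

Assigning larger budgets to the high-order bits mirrors the abstract's "unequal privacy assignments" at different resolutions: a flipped coarse bit costs more in estimation error than a flipped fine bit, so the coarse levels get more budget.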
@ARTICLE{10436711,
  author={Girgis, Antonious M. and Diggavi, Suhas},
  journal={IEEE Journal on Selected Areas in Information Theory}, 
  title={Multi-Message Shuffled Privacy in Federated Learning}, 
  year={2024},
  volume={5},
  pages={12-27},
  abstract={We study the distributed mean estimation (DME) problem under privacy and communication constraints in the local differential privacy (LDP) and multi-message shuffled (MMS) privacy frameworks. The DME has wide applications in both federated learning and analytics. We propose a communication-efficient and differentially private algorithm for DME of bounded  $\ell _{2}$ -norm and  $\ell _{\infty }$ -norm vectors. We analyze our proposed DME schemes showing that our algorithms have order-optimal privacy-communication-performance trade-offs. Our algorithms are designed by giving unequal privacy assignments at different resolutions of the vector (through binary expansion) and appropriately combining it with coordinate sampling. These results are directly applied to give guarantees on private federated learning algorithms. We also numerically evaluate the performance of our private DME algorithms.},
  keywords={Privacy;Servers;Differential privacy;Federated learning;Numerical models;Optimization;Estimation;Differential privacy;mean estimation;federated learning;shuffled model},
  doi={10.1109/JSAIT.2024.3366225},
  ISSN={2641-8770},
  type={2},
  tags={journal,PFL},
  url_arxiv={https://doi.org/10.48550/arXiv.2302.11152},
}
