Demystifying Model Averaging for Communication-Efficient Federated Matrix Factorization. Wang, S., Suwandi, R. C., & Chang, T.-H. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, pages 3680–3684, 2021. IEEE.
Federated learning (FL) faces the challenge of training a model over massive and heterogeneous networks. Model averaging (MA) has become a popular FL paradigm in which parallel (stochastic) gradient descent (GD) is run multiple times on a small sampled subset of clients before the local models are uploaded to a server for averaging, and it has proven effective in reducing the communication cost of reaching a good model. However, MA has not been considered for the important matrix factorization (MF) model, which has vast signal processing and machine learning applications. In this paper, we investigate the federated MF problem and propose a new MA-based algorithm, named FedMAvg, by judiciously combining the alternating minimization technique with MA. Through analysis, we show that gradually decreasing the number of local GD steps and allowing only a subset of clients to communicate with the server can greatly reduce the communication cost, especially in heterogeneous networks with non-i.i.d. data. Experimental results from applying FedMAvg to data clustering and item recommendation tasks demonstrate its efficacy in terms of both task performance and communication efficiency.
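
To illustrate the idea described in the abstract, below is a minimal Python/NumPy sketch of a FedMAvg-style communication round, written under my own assumptions rather than from the authors' code: the data matrix is partitioned across clients, each sampled client refits its local factor and runs a few local GD steps on its copy of the shared factor, and the server averages the uploaded copies. All names (e.g., `fedmavg_round`, `local_steps`, `sample_frac`) and the toy data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(X_i, W, local_steps, lr):
    """One client's work in a round: refit its local factor H_i in closed
    form, then take a gradient step on its local copy of the shared factor W."""
    W = W.copy()
    for _ in range(local_steps):
        # Closed-form least-squares update of the local factor H_i (alternating minimization)
        H_i = np.linalg.lstsq(W, X_i, rcond=None)[0]
        # Gradient of 0.5 * ||W H_i - X_i||_F^2 with respect to W, using only local data
        grad_W = (W @ H_i - X_i) @ H_i.T
        W -= lr * grad_W
    return W

def fedmavg_round(client_data, W, local_steps, lr, sample_frac=0.5):
    """One communication round: sample a subset of clients (partial participation),
    collect their locally updated copies of W, and average them at the server."""
    n = len(client_data)
    sampled = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    local_Ws = [local_update(client_data[i], W, local_steps, lr) for i in sampled]
    return np.mean(local_Ws, axis=0)

# Toy example: a rank-5, 50x200 matrix split column-wise across 10 clients.
rank, n_clients = 5, 10
X = rng.standard_normal((50, rank)) @ rng.standard_normal((rank, 200))
client_data = np.split(X, n_clients, axis=1)

W = rng.standard_normal((50, rank))
for rnd in range(30):
    # Gradually decrease the number of local GD steps over rounds,
    # mirroring the schedule the paper advocates for reducing communication cost.
    local_steps = max(1, 10 - rnd // 5)
    W = fedmavg_round(client_data, W, local_steps, lr=1e-3)
```

In this sketch only the shared factor W is ever communicated; each client's H_i stays local, which is what makes the MA strategy attractive for federated MF.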
@inproceedings{DBLP:conf/icassp/WangSC21,
  author    = {Shuai Wang and
               Richard Cornelius Suwandi and
               Tsung{-}Hui Chang},
  title     = {Demystifying Model Averaging for Communication-Efficient Federated
               Matrix Factorization},
  booktitle = {{IEEE} International Conference on Acoustics, Speech and Signal Processing,
               {ICASSP} 2021, Toronto, ON, Canada, June 6-11, 2021},
  abstract  = {Federated learning (FL) faces the challenge of training a model over massive and heterogeneous networks. Model averaging (MA) has become a popular FL paradigm in which parallel (stochastic) gradient descent (GD) is run multiple times on a small sampled subset of clients before the local models are uploaded to a server for averaging, and it has proven effective in reducing the communication cost of reaching a good model. However, MA has not been considered for the important matrix factorization (MF) model, which has vast signal processing and machine learning applications. In this paper, we investigate the federated MF problem and propose a new MA-based algorithm, named FedMAvg, by judiciously combining the alternating minimization technique with MA. Through analysis, we show that gradually decreasing the number of local GD steps and allowing only a subset of clients to communicate with the server can greatly reduce the communication cost, especially in heterogeneous networks with non-i.i.d. data. Experimental results from applying FedMAvg to data clustering and item recommendation tasks demonstrate its efficacy in terms of both task performance and communication efficiency.},
  keywords = {Matrix Factorization, Federated Learning, Model Averaging, Clustering, Recommendation Systems},
  publisher = {{IEEE}},
  year      = {2021},
  pages     = {3680--3684},
  url       = {https://doi.org/10.1109/ICASSP39728.2021.9413927},
  doi       = {10.1109/ICASSP39728.2021.9413927},
  timestamp = {Thu, 08 Jul 2021 17:12:48 +0200},
  biburl    = {https://dblp.org/rec/conf/icassp/WangSC21.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
