Fully Decentralized, Scalable Gaussian Processes for Multi-Agent Federated Learning. Kontoudis, G. P. & Stilwell, D. J. arXiv preprint arXiv:2203.02865, 2022.
In this paper, we propose decentralized and scalable algorithms for Gaussian process (GP) training and prediction in multi-agent systems. To decentralize the implementation of GP training optimization algorithms, we employ the alternating direction method of multipliers (ADMM). A closed-form solution of the decentralized proximal ADMM is provided for the case of GP hyper-parameter training with maximum likelihood estimation. Multiple aggregation techniques for GP prediction are decentralized using iterative and consensus methods. In addition, we propose a covariance-based nearest-neighbor selection strategy that enables a subset of agents to perform predictions. The efficacy of the proposed methods is illustrated with numerical experiments on synthetic and real data.
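To make the prediction-aggregation idea concrete, the sketch below shows a centralized product-of-experts (PoE) fusion of local GP predictions, one of the aggregation families such methods decentralize via consensus. This is an illustrative assumption, not the paper's algorithm: each "agent" trains a standard GP on its own chunk of data, and the local predictive means are fused by precision weighting.

```python
# Illustrative sketch (NOT the paper's decentralized algorithm): product-of-
# experts (PoE) aggregation of local GP predictions. All kernel parameters
# and data-partitioning choices here are assumptions for demonstration.
import numpy as np

def rbf(a, b, ls=1.0, sf=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ls) ** 2)

def local_gp_predict(X, y, Xs, noise=1e-2):
    """Standard GP posterior mean/variance from one agent's local data."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0) + noise
    return mu, var

rng = np.random.default_rng(0)
Xs = np.linspace(0.0, 6.0, 20)  # shared test inputs

# Each agent holds a disjoint chunk of noisy samples of sin(x).
agents = []
for lo in (0.0, 2.0, 4.0):
    X = rng.uniform(lo, lo + 2.0, 30)
    y = np.sin(X) + 0.05 * rng.standard_normal(30)
    agents.append(local_gp_predict(X, y, Xs))

# PoE fusion: precision-weighted combination of the local predictions,
# so agents with data near a test point dominate the fused estimate.
prec = sum(1.0 / var for _, var in agents)
mu_poe = sum(mu / var for mu, var in agents) / prec
var_poe = 1.0 / prec
```

In the decentralized setting studied by the paper, the precision-weighted sums above would be computed by iterative consensus over the agent network rather than by a central node.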
