@misc{kontoudis2022fully,
  title        = {Fully Decentralized, Scalable Gaussian Processes for Multi-Agent Federated Learning},
  author       = {Kontoudis, George P and Stilwell, Daniel J},
  abstract     = {In this paper, we propose decentralized and scalable algorithms for Gaussian process (GP) training and prediction in multi-agent systems. To decentralize the implementation of GP training optimization algorithms, we employ the alternating direction method of multipliers (ADMM). A closed-form solution of the decentralized proximal ADMM is provided for the case of GP hyper-parameter training with maximum likelihood estimation. Multiple aggregation techniques for GP prediction are decentralized with the use of iterative and consensus methods. In addition, we propose a covariance-based nearest neighbor selection strategy that enables a subset of agents to perform predictions. The efficacy of the proposed methods is illustrated with numerical experiments on synthetic and real data.},
  keywords     = {Gaussian processes, distributed optimization, multi-agent systems, decentralized networks, gradient-based optimization},
  howpublished = {arXiv preprint arXiv:2203.02865},
  year         = {2022},
  url_pdf      = {https://arxiv.org/pdf/2203.02865.pdf},
  url_html     = {https://arxiv.org/abs/2203.02865}
}