Data encoding methods for Byzantine-resilient distributed optimization. Data, D., Song, L., & Diggavi, S. In 2019 IEEE International Symposium on Information Theory (ISIT), pages 2719–2723, 2019. IEEE. doi: 10.1109/ISIT.2019.8849857
Abstract: We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which can be Byzantine adversaries, and a designated (master) node computes the model/parameter vector for generalized linear models, iteratively, using proximal gradient descent (PGD), of which gradient descent (GD) is a special case. The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To solve this, we propose a method based on data encoding and (real) error correction to combat the adversarial behavior. We can tolerate up to t ≤ ⌊(m−1)/2⌋ corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding. We demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). As an example, our scheme incurs a constant overhead (storage and computational complexity) over that required by the distributed PGD algorithm, without adversaries, for t ≤ m/3. Our encoding works as efficiently in the streaming data setting as it does in the offline setting.
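For intuition, the following is a minimal Python sketch of the Byzantine-resilient setup the abstract describes. It is not the paper's sparse encoding and real error-correction scheme; it uses the simplest stand-in, a repetition code (every worker holds the full data) decoded by coordinate-wise median at the master, which tolerates the same t ≤ ⌊(m−1)/2⌋ corrupt workers for plain GD (the h ≡ 0 special case of the PGD update w ← prox_{ηh}(w − η∇f(w))). All function and variable names are illustrative.

import numpy as np

def gradient(X, y, w):
    # Least-squares GLM: f(w) = 0.5 * ||Xw - y||^2, so grad f(w) = X^T (Xw - y).
    return X.T @ (X @ w - y)

def byzantine_resilient_gd(X, y, m=9, t=4, iters=300, seed=0):
    # Toy simulation: m workers, each holding a full (repetition-coded) copy
    # of (X, y); t of them collude and return arbitrary vectors. The master
    # aggregates by coordinate-wise median, which is robust for t <= (m-1)/2
    # since a majority of the m reports is honest.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    eta = 1.0 / np.linalg.norm(X, 2) ** 2  # step size 1/L for the quadratic loss
    byzantine = set(rng.choice(m, size=t, replace=False).tolist())
    for _ in range(iters):
        reports = np.empty((m, d))
        for j in range(m):
            if j in byzantine:
                reports[j] = rng.normal(scale=1e3, size=d)  # arbitrary corruption
            else:
                reports[j] = gradient(X, y, w)
        w = w - eta * np.median(reports, axis=0)  # robust decode, then GD step
    return w

# Usage: recover w_true from clean linear measurements despite 4 of 9
# workers misbehaving; the reported error should be near zero.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true
print(np.linalg.norm(byzantine_resilient_gd(X, y) - w_true))

In the paper's actual construction, workers hold sparse encoded chunks rather than full replicas, which is what brings the storage and computational overhead down to a constant factor for t ≤ m/3.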
@inproceedings{data2019data,
abstract = {We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which can be Byzantine adversaries, and a designated (master) node computes the model/parameter vector for generalized linear models, iteratively, using proximal gradient descent (PGD), of which gradient descent (GD) is a special case. The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To solve this, we propose a method based on data encoding and (real) error correction to combat the adversarial behavior. We can tolerate up to t ≤ ⌊(m−1)/2⌋ corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding. We demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). As an example, our scheme incurs a constant overhead (storage and computational complexity) over that required by the distributed PGD algorithm, without adversaries, for t ≤ m/3. Our encoding works as efficiently in the streaming data setting as it does in the offline setting.},
author = {Data, Deepesh and Song, Linqi and Diggavi, Suhas},
booktitle = {2019 IEEE International Symposium on Information Theory (ISIT)},
organization = {IEEE},
pages = {2719--2723},
tags = {conf,SDL,DML},
title = {Data encoding methods for {B}yzantine-resilient distributed optimization},
type = {4},
doi = {10.1109/ISIT.2019.8849857},
year = {2019}
}
{"_id":"EBusukTTTx8wRyMpp","bibbaseid":"data-song-diggavi-dataencodingmethodsforbyzantineresilientdistributedoptimization-2019","author_short":["Data, D.","Song, L.","Diggavi, S."],"bibdata":{"bibtype":"inproceedings","type":"4","abstract":"We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which can be Byzantine adversaries, and a designated (master) node computes the model/parameter vector for generalized linear models, iteratively, using proximal gradient descent (PGD), of which gradient descent (GD) is a special case. The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To solve this, we propose a method based on data encoding and (real) error correction to combat the adversarial behavior. We can tolerate up to t ≤ [m-1/2] corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding. We demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). As an example, our scheme incurs a constant overhead (storage and computational complexity) over that required by the distributed PGD algorithm, without adversaries, for t ≤ m/3 . Our encoding works as efficiently in the streaming data etting as it does in the","author":[{"propositions":[],"lastnames":["Data"],"firstnames":["Deepesh"],"suffixes":[]},{"propositions":[],"lastnames":["Song"],"firstnames":["Linqi"],"suffixes":[]},{"propositions":[],"lastnames":["Diggavi"],"firstnames":["Suhas"],"suffixes":[]}],"booktitle":"2019 IEEE International Symposium on Information Theory (ISIT)","organization":"IEEE","pages":"2719–2723","tags":"conf,SDL,DML","title":"Data encoding methods for byzantine-resilient distributed optimization","doi":"10.1109/ISIT.2019.8849857","year":"2019","bibtex":"@inproceedings{data2019data,\n abstract = {We consider distributed gradient computation, where both data and computation are distributed among m worker machines, t of which can be Byzantine adversaries, and a designated (master) node computes the model/parameter vector for generalized linear models, iteratively, using proximal gradient descent (PGD), of which gradient descent (GD) is a special case. The Byzantine adversaries can (collaboratively) deviate arbitrarily from their gradient computation. To solve this, we propose a method based on data encoding and (real) error correction to combat the adversarial behavior. We can tolerate up to t ≤ [m-1/2] corrupt worker nodes, which is information-theoretically optimal. Our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding. We demonstrate a trade-off between the number of adversaries tolerated and the resource requirement (storage and computational complexity). As an example, our scheme incurs a constant overhead (storage and computational complexity) over that required by the distributed PGD algorithm, without adversaries, for t ≤ m/3 . 
Our encoding works as efficiently in the streaming data etting as it does in the},\n author = {Data, Deepesh and Song, Linqi and Diggavi, Suhas},\n booktitle = {2019 IEEE International Symposium on Information Theory (ISIT)},\n organization = {IEEE},\n pages = {2719--2723},\n tags = {conf,SDL,DML},\n title = {Data encoding methods for byzantine-resilient distributed optimization},\n type = {4},\n doi = {10.1109/ISIT.2019.8849857},\n year = {2019}\n}\n\n","author_short":["Data, D.","Song, L.","Diggavi, S."],"key":"data2019data","id":"data2019data","bibbaseid":"data-song-diggavi-dataencodingmethodsforbyzantineresilientdistributedoptimization-2019","role":"author","urls":{},"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"https://bibbase.org/network/files/e2kjGxYgtBo8SWSbC","dataSources":["hicKnsKYNEFXC4CgH","jxCYzXXYRqw2fiEXQ","wCByFFrQMyRwfzrJ6","yuqM5ah4HMsTyDrMa","YaM87hGQiepg5qijZ","n9wmfkt5w8CPqCepg","soj2cS6PgG8NPmWGr","FaDBDiyFAJY5pL28h","ycfdiwWPzC2rE6H77"],"keywords":[],"search_terms":["data","encoding","methods","byzantine","resilient","distributed","optimization","data","song","diggavi"],"title":"Data encoding methods for byzantine-resilient distributed optimization","year":2019}