@inproceedings{DBLP:conf/ijcai/YangP0Z019,
title = {Low-Bit Quantization for Attributed Network Representation Learning},
year = {2019},
pages = {4047--4053},
note = {CORE Ranked A*},
publisher = {ijcai.org},
citation_key = {DBLP:conf/ijcai/YangP0Z019},
abstract = {Attributed network embedding plays an important role in transferring network data into compact vectors for effective network analysis. Existing attributed network embedding models are designed either in continuous Euclidean spaces which introduce data redundancy or in binary coding spaces which incur significant loss of representation accuracy. To this end, we present a new Low-Bit Quantization for Attributed Network Representation Learning model (LQANR for short) that can learn compact node representations with low bitwidth values while preserving high representation accuracy. Specifically, we formulate a new representation learning function based on matrix factorization that can jointly learn the low-bit node representations and the layer aggregation weights under the low-bit quantization constraint. Because the new learning function falls into the category of mixed integer optimization, we propose an efficient mixed-integer based alternating direction method of multipliers (ADMM) algorithm as the solution. Experiments on real-world node classification and link prediction tasks validate the promising results of the proposed LQANR model.},
author = {Yang, Hong and Pan, Shirui and Chen, Ling and Zhou, Chuan and Zhang, Peng},
editor = {Kraus, Sarit},
doi = {10.24963/ijcai.2019/562},
booktitle = {Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, August 10-16, 2019}
}
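The abstract describes learning node representations constrained to low-bit values. As a rough illustration of what "low-bit quantization" of an embedding matrix means, here is a minimal sketch using uniform symmetric rounding with a single learned-free scale; note this is an assumption-laden toy, not the paper's method, which learns the quantized codes and layer-aggregation weights jointly via a mixed-integer ADMM solver. The function names and the quantization scheme are illustrative only.

```python
import numpy as np

def quantize_low_bit(embeddings, bitwidth=2):
    """Uniformly quantize a continuous embedding matrix to signed low-bit
    integer codes plus one scale factor (illustrative sketch only)."""
    # bitwidth=2 gives codes in {-1, 0, 1}; bitwidth=3 gives {-3, ..., 3}
    levels = 2 ** (bitwidth - 1) - 1
    scale = np.abs(embeddings).max() / max(levels, 1)
    codes = np.clip(np.round(embeddings / scale), -levels, levels).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    # Reconstruct approximate continuous embeddings from codes
    return codes.astype(np.float32) * scale

# Toy example: 5 nodes with 8-dimensional continuous embeddings
rng = np.random.default_rng(0)
Z = rng.standard_normal((5, 8)).astype(np.float32)
codes, scale = quantize_low_bit(Z, bitwidth=3)
Z_hat = dequantize(codes, scale)
err = np.abs(Z - Z_hat).max()   # bounded by half a quantization step
```

Storing `codes` at 3 bits per value instead of 32-bit floats is what yields the compactness the abstract claims, at the cost of the reconstruction error `err`; the paper's contribution is keeping that error small by optimizing the codes directly under the quantization constraint.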