Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization. Wu, B., Yang, X., Pan, S., & Yuan, X. In Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security (ASIA CCS '22), Nagasaki, Japan. Association for Computing Machinery. Preprint: http://arxiv.org/abs/2010.12751 · doi: 10.1145/3488932.3497753

Abstract: Machine learning models face a severe threat from model extraction attacks, in which a well-trained private model owned by a service provider can be stolen by an attacker posing as a client. Prior work has focused on models trained over Euclidean data such as images and text, whereas how to extract a GNN model, which involves a graph structure and node features, remains unexplored. In this paper, we comprehensively investigate and develop model extraction attacks against GNN models for the first time. We first systematically formalise the threat model for GNN model extraction and classify the adversarial threats into seven categories according to the attacker's background knowledge, e.g., the attributes and/or neighbour connections of the nodes available to the attacker. We then present detailed methods that exploit the accessible knowledge in each threat category to implement the attacks. Evaluated on three real-world datasets, our attacks extract duplicated models effectively: 84%-89% of the inputs in the target domain receive the same output predictions as the victim model.
@book{
title = {Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization},
type = {book},
year = {2020},
source = {Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security (ASIA CCS '22), May 30-June 3, 2022, Nagasaki, Japan},
keywords = {Graph Neural Networks, Model Extraction Attack},
volume = {1},
issue = {1},
websites = {http://arxiv.org/abs/2010.12751},
publisher = {Association for Computing Machinery},
id = {756d69e7-9177-3330-973d-b9e251240330},
created = {2022-01-05T09:23:15.672Z},
file_attached = {true},
profile_id = {ad172e55-c0e8-3aa4-8465-09fac4d5f5c8},
group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},
last_modified = {2022-01-05T09:24:30.842Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {true},
hidden = {false},
folder_uuids = {62c10c84-630d-4fdf-af89-e343e67460c7},
private_publication = {false},
abstract = {Machine learning models are shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker pretending as a client. Unfortunately, prior works focus on the models trained over the Euclidean space, e.g., images and texts, while how to extract a GNN model that contains a graph structure and node features is yet to be explored. In this paper, for the first time, we comprehensively investigate and develop model extraction attacks against GNN models. We first systematically formalise the threat modelling in the context of GNN model extraction and classify the adversarial threats into seven categories by considering different background knowledge of the attacker, e.g., attributes and/or neighbour connections of the nodes obtained by the attacker. Then we present detailed methods which utilise the accessible knowledge in each threat to implement the attacks. By evaluating over three real-world datasets, our attacks are shown to extract duplicated models effectively, i.e., 84% - 89% of the inputs in the target domain have the same output predictions as the victim model.},
bibtype = {book},
author = {Wu, Bang and Yang, Xiangwen and Pan, Shirui and Yuan, Xingliang},
doi = {10.1145/3488932.3497753}
}
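
To make the high-level attack concrete, the sketch below (illustrative only, not the authors' exact procedure) shows the generic recipe the abstract describes: query a victim node-classification GNN for its predictions, train a surrogate GNN on those query responses, and report fidelity as the fraction of held-out target-domain nodes on which the two models agree. The 2-layer GCN architecture, the toy random graph, the 50% query budget, and the use of PyTorch Geometric are assumptions of this example; the paper does not prescribe them.

# Illustrative sketch of a GNN model extraction attack and its fidelity metric.
# Assumptions of this example (not from the paper): a 2-layer GCN for both
# victim and surrogate, a random toy graph, and a 50% query budget.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class TwoLayerGCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


# Toy graph standing in for the attacker's (partial) view of node attributes
# and neighbour connections.
num_nodes, num_feats, num_classes = 200, 16, 4
x = torch.randn(num_nodes, num_feats)                  # node attributes
edge_index = torch.randint(0, num_nodes, (2, 800))     # neighbour connections

# Victim model: in reality this sits behind a prediction API; it is
# instantiated here only so the sketch runs end to end.
victim = TwoLayerGCN(num_feats, 32, num_classes).eval()
with torch.no_grad():
    victim_preds = victim(x, edge_index).argmax(dim=-1)  # the API's answers

# The attacker queries a subset of nodes and trains a surrogate ("duplicated")
# model using only the victim's predicted labels as supervision.
query_mask = torch.rand(num_nodes) < 0.5
surrogate = TwoLayerGCN(num_feats, 32, num_classes)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=0.01)
for _ in range(200):
    optimizer.zero_grad()
    logits = surrogate(x, edge_index)
    loss = F.cross_entropy(logits[query_mask], victim_preds[query_mask])
    loss.backward()
    optimizer.step()

# Fidelity: fraction of unqueried target-domain nodes on which surrogate and
# victim agree (the paper reports 84%-89% on real datasets; this toy graph
# will give a different number).
surrogate.eval()
with torch.no_grad():
    agree = surrogate(x, edge_index).argmax(dim=-1) == victim_preds
fidelity = agree[~query_mask].float().mean().item()
print(f"fidelity on held-out nodes: {fidelity:.2%}")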
{"_id":"ZXDGMYFXqZcj7rdQr","bibbaseid":"wu-yang-pan-yuan-modelextractionattacksongraphneuralnetworkstaxonomyandrealization-2020","authorIDs":["FgTRy7pNrBNcDn5rk"],"author_short":["Wu, B.","Yang, X.","Pan, S.","Yuan, X."],"bibdata":{"title":"Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization","type":"book","year":"2020","source":"Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security (ASIA CCS '22), May 30-June 3, 2022, Nagasaki, Japan","keywords":"2022,Graph Neural Networks,Model Extraction Attack,acm reference format,and xingliang yuan,bang wu,graph neural networks,model,model extraction attack,shirui pan,xiangwen yang","volume":"1","issue":"1","websites":"http://arxiv.org/abs/2010.12751","publisher":"Association for Computing Machinery","id":"756d69e7-9177-3330-973d-b9e251240330","created":"2022-01-05T09:23:15.672Z","file_attached":"true","profile_id":"ad172e55-c0e8-3aa4-8465-09fac4d5f5c8","group_id":"1ff583c0-be37-34fa-9c04-73c69437d354","last_modified":"2022-01-05T09:24:30.842Z","read":false,"starred":false,"authored":false,"confirmed":"true","hidden":false,"folder_uuids":"62c10c84-630d-4fdf-af89-e343e67460c7","private_publication":false,"abstract":"Machine learning models are shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker pretending as a client. Unfortunately, prior works focus on the models trained over the Euclidean space, e.g., images and texts, while how to extract a GNN model that contains a graph structure and node features is yet to be explored. In this paper, for the first time, we comprehensively investigate and develop model extraction attacks against GNN models. We first systematically formalise the threat modelling in the context of GNN model extraction and classify the adversarial threats into seven categories by considering different background knowledge of the attacker, e.g., attributes and/or neighbour connections of the nodes obtained by the attacker. Then we present detailed methods which utilise the accessible knowledge in each threat to implement the attacks. 
By evaluating over three real-world datasets, our attacks are shown to extract duplicated models effectively, i.e., 84% - 89% of the inputs in the target domain have the same output predictions as the victim model.","bibtype":"book","author":"Wu, Bang and Yang, Xiangwen and Pan, Shirui and Yuan, Xingliang","doi":"10.1145/3488932.3497753","bibtex":"@book{\n title = {Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization},\n type = {book},\n year = {2020},\n source = {Proceedings of the 2022 ACM Asia Conference on Computer and Communications Security (ASIA CCS '22), May 30-June 3, 2022, Nagasaki, Japan},\n keywords = {2022,Graph Neural Networks,Model Extraction Attack,acm reference format,and xingliang yuan,bang wu,graph neural networks,model,model extraction attack,shirui pan,xiangwen yang},\n volume = {1},\n issue = {1},\n websites = {http://arxiv.org/abs/2010.12751},\n publisher = {Association for Computing Machinery},\n id = {756d69e7-9177-3330-973d-b9e251240330},\n created = {2022-01-05T09:23:15.672Z},\n file_attached = {true},\n profile_id = {ad172e55-c0e8-3aa4-8465-09fac4d5f5c8},\n group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},\n last_modified = {2022-01-05T09:24:30.842Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n folder_uuids = {62c10c84-630d-4fdf-af89-e343e67460c7},\n private_publication = {false},\n abstract = {Machine learning models are shown to face a severe threat from Model Extraction Attacks, where a well-trained private model owned by a service provider can be stolen by an attacker pretending as a client. Unfortunately, prior works focus on the models trained over the Euclidean space, e.g., images and texts, while how to extract a GNN model that contains a graph structure and node features is yet to be explored. In this paper, for the first time, we comprehensively investigate and develop model extraction attacks against GNN models. We first systematically formalise the threat modelling in the context of GNN model extraction and classify the adversarial threats into seven categories by considering different background knowledge of the attacker, e.g., attributes and/or neighbour connections of the nodes obtained by the attacker. Then we present detailed methods which utilise the accessible knowledge in each threat to implement the attacks. 
By evaluating over three real-world datasets, our attacks are shown to extract duplicated models effectively, i.e., 84% - 89% of the inputs in the target domain have the same output predictions as the victim model.},\n bibtype = {book},\n author = {Wu, Bang and Yang, Xiangwen and Pan, Shirui and Yuan, Xingliang},\n doi = {10.1145/3488932.3497753}\n}","author_short":["Wu, B.","Yang, X.","Pan, S.","Yuan, X."],"urls":{"Paper":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/644cc13e-7194-ecaf-6c7c-b2c90980a322/201012751.pdf.pdf","Website":"http://arxiv.org/abs/2010.12751"},"biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","bibbaseid":"wu-yang-pan-yuan-modelextractionattacksongraphneuralnetworkstaxonomyandrealization-2020","role":"author","keyword":["2022","Graph Neural Networks","Model Extraction Attack","acm reference format","and xingliang yuan","bang wu","graph neural networks","model","model extraction attack","shirui pan","xiangwen yang"],"metadata":{"authorlinks":{"pan, s":"https://trust-agi.github.io/publication/"}},"downloads":0},"bibtype":"book","biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","creationDate":"2021-02-15T10:01:15.185Z","downloads":0,"keywords":["2022","graph neural networks","model extraction attack","acm reference format","and xingliang yuan","bang wu","graph neural networks","model","model extraction attack","shirui pan","xiangwen yang"],"search_terms":["model","extraction","attacks","graph","neural","networks","taxonomy","realization","wu","yang","pan","yuan"],"title":"Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization","year":2020,"dataSources":["mKA5vx6kcS6ikoYhW","AoeZNpAr9D2ciGMwa","ya2CyA73rpZseyrZ8","fcdT59YHNhp9Euu5k","2252seNhipfTmjEBQ"]}