Wu, B., Yang, X., Pan, S., & Yuan, X. Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications. In IEEE International Conference on Data Mining (ICDM), pages 1421-1426, 2021 (CORE Ranked A*).
@inproceedings{Wu2021,
title = {Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications},
type = {inproceedings},
year = {2021},
keywords = {graph classification,graph neural networks,membership inference attacks},
pages = {1421-1426},
note = {CORE Ranked A*},
id = {24ee3590-e438-3a2c-9af8-17d9246cb76b},
created = {2022-01-16T04:41:09.464Z},
file_attached = {false},
profile_id = {079852a8-52df-3ac8-a41c-8bebd97d6b2b},
last_modified = {2022-04-10T12:11:48.611Z},
read = {false},
starred = {false},
authored = {true},
confirmed = {true},
hidden = {false},
citation_key = {Wu2021},
folder_uuids = {f3b8cf54-f818-49eb-a899-33ac83c5e58d,2327f56c-ffc0-4246-bac0-b9fa6098ebfb},
private_publication = {false},
abstract = {Graph Neural Networks (GNNs) are widely adopted to analyse non-Euclidean data, such as chemical networks, brain networks, and social networks, modelling complex relationships and interdependencies between objects. Recently, the Membership Inference Attack (MIA) against GNNs has raised severe privacy concerns, as training data can be leaked from trained GNN models. However, prior studies focus on inferring the membership of only the components of a graph, e.g., an individual node or edge. How to infer the membership of an entire graph record is yet to be explored. In this paper, we take the first step in MIA against GNNs for graph-level classification. Our objective is to infer whether a graph sample has been used to train a GNN model. We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, assuming different adversarial capabilities. We perform comprehensive experiments to evaluate our attacks on seven real-world datasets using five representative GNN models. Both attacks are shown to be effective, reaching attack F1 scores above 0.7 in most cases. Furthermore, we analyse the implications behind the MIA against GNNs. Our findings confirm that GNNs can be even more vulnerable to MIA than models on non-graph-structured data. Moreover, unlike in node-level classification, MIAs on graph-level classification tasks correlate more with the overfitting level of GNNs than with the statistical properties of their training graphs.},
bibtype = {inproceedings},
author = {Wu, Bang and Yang, Xiangwen and Pan, Shirui and Yuan, Xingliang},
booktitle = {IEEE International Conference on Data Mining (ICDM)}
}
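The abstract above mentions threshold-based attacks, which infer membership from the victim model's output confidence. As a hypothetical illustration only (not the paper's exact procedure, and the function name, threshold value, and toy posteriors are all made up for this sketch), such an attack can be as simple as flagging samples whose maximum posterior exceeds a cutoff, exploiting the fact that an overfitted model tends to be more confident on its training data:

```python
import numpy as np

def threshold_mia(max_posteriors, threshold=0.9):
    """Toy threshold-based membership inference: predict 'member'
    when the model's maximum posterior confidence on a sample
    exceeds the threshold. Returns a boolean array."""
    return np.asarray(max_posteriors) >= threshold

# Toy maximum posteriors from a hypothetical overfitted graph classifier:
# members (graphs seen during training) tend to score higher.
member_conf = [0.98, 0.95, 0.97]
nonmember_conf = [0.61, 0.72, 0.55]

member_preds = threshold_mia(member_conf)        # expect all True
nonmember_preds = threshold_mia(nonmember_conf)  # expect all False
```

In practice the threshold would be calibrated, e.g., on a shadow model's outputs, rather than fixed by hand.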