Domain-adversarial graph neural networks for text classification. Wu, M., Pan, S., Zhu, X., Zhou, C., & Pan, L. In Proceedings - IEEE International Conference on Data Mining, ICDM, volume 2019-November, pages 648-657, 2019 (CORE Ranked A*). doi: 10.1109/ICDM.2019.00075
© 2019 IEEE. Text classification in a cross-domain setting is a challenging task. On the one hand, data from other domains are often useful for improving learning on the target domain; on the other hand, domain variance and the hierarchical structure of documents (words, key phrases, sentences, paragraphs, etc.) make it difficult to align domains for effective learning. To date, existing cross-domain text classification methods mainly strive to minimize feature distribution differences between domains, and they typically suffer from three major limitations: (1) difficulty capturing semantics in non-consecutive phrases and long-distance word dependencies, because texts are treated as word sequences; (2) neglect of the hierarchical, coarse-grained structure of documents during feature learning; and (3) a narrow focus on domains at the instance level, without using domains as supervision to improve text classification. This paper proposes an end-to-end domain-adversarial graph neural network (DAGNN) for cross-domain text classification. Our motivation is to model documents as graphs and use a domain-adversarial training principle to learn features from each graph (as well as learning the separation of domains) for effective text classification. At the instance level, DAGNN uses a graph to model each document, so that it can capture non-consecutive and long-distance semantics. At the feature level, DAGNN uses graphs from different domains to jointly train hierarchical graph neural networks in order to learn good features. At the learning level, DAGNN proposes a domain-adversarial principle such that the learned features not only optimally classify documents but also separate domains.
Experiments on benchmark datasets demonstrate the effectiveness of our method in cross-domain classification tasks.
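The domain-adversarial training principle mentioned in the abstract can be illustrated with a small numeric sketch. This is not the paper's implementation; it shows the standard gradient-reversal idea on a hypothetical linear toy model (all function and variable names below are assumptions): the feature extractor's gradient combines the document-classification gradient with a sign-flipped domain-discriminator gradient, so the shared features are trained jointly against both objectives.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_adversarial_losses(h, y_cls, y_dom, w_cls, w_dom, lam=0.1):
    """Toy joint objective: binary cross-entropy for document classes
    plus a domain term whose gradient is sign-flipped (gradient
    reversal) before it reaches the shared feature extractor.

    h: (n, d) feature matrix; y_cls, y_dom: (n,) binary labels;
    w_cls, w_dom: (d,) linear heads; lam: adversarial trade-off weight.
    """
    p_cls = sigmoid(h @ w_cls)          # class probabilities
    p_dom = sigmoid(h @ w_dom)          # domain probabilities
    eps = 1e-9
    l_cls = -np.mean(y_cls * np.log(p_cls + eps)
                     + (1 - y_cls) * np.log(1 - p_cls + eps))
    l_dom = -np.mean(y_dom * np.log(p_dom + eps)
                     + (1 - y_dom) * np.log(1 - p_dom + eps))
    # Per-sample gradients of each loss w.r.t. the features h.
    g_cls = ((p_cls - y_cls)[:, None] * w_cls[None, :]) / len(y_cls)
    g_dom = ((p_dom - y_dom)[:, None] * w_dom[None, :]) / len(y_dom)
    # The minus sign is the gradient reversal: the domain head itself
    # descends on l_dom, but the feature extractor ascends on it.
    g_features = g_cls - lam * g_dom
    return l_cls, l_dom, g_features
```

In autograd frameworks the same effect is usually obtained with a gradient-reversal layer (identity forward, gradient scaled by -lam backward) inserted between the feature extractor and the domain discriminator.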
@inproceedings{
title = {Domain-adversarial graph neural networks for text classification},
type = {inproceedings},
year = {2019},
keywords = {Cross-domain learning,Graph neural networks,Text classification},
pages = {648-657},
volume = {2019-November},
id = {a5af5826-9b18-3360-9de5-26d4e6f78b87},
created = {2020-02-13T23:59:00.000Z},
file_attached = {false},
profile_id = {079852a8-52df-3ac8-a41c-8bebd97d6b2b},
last_modified = {2022-04-10T12:10:51.778Z},
read = {false},
starred = {false},
authored = {true},
confirmed = {false},
hidden = {false},
citation_key = {Wu2019},
folder_uuids = {f3b8cf54-f818-49eb-a899-33ac83c5e58d,2327f56c-ffc0-4246-bac0-b9fa6098ebfb},
private_publication = {false},
abstract = {© 2019 IEEE. Text classification, in cross-domain setting, is a challenging task. On the one hand, data from other domains are often useful to improve the learning on the target domain; on the other hand, domain variance and hierarchical structure of documents from words, key phrases, sentences, paragraphs, etc. make it difficult to align domains for effective learning. To date, existing cross-domain text classification methods mainly strive to minimize feature distribution differences between domains, and they typically suffer from three major limitations - (1) difficult to capture semantics in non-consecutive phrases and long-distance word dependency because of treating texts as word sequences, (2) neglect of hierarchical coarse-grained structures of document for feature learning, and (3) narrow focus of the domains at instance levels, without using domains as supervisions to improve text classification. This paper proposes an end-to-end domain-adversarial graph neural network (DAGNN) for cross-domain text classification. Our motivation is to model documents as graphs and use a domain-adversarial training principle to learn features from each graph (as well as learning the separation of domains) for effective text classification. At the instance level, DAGNN uses a graph to model each document, so that it can capture non-consecutive and long-distance semantics. At the feature level, DAGNN uses graphs from different domains to jointly train hierarchical graph neural networks in order to learn good features. At the learning level, DAGNN proposes a domain-adversarial principle such that the learned features not only optimally classify documents but also separate domains. Experiments on benchmark datasets demonstrate the effectiveness of our method in cross-domain classification tasks.},
bibtype = {inproceedings},
author = {Wu, M. and Pan, S. and Zhu, X. and Zhou, C. and Pan, L.},
doi = {10.1109/ICDM.2019.00075},
booktitle = {Proceedings - IEEE International Conference on Data Mining, ICDM}
}