The compositionality of neural networks: integrating symbolism and connectionism. Hupkes, D., Dankers, V., Mul, M., & Bruni, E. 2019.
Paper: http://arxiv.org/abs/1908.08351

Abstract: Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests that are formulated on a task-independent level. In particular, we provide tests to investigate (i) whether models systematically recombine known parts and rules; (ii) whether models can extend their predictions beyond the lengths they have seen in the training data; (iii) whether models' composition operations are local or global; (iv) whether models' predictions are robust to synonym substitutions; and (v) whether models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set, which we dub PCFG SET, and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution-based and a transformer model. We provide an in-depth analysis of the results, which uncovers the strengths and weaknesses of these three architectures and points to potential areas of improvement.
@article{Hupkes2019,
abstract = {Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests that are formulated on a task-independent level. In particular, we provide tests to investigate (i) if models systematically recombine known parts and rules (ii) if models can extend their predictions beyond the length they have seen in the training data (iii) if models' composition operations are local or global (iv) if models' predictions are robust to synonym substitutions and (v) if models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set which we dub PCFG SET and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution based and a transformer model. We provide an in depth analysis of the results, that uncover the strengths and weaknesses of these three architectures and point to potential areas of improvement.},
archivePrefix = {arXiv},
arxivId = {1908.08351},
author = {Hupkes, Dieuwke and Dankers, Verna and Mul, Mathijs and Bruni, Elia},
eprint = {1908.08351},
keywords = {method: various,phenomenon: compositionality},
pages = {1--40},
title = {{The compositionality of neural networks: integrating symbolism and connectionism}},
url = {http://arxiv.org/abs/1908.08351},
year = {2019}
}
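
To make test (iv) from the abstract concrete, here is a minimal sketch of a synonym-substitution consistency check, assuming the model under test is exposed as a predict function that maps a list of input tokens to a list of output tokens. The interface, the toy model and the example synonym pair are illustrative assumptions, not the paper's implementation.

# A minimal sketch (not the authors' released code) of the substitutivity idea behind
# test (iv): feed a model pairs of inputs that differ only in a synonym swap and count
# how often its predictions stay identical. The `predict` interface, the toy model and
# the example synonym pair below are assumptions made for illustration.

from typing import Callable, List, Tuple

def substitutivity_consistency(
    predict: Callable[[List[str]], List[str]],
    paired_inputs: List[Tuple[List[str], List[str]]],
) -> float:
    """Fraction of (original, synonym-substituted) input pairs with identical predictions."""
    if not paired_inputs:
        return 0.0
    consistent = sum(
        1
        for original, substituted in paired_inputs
        if predict(original) == predict(substituted)
    )
    return consistent / len(paired_inputs)

if __name__ == "__main__":
    # Toy stand-in for a trained sequence-to-sequence model: it ignores the first token,
    # so swapping that token for a synonym cannot change its output.
    toy_predict = lambda tokens: tokens[1:]
    pairs = [(["reverse", "A", "B"], ["invert", "A", "B"])]  # hypothetical synonym pair
    print(substitutivity_consistency(toy_predict, pairs))  # prints 1.0

A fully substitutive model scores 1.0 on such pairs; scores below 1.0 indicate that the model's predictions depend on the surface form of the synonym rather than on its meaning.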
{"_id":"TFtRjn6vRki8yPTiB","bibbaseid":"hupkes-dankers-mul-bruni-thecompositionalityofneuralnetworksintegratingsymbolismandconnectionism-2019","authorIDs":[],"author_short":["Hupkes, D.","Dankers, V.","Mul, M.","Bruni, E."],"bibdata":{"bibtype":"article","type":"article","abstract":"Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests that are formulated on a task-independent level. In particular, we provide tests to investigate (i) if models systematically recombine known parts and rules (ii) if models can extend their predictions beyond the length they have seen in the training data (iii) if models' composition operations are local or global (iv) if models' predictions are robust to synonym substitutions and (v) if models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set which we dub PCFG SET and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution based and a transformer model. We provide an in depth analysis of the results, that uncover the strengths and weaknesses of these three architectures and point to potential areas of improvement.","archiveprefix":"arXiv","arxivid":"1908.08351","author":[{"propositions":[],"lastnames":["Hupkes"],"firstnames":["Dieuwke"],"suffixes":[]},{"propositions":[],"lastnames":["Dankers"],"firstnames":["Verna"],"suffixes":[]},{"propositions":[],"lastnames":["Mul"],"firstnames":["Mathijs"],"suffixes":[]},{"propositions":[],"lastnames":["Bruni"],"firstnames":["Elia"],"suffixes":[]}],"eprint":"1908.08351","file":":Users/shanest/Documents/Library/Hupkes et al/Unknown/Hupkes et al. - 2019 - The compositionality of neural networks integrating symbolism and connectionism.pdf:pdf","keywords":"method: various,phenomenon: compositionality","pages":"1–40","title":"The compositionality of neural networks: integrating symbolism and connectionism","url":"http://arxiv.org/abs/1908.08351","year":"2019","bibtex":"@article{Hupkes2019,\nabstract = {Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests that are formulated on a task-independent level. 
In particular, we provide tests to investigate (i) if models systematically recombine known parts and rules (ii) if models can extend their predictions beyond the length they have seen in the training data (iii) if models' composition operations are local or global (iv) if models' predictions are robust to synonym substitutions and (v) if models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set which we dub PCFG SET and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution based and a transformer model. We provide an in depth analysis of the results, that uncover the strengths and weaknesses of these three architectures and point to potential areas of improvement.},\narchivePrefix = {arXiv},\narxivId = {1908.08351},\nauthor = {Hupkes, Dieuwke and Dankers, Verna and Mul, Mathijs and Bruni, Elia},\neprint = {1908.08351},\nfile = {:Users/shanest/Documents/Library/Hupkes et al/Unknown/Hupkes et al. - 2019 - The compositionality of neural networks integrating symbolism and connectionism.pdf:pdf},\nkeywords = {method: various,phenomenon: compositionality},\npages = {1--40},\ntitle = {{The compositionality of neural networks: integrating symbolism and connectionism}},\nurl = {http://arxiv.org/abs/1908.08351},\nyear = {2019}\n}\n","author_short":["Hupkes, D.","Dankers, V.","Mul, M.","Bruni, E."],"key":"Hupkes2019","id":"Hupkes2019","bibbaseid":"hupkes-dankers-mul-bruni-thecompositionalityofneuralnetworksintegratingsymbolismandconnectionism-2019","role":"author","urls":{"Paper":"http://arxiv.org/abs/1908.08351"},"keyword":["method: various","phenomenon: compositionality"],"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"article","biburl":"https://www.shane.st/teaching/575/win20/MachineLearning-interpretability.bib","creationDate":"2020-01-05T22:03:38.519Z","downloads":0,"keywords":["method: various","phenomenon: compositionality"],"search_terms":["compositionality","neural","networks","integrating","symbolism","connectionism","hupkes","dankers","mul","bruni"],"title":"The compositionality of neural networks: integrating symbolism and connectionism","year":2019,"dataSources":["okYcdTpf4JJ2zkj7A","znj7izS5PeehdLR3G"]}