Discovering Higher-Order Interactions Through Neural Information Decomposition. Reing, K., Ver Steeg, G., & Galstyan, A. Entropy, 2021.
@article{reing_entropy21,
	Abstract = {If regularity in data takes the form of higher-order functions among groups of variables, models which are biased towards lower-order functions may easily mistake the data for noise. To distinguish whether this is the case, one must be able to quantify the contribution of different orders of dependence to the total information. Recent work in information theory attempts to do this through measures of multivariate mutual information (MMI) and information decomposition (ID). Despite substantial theoretical progress, practical issues related to tractability and learnability of higher-order functions are still largely unaddressed. In this work, we introduce a new approach to information decomposition—termed Neural Information Decomposition (NID)—which is both theoretically grounded, and can be efficiently estimated in practice using neural networks. We show on synthetic data that NID can learn to distinguish higher-order functions from noise, while many unsupervised probability models cannot. Additionally, we demonstrate the usefulness of this framework as a tool for exploring biological and artificial neural networks.},
	Article-Number = {79},
	Author = {Reing, Kyle and Ver Steeg, Greg and Galstyan, Aram},
	Date-Added = {2021-01-29 10:34:27 -0800},
	Date-Modified = {2021-01-29 10:34:45 -0800},
	Doi = {10.3390/e23010079},
	Issn = {1099-4300},
	Journal = {Entropy},
	Number = {1},
	Pubmedid = {33430463},
	Title = {Discovering Higher-Order Interactions Through Neural Information Decomposition},
	Url = {https://www.mdpi.com/1099-4300/23/1/79},
	Volume = {23},
	Year = {2021},
	Bdsk-Url-1 = {https://www.mdpi.com/1099-4300/23/1/79},
	Bdsk-Url-2 = {https://doi.org/10.3390/e23010079}
}
