Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. Bronstein, M. M., Bruna, J., Cohen, T., & Veličković, P. 2021.
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent-type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.
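As a tiny illustration of the kind of geometric prior the abstract refers to, the sketch below (not from the paper; the helpers conv1d and shift are hypothetical names written for this example) checks the shift-equivariance of a 1-D circular convolution in NumPy: translating the input and then convolving gives the same result as convolving and then translating, which is the symmetry CNNs build in by construction.

import numpy as np

def conv1d(x, w):
    # Circular 1-D cross-correlation of signal x with filter w.
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k)) for i in range(n)])

def shift(x, s):
    # Translate a periodic signal s positions to the right.
    return np.roll(x, s)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)  # toy input signal
w = rng.standard_normal(3)   # toy filter

# Equivariance: convolving a shifted signal equals shifting the convolved signal.
print(np.allclose(conv1d(shift(x, 5), w), shift(conv1d(x, w), 5)))  # True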
@article{bronstein2021geometric,
 title = {Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges},
 type = {article},
 year = {2021},
 journal = {arXiv preprint arXiv:2104.13478},
 url = {http://arxiv.org/abs/2104.13478},
 abstract = {The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent-type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.},
 author = {Bronstein, Michael M. and Bruna, Joan and Cohen, Taco and Veličković, Petar}
}
