Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. Bronstein, M. M., Bruna, J., Cohen, T., & Veličković, P. 2021. Paper: https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/6dc43280-25d5-1e29-369a-fbca621d1016/Geometric_deep_learning.pdf.pdf Website: http://arxiv.org/abs/2104.13478

Abstract: The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.
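The 'geometric unification' the abstract describes is concrete: CNNs are what you obtain by demanding translation equivariance on grids, and GNNs by demanding permutation equivariance on graphs. The following minimal numpy sketch (illustrative only, not code from the paper) checks both symmetry properties numerically.

import numpy as np

def conv1d(x, w):
    # Circular 1-D convolution: a translation-equivariant linear map (the CNN prior).
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w))) for i in range(n)])

def gnn_layer(X, A):
    # Sum-aggregation message passing: a permutation-equivariant map (the GNN prior).
    return A @ X

rng = np.random.default_rng(0)

# Translation equivariance: shifting the signal then filtering equals filtering then shifting.
x, w = rng.normal(size=8), rng.normal(size=3)
shift = lambda v: np.roll(v, 1)
assert np.allclose(conv1d(shift(x), w), shift(conv1d(x, w)))

# Permutation equivariance: relabelling the nodes relabels the output in the same way.
X = rng.normal(size=(5, 4))                   # node features
A = (rng.random((5, 5)) < 0.4).astype(float)  # adjacency matrix
P = np.eye(5)[rng.permutation(5)]             # permutation matrix
assert np.allclose(gnn_layer(P @ X, P @ A @ P.T), P @ gnn_layer(X, A))

print("translation and permutation equivariance checks pass")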
@article{bronstein2021geometric,
title = {Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges},
type = {article},
year = {2021},
 url = {http://arxiv.org/abs/2104.13478},
 abstract = {The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provides a principled way to build future architectures yet to be invented.},
author = {Bronstein, Michael M. and Bruna, Joan and Cohen, Taco and Veličković, Petar}
}
{"_id":"XwNw4EvoKnAhB4g8g","bibbaseid":"bronstein-bruna-cohen-velikovi-geometricdeeplearninggridsgroupsgraphsgeodesicsandgauges-2021","author_short":["Bronstein, M., M.","Bruna, J.","Cohen, T.","Veličković, P."],"bibdata":{"title":"Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges","type":"article","year":"2021","websites":"http://arxiv.org/abs/2104.13478","id":"1e903e3b-d888-3d47-9a2a-2c6cfb04c4e9","created":"2021-07-12T09:40:52.521Z","file_attached":"true","profile_id":"bfbbf840-4c42-3914-a463-19024f50b30c","group_id":"1ff583c0-be37-34fa-9c04-73c69437d354","last_modified":"2022-02-01T13:11:54.912Z","read":false,"starred":false,"authored":false,"confirmed":"true","hidden":false,"folder_uuids":"368c6572-df92-4840-8400-80e7c9ee2dd7,20ccb950-fef9-4ee1-800c-a60ba9f1df16","private_publication":false,"abstract":"The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provide principled way to build future architectures yet to be invented.","bibtype":"article","author":"Bronstein, Michael M. and Bruna, Joan and Cohen, Taco and Veličković, Petar","bibtex":"@article{\n title = {Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges},\n type = {article},\n year = {2021},\n websites = {http://arxiv.org/abs/2104.13478},\n id = {1e903e3b-d888-3d47-9a2a-2c6cfb04c4e9},\n created = {2021-07-12T09:40:52.521Z},\n file_attached = {true},\n profile_id = {bfbbf840-4c42-3914-a463-19024f50b30c},\n group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},\n last_modified = {2022-02-01T13:11:54.912Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n folder_uuids = {368c6572-df92-4840-8400-80e7c9ee2dd7,20ccb950-fef9-4ee1-800c-a60ba9f1df16},\n private_publication = {false},\n abstract = {The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. 
Indeed, many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact feasible with appropriate computational scale. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, whereby adapted, often hierarchical, features capture the appropriate notion of regularity for each task, and second, learning by local gradient-descent type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying low-dimensionality and structure of the physical world. This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. Such a 'geometric unification' endeavour, in the spirit of Felix Klein's Erlangen Program, serves a dual purpose: on one hand, it provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers. On the other hand, it gives a constructive procedure to incorporate prior physical knowledge into neural architectures and provide principled way to build future architectures yet to be invented.},\n bibtype = {article},\n author = {Bronstein, Michael M. and Bruna, Joan and Cohen, Taco and Veličković, Petar}\n}","author_short":["Bronstein, M., M.","Bruna, J.","Cohen, T.","Veličković, P."],"urls":{"Paper":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/6dc43280-25d5-1e29-369a-fbca621d1016/Geometric_deep_learning.pdf.pdf","Website":"http://arxiv.org/abs/2104.13478"},"biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","bibbaseid":"bronstein-bruna-cohen-velikovi-geometricdeeplearninggridsgroupsgraphsgeodesicsandgauges-2021","role":"author","metadata":{"authorlinks":{}}},"bibtype":"article","biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","dataSources":["KhfhF8P52iu5Szymq","CmHEoydhafhbkXXt5","N4kJAiLiJ7kxfNsoh","2252seNhipfTmjEBQ"],"keywords":[],"search_terms":["geometric","deep","learning","grids","groups","graphs","geodesics","gauges","bronstein","bruna","cohen","veličković"],"title":"Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges","year":2021}