October, 2023. arXiv:2310.20360 [cs, math, stat]


This book aims to provide an introduction to the topic of deep learning algorithms. We review essential components of deep learning algorithms in full mathematical detail including different artificial neural network (ANN) architectures (such as fully-connected feedforward ANNs, convolutional ANNs, recurrent ANNs, residual ANNs, and ANNs with batch normalization) and different optimization algorithms (such as the basic stochastic gradient descent (SGD) method, accelerated methods, and adaptive methods). We also cover several theoretical aspects of deep learning algorithms such as approximation capacities of ANNs (including a calculus for ANNs), optimization theory (including Kurdyka-Łojasiewicz inequalities), and generalization errors. In the last part of the book some deep learning approximation methods for PDEs are reviewed including physics-informed neural networks (PINNs) and deep Galerkin methods. We hope that this book will be useful for students and scientists who do not yet have any background in deep learning at all and would like to gain a solid foundation, as well as for practitioners who would like to obtain a firmer mathematical understanding of the objects and methods considered in deep learning.

@misc{jentzen_mathematical_2023,
  title = {Mathematical {Introduction} to {Deep} {Learning}: {Methods}, {Implementations}, and {Theory}},
  shorttitle = {Mathematical {Introduction} to {Deep} {Learning}},
  url = {http://arxiv.org/abs/2310.20360},
  doi = {10.48550/arXiv.2310.20360},
  abstract = {This book aims to provide an introduction to the topic of deep learning algorithms. We review essential components of deep learning algorithms in full mathematical detail including different artificial neural network (ANN) architectures (such as fully-connected feedforward ANNs, convolutional ANNs, recurrent ANNs, residual ANNs, and ANNs with batch normalization) and different optimization algorithms (such as the basic stochastic gradient descent (SGD) method, accelerated methods, and adaptive methods). We also cover several theoretical aspects of deep learning algorithms such as approximation capacities of ANNs (including a calculus for ANNs), optimization theory (including Kurdyka-{\L}ojasiewicz inequalities), and generalization errors. In the last part of the book some deep learning approximation methods for PDEs are reviewed including physics-informed neural networks (PINNs) and deep Galerkin methods. We hope that this book will be useful for students and scientists who do not yet have any background in deep learning at all and would like to gain a solid foundation as well as for practitioners who would like to obtain a firmer mathematical understanding of the objects and methods considered in deep learning.},
  urldate = {2023-11-14},
  publisher = {arXiv},
  author = {Jentzen, Arnulf and Kuckuck, Benno and von Wurstemberger, Philippe},
  month = oct,
  year = {2023},
  note = {arXiv:2310.20360 [cs, math, stat]},
  keywords = {68T07, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, Mathematics - Numerical Analysis, Mathematics - Probability, Statistics - Machine Learning},
}
