Deep Residual Learning for Image Recognition. He, K., Ren, S., Sun, J., & Zhang, X. ArXiv e-prints, December, 2015.
@article{He:2015tt,
author = {He, Kaiming and Ren, Shaoqing and Sun, Jian and Zhang, Xiangyu},
title = {{Deep Residual Learning for Image Recognition}},
journal = {ArXiv e-prints},
year = {2015},
volume = {cs.CV},
month = dec,
annote = {ResNet is a classic example of making the network easier to train, rather than finding better optimization methods.

In spirit, it's similar to batch normalization: the key to the performance improvement is not having more parameters, nor allowing more types of functions to be learned (at least that is not the focus), but having better learning dynamics.

When the number of channels differs between a block's input and output, the shortcut uses a 1x1 convolution (a learned projection), rather than the identity mapping, to match dimensions before the pointwise addition.
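
The projection shortcut can be sketched as follows. This is a minimal NumPy illustration, not the paper's code; `conv1x1`, `residual_block`, and the weight shapes are my own illustrative assumptions (a 1x1 convolution is just a per-pixel linear map across channels).

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W); w: (C_out, C_in).
    # A 1x1 convolution applies the same channel-mixing matrix at
    # every spatial location.
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def residual_block(x, f, w_proj=None):
    # f: the residual function F(x); w_proj: optional 1x1 projection
    # used when F(x) has a different channel count than x.
    fx = f(x)
    shortcut = x if w_proj is None else conv1x1(x, w_proj)
    return np.maximum(fx + shortcut, 0.0)  # ReLU after the addition

# Example: input has 2 channels, residual branch outputs 4 channels,
# so the shortcut is projected from 2 to 4 channels.
x = np.arange(18, dtype=float).reshape(2, 3, 3)
w_proj = np.ones((4, 2))
y = residual_block(x, lambda t: np.zeros((4, 3, 3)), w_proj)
```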

See the follow-up paper [Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027) for improved residual block designs, if you really want to go deep.},
keywords = {classics, deep learning},
read = {Yes},
rating = {5},
date-added = {2017-02-28T18:22:36GMT},
date-modified = {2017-03-27T20:01:33GMT},
url = {http://arxiv.org/abs/1512.03385},
local-url = {file://localhost/Users/yimengzh/Documents/Papers3_revised/Library.papers3/Articles/2015/He/arXiv%202015%20He.pdf},
file = {{arXiv 2015 He.pdf:/Users/yimengzh/Documents/Papers3_revised/Library.papers3/Articles/2015/He/arXiv 2015 He.pdf:application/pdf}},
uri = {\url{papers3://publication/uuid/F8608028-F58E-4705-A893-0E485C161AF1}}
}
