On the importance of initialization and momentum in deep learning. Sutskever, I., Martens, J., Dahl, G., & Hinton, G. In 30th International Conference on Machine Learning, ICML 2013, 2013.
Paper: https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/1ad6f1e3-f93c-1e71-6397-9daa6acf392a/sutskever13.pdf.pdf
Abstract: Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods.
@article{sutskever2013importance,
title = {On the importance of initialization and momentum in deep learning},
type = {article},
year = {2013},
pages = {2176-2184},
abstract = {Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods. Copyright 2013 by the author(s).},
bibtype = {article},
author = {Sutskever, Ilya and Martens, James and Dahl, George and Hinton, Geoffrey},
journal = {30th International Conference on Machine Learning, ICML 2013},
number = {PART 3}
}
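As a quick illustration of the recipe the abstract describes (a well-designed scaled random initialization plus SGD whose momentum coefficient is raised slowly toward a large value), here is a minimal NumPy sketch. The initialization scale, the particular schedule, the learning rate, and the toy quadratic objective are assumptions chosen for illustration only; they are not the paper's exact settings or hyperparameters.

# Minimal sketch: scaled random init + SGD with a slowly increasing
# momentum schedule and a Nesterov-style look-ahead update.
# Constants and the schedule below are illustrative assumptions.
import numpy as np

def init_layer(fan_in, fan_out, scale=0.01, seed=0):
    # Small scaled Gaussian initialization (illustrative; the paper studies
    # carefully chosen schemes such as sparse initialization).
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, scale, size=(fan_in, fan_out))

def momentum_schedule(t, mu_max=0.99):
    # Slowly raise the momentum coefficient toward mu_max (assumed schedule,
    # standing in for the paper's "slowly increasing" schedule).
    return min(1.0 - 1.0 / (t // 250 + 2), mu_max)

def sgd_nesterov_step(theta, v, grad_fn, lr, mu):
    # One Nesterov-style momentum step: evaluate the gradient at the
    # look-ahead point, update the velocity, then apply it.
    g = grad_fn(theta + mu * v)
    v = mu * v - lr * g
    return theta + v, v

if __name__ == "__main__":
    W = init_layer(784, 500)          # e.g. the first layer of a deep net
    # Toy ill-conditioned quadratic f(theta) = 0.5 * theta^T A theta,
    # just to run the optimization loop end to end.
    A = np.diag([1.0, 100.0])
    grad_fn = lambda th: A @ th
    theta = np.array([1.0, 1.0])
    v = np.zeros_like(theta)
    for t in range(1000):
        mu = momentum_schedule(t)
        theta, v = sgd_nesterov_step(theta, v, grad_fn, lr=1e-3, mu=mu)
    print(theta)                      # should move toward the minimum at the origin

The look-ahead form of the update is one common way to write Nesterov momentum; ramping the momentum coefficient up gradually, rather than starting it large, is the scheduling idea the abstract emphasizes.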
{"_id":{"_str":"534211c16d78590a06000098"},"__v":8,"authorIDs":["2ACfCTBEv4pRPLwBb","4DQrsTmafKuPbvKom","5457dd852abc8e9f3700082c","547ccb28a29145d03f000113","5de76c4f179cbdde01000135","5de7b92fbc280fdf01000192","5de7e861c8f9f6df01000188","5de7ff309b61e8de0100005f","5de917d35d589edf01000025","5de93bf8b8c3f8de010000a3","5de95819d574c6de010000d5","5de96615d574c6de010001ab","5de9faf7fac96fde01000039","5dea1112fac96fde01000194","5deb75f49e04d1df010000c8","5deb8542b62591df0100002d","5deb946fb62591df010000ef","5decb37d93ac84df01000108","5dece9a3619535de010000f9","5dee20da584fb4df0100023f","5dee5ebb773914de01000077","5dee6b12773914de0100015a","5deea5af0ceb4cdf01000193","5deee4cc66e59ade01000133","5def23c6e83f7dde0100003c","5def2e39e83f7dde010000a6","5def601cfe2024de01000084","5defdd35090769df01000181","5df0938cf651f5df01000056","5df0980df651f5df010000a2","5df0c74096fa76de01000024","5df0eda045b054df010000fb","5df2008fe4cb4ede01000035","5df2583563aac8df010000ad","5df25ae963aac8df010000dd","5df28978cf8320de0100001f","5df3756223fb6fdf010000fe","5df38d112b1f8ade01000086","5df3f9cad1756cdf01000039","5df4ca0755b997de0100009a","5df4cd8055b997de010000c2","5df53e56fd245cde01000125","5df60b78a37a40df01000156","5df62fce38e915de0100004b","5df6491ddf30fcdf0100003d","5df67503797ba9de01000104","5df6983872bbd4df01000160","5df6b0e031a37ade01000178","5df789d35c8a36df010000f7","5df7c23392a8e4df010000da","5df7dafbdc100cde010000e1","5df7e65edc100cde010001c6","5df89d4010b1d1de01000088","5df8b0cee6b510df01000021","5df93745d04b27df01000185","5df9d77138a7afde01000084","5dfa483ced5baede0100011b","5dfa67a37d1403df01000123","5dfbc3f34705b7de01000022","5dfcc5cc7a3608de0100004f","5dfe49bfbfbabdde01000004","5e1dc9478d71ddde0100015d","5e29d9d0888177df0100011e","5e48c117f1ed39de0100008d","5e555c0ee89e5fde010000e6","5e55fa1c819fabdf0100003a","5e5b04db6e568ade0100001f","5hGMdsfN7BrXW6K8T","5vmPz2jJcYQdtZPiZ","6yoSqPPyPrLdz8e5Q","BYkXaBeGZENiggkom","Bm98SYMoSNDbYwKGj","EsmZfHTQHAoi4zrJ2","N6cuxqTfG9ybhWDqZ","PXRdnhZs2CXY9NLhX","Q7zrKooGeSy8NTBjC","QxWxCp32GcmNqJ9K2","WnMtdN4pbnNcAtJ9C","e3ZEg6YfZmhHyjxdZ","exw99o2vqr9d3BXtB","fnGMsMDrpkcjCLZ5X","gN5Lfqjgx8P4c7HJT","gxtJ9RRRnpW2hQdtv","hCHC3WLvySqxwH4eZ","jN4BRAzEpDg6bmHmM","mBpuinLcpSzpxcFaz","n3Tju5NZ6trek5XEM","n3hXojCsQTaqGTPyY","ovEhxZqGLG9hGfrun","rnZ6cT67qkowNdLgz","u6Fai3nvyHwLKZpPn","vcz5Swk9goZXRki2G","x9kDqsoXq57J2bEu5","xmZk6XEacSsFbo2Sy","xufS6EqKGDqRQs47H"],"author_short":["Sutskever, I.","Martens, J.","Dahl, G.","Hinton, G."],"bibbaseid":"sutskever-martens-dahl-hinton-ontheimportanceofinitializationandmomentumindeeplearning-2013","bibdata":{"title":"On the importance of initialization and momentum in deep learning","type":"article","year":"2013","pages":"2176-2184","id":"9cfc18e8-7e82-343f-a27f-96bd96409988","created":"2021-07-12T14:15:35.413Z","file_attached":"true","profile_id":"ad172e55-c0e8-3aa4-8465-09fac4d5f5c8","group_id":"1ff583c0-be37-34fa-9c04-73c69437d354","last_modified":"2021-07-12T14:17:10.846Z","read":false,"starred":false,"authored":false,"confirmed":"true","hidden":false,"folder_uuids":"85ed9c29-c272-40dc-a01a-f912101de83a","private_publication":false,"abstract":"Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. 
In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods. Copyright 2013 by the author(s).","bibtype":"article","author":"Sutskever, Ilya and Martens, James and Dahl, George and Hinton, Geoffrey","journal":"30th International Conference on Machine Learning, ICML 2013","number":"PART 3","bibtex":"@article{\n title = {On the importance of initialization and momentum in deep learning},\n type = {article},\n year = {2013},\n pages = {2176-2184},\n id = {9cfc18e8-7e82-343f-a27f-96bd96409988},\n created = {2021-07-12T14:15:35.413Z},\n file_attached = {true},\n profile_id = {ad172e55-c0e8-3aa4-8465-09fac4d5f5c8},\n group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},\n last_modified = {2021-07-12T14:17:10.846Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n folder_uuids = {85ed9c29-c272-40dc-a01a-f912101de83a},\n private_publication = {false},\n abstract = {Deep and recurrent neural networks (DNNs and RNNs respectively) are powerful models that were considered to be almost impossible to train using stochastic gradient descent with momentum. In this paper, we show that when stochastic gradient descent with momentum uses a well-designed random initialization and a particular type of slowly increasing schedule for the momentum parameter, it can train both DNNs and RNNs (on datasets with long-term dependencies) to levels of performance that were previously achievable only with Hessian-Free optimization. We find that both the initialization and the momentum are crucial since poorly initialized networks cannot be trained with momentum and well-initialized networks perform markedly worse when the momentum is absent or poorly tuned. Our success training these models suggests that previous attempts to train deep and recurrent neural networks from random initializations have likely failed due to poor initialization schemes. Furthermore, carefully tuned momentum methods suffice for dealing with the curvature issues in deep and recurrent network training objectives without the need for sophisticated second-order methods. 
Copyright 2013 by the author(s).},\n bibtype = {article},\n author = {Sutskever, Ilya and Martens, James and Dahl, George and Hinton, Geoffrey},\n journal = {30th International Conference on Machine Learning, ICML 2013},\n number = {PART 3}\n}","author_short":["Sutskever, I.","Martens, J.","Dahl, G.","Hinton, G."],"urls":{"Paper":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/1ad6f1e3-f93c-1e71-6397-9daa6acf392a/sutskever13.pdf.pdf"},"biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","bibbaseid":"sutskever-martens-dahl-hinton-ontheimportanceofinitializationandmomentumindeeplearning-2013","role":"author","metadata":{"authorlinks":{"hinton, g":"https://bibbase.org/show?bib=www.cs.toronto.edu/~fritz/master3.bib&theme=side"}},"downloads":2},"bibtype":"article","biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","downloads":2,"keywords":[],"search_terms":["importance","initialization","momentum","deep","learning","sutskever","martens","dahl","hinton"],"title":"On the importance of initialization and momentum in deep learning","year":2013,"dataSources":["avdRdTCKoXoyxo2tQ","GtChgCdrAm62yoP3L","C5FtkvWWggFfMJTFX","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}