ImageNet Classification with Deep Convolutional Neural Networks
Krizhevsky, A., Sutskever, I., & Hinton, G. E.
Technical Report

Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
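The top-1 and top-5 error rates quoted in the abstract follow the standard top-k convention: an example counts as correct if the true label is among the model's k highest-scored classes. A minimal pure-Python sketch (the scores and labels below are toy data for illustration, not the paper's):

```python
def topk_error(scores, labels, k):
    """Fraction of examples whose true label is NOT among the k highest-scored classes."""
    wrong = 0
    for row, label in zip(scores, labels):
        # indices of the k classes with the highest scores
        topk = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        if label not in topk:
            wrong += 1
    return wrong / len(labels)

# Toy example: 3 examples, 4 classes (the actual task has 1000 classes)
scores = [
    [0.1, 0.6, 0.2, 0.1],  # highest score: class 1
    [0.5, 0.1, 0.3, 0.1],  # highest score: class 0
    [0.2, 0.2, 0.5, 0.1],  # highest score: class 2
]
labels = [1, 2, 2]  # true classes

print(topk_error(scores, labels, 1))  # top-1 error: 1/3 (second example is wrong)
print(topk_error(scores, labels, 2))  # top-2 error: 0.0 (paper reports top-5)
```

Top-5 error is the metric ILSVRC ranks entries by, which is why the abstract reports both numbers.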
@techreport{krizhevsky2012imagenet,
title = {ImageNet Classification with Deep Convolutional Neural Networks},
author = {Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E.},
type = {techreport},
url = {http://code.google.com/p/cuda-convnet/},
abstract = {We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.}
}
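The "dropout" regularization the abstract credits with reducing overfitting in the fully-connected layers randomly zeroes units during training. The sketch below uses NumPy and the "inverted" scaling variant common today (an assumption: the original report instead scaled activations at test time, and the drop rate and shapes here are illustrative):

```python
import numpy as np

def dropout(activations, p_drop, rng, train=True):
    """Zero each unit with probability p_drop during training.

    Survivors are scaled by 1/(1 - p_drop) so the expected
    activation is unchanged ("inverted dropout"); at test time
    the input passes through untouched.
    """
    if not train:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones((4, 8))  # toy fully-connected activations
out = dropout(h, p_drop=0.5, rng=rng)
print(out)  # roughly half the units zeroed, survivors scaled by 2
```

Because each forward pass samples a different mask, the network cannot rely on any single unit, which is the intuition behind the regularizing effect.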