ImageNet Classification with Deep Convolutional Neural Networks. Krizhevsky, A., Sutskever, I., & Hinton, G. E. Technical Report.
We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
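The architecture described above (five convolutional layers interleaved with max-pooling, three fully-connected layers, non-saturating ReLU units, and dropout on the fully-connected layers) can be sketched in a few lines. The snippet below is a minimal PyTorch approximation, not the authors' original cuda-convnet implementation; kernel sizes and channel counts follow the commonly cited AlexNet configuration, and details such as local response normalization and the original two-GPU layer split are omitted.

```python
import torch
import torch.nn as nn


class AlexNetSketch(nn.Module):
    """Five convolutional layers (some followed by max-pooling) and three
    fully-connected layers ending in a 1000-way classifier, as described
    in the abstract; roughly 60 million parameters."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            # Conv 1, followed by max-pooling; ReLU is the "non-saturating neuron"
            nn.Conv2d(3, 96, kernel_size=11, stride=4),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # Conv 2, followed by max-pooling
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # Convs 3-5; only the last is followed by pooling
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            # Dropout regularizes the fully-connected layers, as in the paper
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # softmax is applied by the loss function
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)


if __name__ == "__main__":
    # A 227x227 input makes the spatial arithmetic exact (the paper reports 224x224)
    logits = AlexNetSketch()(torch.randn(1, 3, 227, 227))
    print(logits.shape)  # torch.Size([1, 1000])
```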
@techreport{krizhevsky2012imagenet,
 title = {ImageNet Classification with Deep Convolutional Neural Networks},
 url = {http://code.google.com/p/cuda-convnet/},
 abstract = {We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.},
 author = {Krizhevsky, Alex and Sutskever, Ilya and Hinton, Geoffrey E}
}
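For reference, the top-1 and top-5 error rates quoted in the abstract are the standard ImageNet metrics: a prediction counts as top-5 correct when the true label appears among the model's five highest-scoring classes. A minimal sketch of the computation, again assuming PyTorch tensors of class scores and integer labels:

```python
import torch


def topk_error(logits: torch.Tensor, labels: torch.Tensor, k: int = 5) -> float:
    """Fraction of examples whose true label is not among the top-k predictions."""
    topk = logits.topk(k, dim=1).indices             # (N, k) highest-scoring classes
    hit = (topk == labels.unsqueeze(1)).any(dim=1)   # (N,) true label in top-k?
    return 1.0 - hit.float().mean().item()
```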
