Convolutional Kernel Networks. Mairal, J., Koniusz, P., Harchaoui, Z., & Schmid, C. Technical Report arXiv:1406.3332, arXiv, November 2014.
Abstract: An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.
@techreport{mairal_convolutional_2014,
title = {Convolutional {Kernel} {Networks}},
url = {http://arxiv.org/abs/1406.3332},
abstract = {An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art.},
number = {arXiv:1406.3332},
urldate = {2022-05-23},
institution = {arXiv},
author = {Mairal, Julien and Koniusz, Piotr and Harchaoui, Zaid and Schmid, Cordelia},
month = nov,
year = {2014},
note = {arXiv:1406.3332 [cs, stat]},
keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Statistics - Machine Learning},
}