Learning Shared, Discriminative, and Compact Representations for Visual Recognition. Lobel, H., Vidal, R., & Soto, A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
Abstract: Dictionary-based and part-based methods are among the most popular approaches to visual recognition. In both methods, a mid-level representation is built on top of low-level image descriptors and high-level classifiers are trained on top of the mid-level representation. While earlier methods built the mid-level representation without supervision, there is currently great interest in learning both representations jointly to make the mid-level representation more discriminative. In this work we propose a new approach to visual recognition that jointly learns a shared, discriminative, and compact mid-level representation and a compact high-level representation. By using a structured output learning framework, our approach directly handles the multiclass case at both levels of abstraction. Moreover, by using a group-sparse prior in the structured output learning framework, our approach encourages sharing of visual words and thus reduces the number of words used to represent each class. We test our proposed method on several popular benchmarks. Our results show that, by jointly learning mid- and high-level representations, and fostering the sharing of discriminative visual words among target classes, we are able to achieve state-of-the-art recognition performance using far less visual words than previous approaches.
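As a rough sketch only (the notation below is illustrative and not taken from the paper's actual formulation), the joint learning described in the abstract can be written as a regularized structured-output objective. Let D denote the mid-level dictionary of visual words, W the high-level multiclass weights grouped by word, F(x, y; W, D) a compatibility score between image x and label y computed through the mid-level encoding induced by D, and \Delta a multiclass loss. One then minimizes

  \min_{W, D} \; \sum_i \max_{y \ne y_i} \big[ \Delta(y_i, y) + F(x_i, y; W, D) - F(x_i, y_i; W, D) \big]_+ \; + \; \lambda \sum_k \| W_k \|_2,

where the \ell_{2,1} penalty \sum_k \| W_k \|_2 over word-indexed groups W_k drives entire visual words to zero unless they are useful to several classes, which is one way to obtain the shared and compact representation the abstract refers to.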
@Article{ lobel:etal:2015,
author = {H. Lobel and R. Vidal and A. Soto},
title = {Learning Shared, Discriminative, and Compact
Representations for Visual Recognition},
journal = {{IEEE} Transactions on Pattern Analysis and Machine
Intelligence},
volume = {37},
number = {11},
year = {2015},
abstract = {Dictionary-based and part-based methods are among the most
popular approaches to visual recognition. In both methods,
a mid-level representation is built on top of low-level
image descriptors and high-level classifiers are trained on
top of the mid-level representation. While earlier methods
built the mid-level representation without supervision,
there is currently great interest in learning both
representations jointly to make the mid-level
representation more discriminative. In this work we propose
a new approach to visual recognition that jointly learns a
shared, discriminative, and compact mid-level
representation and a compact high-level representation. By
using a structured output learning framework, our approach
directly handles the multiclass case at both levels of
abstraction. Moreover, by using a group-sparse prior in the
structured output learning framework, our approach
encourages sharing of visual words and thus reduces the
number of words used to represent each class. We test our
proposed method on several popular benchmarks. Our results
show that, by jointly learning mid- and high-level
representations, and fostering the sharing of
discriminative visual words among target classes, we are
able to achieve state-of-the-art recognition performance
using far less visual words than previous approaches.},
url = {http://saturno.ing.puc.cl/media/papers_alvaro/Hans-FINAL-PAMI-2015.pdf}
}