Action Recognition in Video Using Sparse Coding and Relative Features. Alfaro, A., Mery, D., & Soto, A. In CVPR, 2016.
Abstract. This work presents an approach to category-based action recognition in video using sparse coding techniques. The proposed approach makes two main contributions: i) a new method to handle intra-class variations by decomposing each video into a reduced set of representative atomic action acts, or key-sequences, and ii) a new video descriptor, ITRA: Inter-Temporal Relational Act Descriptor, that exploits the power of comparative reasoning to capture relative similarity relations among key-sequences. To obtain the key-sequences, we introduce a loss function that, for each video, identifies a sparse set of representative key-frames capturing both the relevant particularities of the input video and the relevant generalities of the complete class collection. To obtain the ITRA descriptor, we introduce a novel scheme to quantify relative intra- and inter-class similarities among local temporal patterns arising in the videos. The resulting ITRA descriptor proves highly effective at discriminating among action categories. As a result, the proposed approach achieves remarkable action recognition performance on several popular benchmark datasets, outperforming alternative state-of-the-art techniques by a large margin.
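The key-frame selection step lends itself to a small illustration. The sketch below is not the authors' loss function (which also balances per-video particularities against class-level generalities); it is a minimal sparse self-representation baseline, assuming per-frame descriptors have already been extracted. Frames whose codes are used most often to sparsely reconstruct the remaining frames are kept as representatives.

```python
# Minimal sketch (not the paper's method): sparse self-representation
# for key-frame selection. Each frame descriptor is reconstructed as a
# sparse combination of the other frames; frames that many others rely
# on score highly and are returned as key-frames.
import numpy as np
from sklearn.linear_model import Lasso

def select_key_frames(frames, n_keys=5, alpha=0.1):
    """frames: (n_frames, d) array of per-frame descriptors (assumed input).
    Approximately solves min ||x_i - sum_j a_j x_j||^2 + alpha*||a||_1
    for each frame i, excluding the frame itself from its own code."""
    X = np.asarray(frames, dtype=float)
    n = X.shape[0]
    scores = np.zeros(n)
    for i in range(n):
        others = np.delete(X, i, axis=0)          # candidate atoms
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(others.T, X[i])                 # sparse code for frame i
        idx = np.delete(np.arange(n), i)
        scores[idx] += np.abs(lasso.coef_)        # credit contributing frames
    return np.argsort(scores)[::-1][:n_keys]      # most-used frames win
```

The "comparative reasoning" behind ITRA can likewise be illustrated with an ordinal encoding: describe a key-sequence not by its raw similarities to a set of reference key-sequences, but by the relative order of those similarities. The pairwise-rank construction below is an assumption made for illustration, not the paper's exact descriptor.

```python
# Hedged illustration of a relative (ordinal) feature: only the order
# relations sim(x, r_i) > sim(x, r_j) enter the descriptor.
import numpy as np

def relative_descriptor(key_seq, references):
    """key_seq: (d,) descriptor; references: (m, d) reference key-sequences
    (hypothetical inputs, e.g. drawn from every class)."""
    sims = references @ key_seq        # dot-product similarity to each reference
    i, j = np.triu_indices(len(sims), k=1)
    return (sims[i] > sims[j]).astype(np.uint8)
```

Because only order relations survive, such a descriptor is invariant to any monotonic rescaling of the underlying similarity scores, which is the property that makes rank-based features robust to intra-class variation.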
@InProceedings{ anali:etal:2016,
author = {A. Alfaro and D. Mery and A. Soto},
title = {Action Recognition in Video Using Sparse Coding and
Relative Features},
booktitle = {{CVPR}},
year = {2016},
  abstract   = {This work presents an approach to category-based action
                recognition in video using sparse coding techniques. The
                proposed approach makes two main contributions: i) a new
                method to handle intra-class variations by decomposing each
                video into a reduced set of representative atomic action
                acts, or key-sequences, and ii) a new video descriptor,
                ITRA: Inter-Temporal Relational Act Descriptor, that
                exploits the power of comparative reasoning to capture
                relative similarity relations among key-sequences. To obtain
                the key-sequences, we introduce a loss function that, for
                each video, identifies a sparse set of representative
                key-frames capturing both the relevant particularities of
                the input video and the relevant generalities of the
                complete class collection. To obtain the ITRA descriptor, we
                introduce a novel scheme to quantify relative intra- and
                inter-class similarities among local temporal patterns
                arising in the videos. The resulting ITRA descriptor proves
                highly effective at discriminating among action categories.
                As a result, the proposed approach achieves remarkable
                action recognition performance on several popular benchmark
                datasets, outperforming alternative state-of-the-art
                techniques by a large margin.},
url = {http://saturno.ing.puc.cl/media/papers_alvaro/FinalVersion-Anali-CVPR-2016.pdf}
}
{"_id":"TBF63cq5eHEWxvRLD","bibbaseid":"alfaro-mery-soto-actionrecognitioninvideousingsparsecodingandrelativefeatures-2016","downloads":1,"creationDate":"2016-04-26T22:15:37.225Z","title":"Action Recognition in Video Using Sparse Coding and Relative Features","author_short":["Alfaro, A.","Mery, D.","Soto, A."],"year":2016,"bibtype":"inproceedings","biburl":"https://raw.githubusercontent.com/ialab-puc/ialab.ing.puc.cl/master/pubs.bib","bibdata":{"bibtype":"inproceedings","type":"inproceedings","author":[{"firstnames":["A."],"propositions":[],"lastnames":["Alfaro"],"suffixes":[]},{"firstnames":["D."],"propositions":[],"lastnames":["Mery"],"suffixes":[]},{"firstnames":["A."],"propositions":[],"lastnames":["Soto"],"suffixes":[]}],"title":"Action Recognition in Video Using Sparse Coding and Relative Features","booktitle":"CVPR","year":"2016","abstract":"This work presents an approach to category-based action recognition in video using sparse coding techniques. The proposed approach includes two main contributions: i) A new method to handle intra-class variations by decomposing each video into a reduced set of representative atomic action acts or key-sequences, and ii) A new video descriptor, ITRA: Inter-Temporal Relational Act Descriptor, that exploits the power of comparative reasoning to capture relative similarity relations among key-sequences. In terms of the method to obtain key-sequences, we introduce a loss function that, for each video, leads to the identification of a sparse set of representative key-frames capturing both, relevant particularities arising in the input video, as well as relevant generalities arising in the complete class collection. In terms of the method to obtain the ITRA descriptor, we introduce a novel scheme to quantify relative intra and inter-class similarities among local temporal patterns arising in the videos. The resulting ITRA descriptor demonstrates to be highly effective to discriminate among action categories. As a result, the proposed approach reaches remarkable action recognition performance on several popular benchmark datasets, outperforming alternative state-of-the-art techniques by a large margin.","url":"http://saturno.ing.puc.cl/media/papers_alvaro/FinalVersion-Anali-CVPR-2016.pdf","bibtex":"@InProceedings{\t anali:etal:2016,\n author\t= {A. Alfaro and D. Mery and A. Soto},\n title\t\t= {Action Recognition in Video Using Sparse Coding and\n\t\t Relative Features},\n booktitle\t= {{CVPR}},\n year\t\t= {2016},\n abstract\t= {This work presents an approach to category-based action\n\t\t recognition in video using sparse coding techniques. The\n\t\t proposed approach includes two main contributions: i) A new\n\t\t method to handle intra-class variations by decomposing each\n\t\t video into a reduced set of representative atomic action\n\t\t acts or key-sequences, and ii) A new video descriptor,\n\t\t ITRA: Inter-Temporal Relational Act Descriptor, that\n\t\t exploits the power of comparative reasoning to capture\n\t\t relative similarity relations among key-sequences. In terms\n\t\t of the method to obtain key-sequences, we introduce a loss\n\t\t function that, for each video, leads to the identification\n\t\t of a sparse set of representative key-frames capturing\n\t\t both, relevant particularities arising in the input video,\n\t\t as well as relevant generalities arising in the complete\n\t\t class collection. 
In terms of the method to obtain the ITRA\n\t\t descriptor, we introduce a novel scheme to quantify\n\t\t relative intra and inter-class similarities among local\n\t\t temporal patterns arising in the videos. The resulting ITRA\n\t\t descriptor demonstrates to be highly effective to\n\t\t discriminate among action categories. As a result, the\n\t\t proposed approach reaches remarkable action recognition\n\t\t performance on several popular benchmark datasets,\n\t\t outperforming alternative state-of-the-art techniques by a\n\t\t large margin.},\n url\t\t= {http://saturno.ing.puc.cl/media/papers_alvaro/FinalVersion-Anali-CVPR-2016.pdf}\n}\n\n","author_short":["Alfaro, A.","Mery, D.","Soto, A."],"key":"anali:etal:2016","id":"anali:etal:2016","bibbaseid":"alfaro-mery-soto-actionrecognitioninvideousingsparsecodingandrelativefeatures-2016","role":"author","urls":{"Paper":"http://saturno.ing.puc.cl/media/papers_alvaro/FinalVersion-Anali-CVPR-2016.pdf"},"metadata":{"authorlinks":{"mery, d":"https://domingomery.ing.puc.cl/testing/","soto, a":"https://asoto.ing.puc.cl/publications/"}}},"search_terms":["action","recognition","video","using","sparse","coding","relative","features","alfaro","mery","soto"],"keywords":[],"authorIDs":["24TkaxcPXqc7t8oo2","32ZR23o2BFySHbtQK","3ear6KFZSRqbj6YeT","4Pq6KLaQ8jKGXHZWH","5456ff2b8b01c819300000f6","54578d9a2abc8e9f370004f0","5de77464021482de01000070","5e126ca5a4cabfdf01000053","5e158f76f1f31adf01000118","5e16174bf67f7dde010003ad","5e1c779a92587bde0100013e","5e1f631ae8f5ddde010000eb","5e1f7182e8f5ddde010001ff","5e221d6c71dcf8df0100002b","5e253d35561b8fde0100008c","5e26da3642065ede01000066","5e28810c67e11edf0100008d","5e3acefaf2a00cdf010001c8","5e533ea853de8dde01000042","5e54fee296ed20df0100003d","5e62c3aecb259cde010000f9","5e65830c6e5f4cf3010000e7","5e666dfc46e828de010002c9","6cMBYieMJhf6Nd58M","6w6sGsxYSK2Quk6yZ","7xDcntrrtC62vkWM5","ARw5ReidxxZii9TTZ","BjzM7QpRCG7uCF7Zf","DQ4JRTTWkvKXtCNCp","DR8QRotRb5C33T8Rb","FDHBz8WWdRXnWmR7h","GbYBJvxugXMriQwbi","HhRoRmBvwWfD4oLyK","JFk6x26H6LZMoht2n","JvArGGu5qM6EvSCvB","LpqQBhFH3PxepH9KY","MT4TkSGzAp69M3dGt","NKHZCaBn3zKAa5ocL","Pxb9bPepYv65oXxi9","QFECgvB5v2i4j2Qzs","RKv56Kes3h6FwEa55","Rb9TkQ3KkhGAaNyXq","RdND8NxcJDsyZdkcK","SpKJ5YujbHKZnHc4v","TSRdcx4bbYKqcGbDg","W8ogS2GJa6sQKy26c","WTi3X2fT8dzBN5d8b","WfZbctNQYDBaiYW6n","XZny8xuqwfoxzhBCB","Xk2Q5qedS5MFHvjEW","ayym4ZHJzF7jQcz8Q","bbARiTJLYS79ZMFbk","cBxsyeZ37EucQeBYK","cFyFQps7W3Sa2Wope","d6Qqa2JYQxcTSLaGE","dGRBfr8zhMmbwK6eP","dmtweywdYxkx3KjDt","eRLgwkrEk7T7Lmzmf","fMYSCX8RMZap548vv","fN7f2faCKTWJvL9rb","g6iKCQCFnJgKYYHaP","h2WsvS5g4tT8oqpZH","h2hTcQYuf2PB3oF8t","h83jBvZYJPJGutQrs","jAtuJBcGhng4Lq2Nd","pMoo2gotJcdDPwfrw","q5Zunk5Y2ruhw5vyq","rzNGhqxkbt2MvGY29","syJh4acsFmZGsFfQS","uC8ATA8AfngWpYLBq","uoJ7BKv28Q6TtPmPp","vMiJzqEKCsBxBEa3v","vQE6iTPpjxpuLip2Z","wQDRsDjhgpMJDGxWX","wbNg79jvDpzX9zHLK","wk86BgRiooBjy323E","zCbPxKnQGgDHiHMWn","zf9HENjsAzdWLMDAu"],"dataSources":["3YPRCmmijLqF4qHXd","xvobHwzyqavc6cCEz","sg6yZ29Z2xB5xP79R","sj4fjnZAPkEeYdZqL","m8qFBfFbjk9qWjcmJ","QjT2DEZoWmQYxjHXS","68BuKygEnwqbDeD59"]}