Human Action Recognition from Inter-Temporal Dictionaries of Key-Sequences. Alfaro, A., Mery, D., & Soto, A. In 6th Pacific-Rim Symposium on Image and Video Technology (PSIVT), 2013.

This paper addresses human action recognition in video by proposing a method based on three main processing steps. First, we tackle problems related to intra-class variations and differences in video lengths. We achieve this by reducing an input video to a set of key-sequences that represent atomic, meaningful acts of each action class. Second, we use sparse coding techniques to learn a representation for each key-sequence. We then join these representations while preserving information about temporal relationships. We believe this is a key step of our approach because it provides not only a suitable shared representation to characterize atomic acts, but also encodes global temporal consistency among these acts. Accordingly, we call this representation the inter-temporal acts descriptor. Third, we use this representation and sparse coding techniques to classify new videos. Finally, we show that our approach outperforms several state-of-the-art methods when tested on common benchmarks.
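The classification scheme in the third step can be illustrated with a minimal sketch: a sparse-coding classifier that holds one learned dictionary per action class and assigns a test descriptor to the class whose dictionary reconstructs it with the smallest residual. This is not the authors' implementation; the greedy matching-pursuit encoder and all function names below are illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit (illustrative, not the paper's
    solver): approximate descriptor x with at most k atoms of dictionary D."""
    residual = x.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # Re-fit coefficients over all selected atoms (least squares).
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def classify(class_dictionaries, x, k=3):
    """Assign x to the class whose dictionary reconstructs it best
    (smallest sparse-reconstruction residual)."""
    errors = {}
    for label, D in class_dictionaries.items():
        code = omp(D, x, k)
        errors[label] = np.linalg.norm(x - D @ code)
    return min(errors, key=errors.get)

# Toy usage: two random per-class dictionaries with unit-norm atoms.
rng = np.random.default_rng(0)
D_walk = rng.standard_normal((20, 10))
D_walk /= np.linalg.norm(D_walk, axis=0)
D_jump = rng.standard_normal((20, 10))
D_jump /= np.linalg.norm(D_jump, axis=0)

# A descriptor built from two "walk" atoms is recovered as "walk".
x = 2.0 * D_walk[:, 1] + 1.5 * D_walk[:, 4]
print(classify({"walk": D_walk, "jump": D_jump}, x))
```

In the paper's pipeline the dictionaries would be learned from inter-temporal acts descriptors of training videos; here random atoms stand in only to show the residual-based decision rule.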
@InProceedings{ alfaro:etal:2013,
author = { A. Alfaro and D. Mery and A. Soto},
title = {Human Action Recognition from Inter-Temporal Dictionaries
of Key-Sequences},
booktitle = {6th Pacific-Rim Symposium on Image and Video Technology,
PSIVT},
year = {2013},
  abstract = {This paper addresses human action recognition in video
              by proposing a method based on three main processing
              steps. First, we tackle problems related to intra-class
              variations and differences in video lengths. We achieve
              this by reducing an input video to a set of key-sequences
              that represent atomic, meaningful acts of each action
              class. Second, we use sparse coding techniques to learn
              a representation for each key-sequence. We then join
              these representations while preserving information about
              temporal relationships. We believe this is a key step of
              our approach because it provides not only a suitable
              shared representation to characterize atomic acts, but
              also encodes global temporal consistency among these
              acts. Accordingly, we call this representation the
              inter-temporal acts descriptor. Third, we use this
              representation and sparse coding techniques to classify
              new videos. Finally, we show that our approach
              outperforms several state-of-the-art methods when tested
              on common benchmarks.},
url = {http://saturno.ing.puc.cl/media/papers_alvaro/Anali-PSIVT-13.pdf}
}
Downloads: 14
{"_id":{"_str":"534276550e946d920a001164"},"__v":7,"authorIDs":["24TkaxcPXqc7t8oo2","32ZR23o2BFySHbtQK","3ear6KFZSRqbj6YeT","4Pq6KLaQ8jKGXHZWH","5456ff2b8b01c819300000f6","54578d9a2abc8e9f370004f0","5de77464021482de01000070","5e126ca5a4cabfdf01000053","5e158f76f1f31adf01000118","5e16174bf67f7dde010003ad","5e1c779a92587bde0100013e","5e1f631ae8f5ddde010000eb","5e1f7182e8f5ddde010001ff","5e221d6c71dcf8df0100002b","5e253d35561b8fde0100008c","5e26da3642065ede01000066","5e28810c67e11edf0100008d","5e3acefaf2a00cdf010001c8","5e533ea853de8dde01000042","5e54fee296ed20df0100003d","5e62c3aecb259cde010000f9","5e65830c6e5f4cf3010000e7","5e666dfc46e828de010002c9","6cMBYieMJhf6Nd58M","6w6sGsxYSK2Quk6yZ","7xDcntrrtC62vkWM5","ARw5ReidxxZii9TTZ","BjzM7QpRCG7uCF7Zf","DQ4JRTTWkvKXtCNCp","DR8QRotRb5C33T8Rb","FDHBz8WWdRXnWmR7h","GbYBJvxugXMriQwbi","HhRoRmBvwWfD4oLyK","JFk6x26H6LZMoht2n","JvArGGu5qM6EvSCvB","LpqQBhFH3PxepH9KY","MT4TkSGzAp69M3dGt","NKHZCaBn3zKAa5ocL","Pxb9bPepYv65oXxi9","QFECgvB5v2i4j2Qzs","RKv56Kes3h6FwEa55","Rb9TkQ3KkhGAaNyXq","RdND8NxcJDsyZdkcK","SpKJ5YujbHKZnHc4v","TSRdcx4bbYKqcGbDg","W8ogS2GJa6sQKy26c","WTi3X2fT8dzBN5d8b","WfZbctNQYDBaiYW6n","XZny8xuqwfoxzhBCB","Xk2Q5qedS5MFHvjEW","ayym4ZHJzF7jQcz8Q","bbARiTJLYS79ZMFbk","cBxsyeZ37EucQeBYK","cFyFQps7W3Sa2Wope","d6Qqa2JYQxcTSLaGE","dGRBfr8zhMmbwK6eP","dmtweywdYxkx3KjDt","eRLgwkrEk7T7Lmzmf","fMYSCX8RMZap548vv","fN7f2faCKTWJvL9rb","g6iKCQCFnJgKYYHaP","h2WsvS5g4tT8oqpZH","h2hTcQYuf2PB3oF8t","h83jBvZYJPJGutQrs","jAtuJBcGhng4Lq2Nd","pMoo2gotJcdDPwfrw","q5Zunk5Y2ruhw5vyq","rzNGhqxkbt2MvGY29","syJh4acsFmZGsFfQS","uC8ATA8AfngWpYLBq","vMiJzqEKCsBxBEa3v","vQE6iTPpjxpuLip2Z","wQDRsDjhgpMJDGxWX","wbNg79jvDpzX9zHLK","wk86BgRiooBjy323E","zCbPxKnQGgDHiHMWn","zf9HENjsAzdWLMDAu"],"author_short":["Alfaro, A.","Mery, D.","Soto, 
A."],"bibbaseid":"alfaro-mery-soto-humanactionrecognitionfromintertemporaldictionariesofkeysequences-2013","bibdata":{"bibtype":"inproceedings","type":"inproceedings","author":[{"firstnames":["A."],"propositions":[],"lastnames":["Alfaro"],"suffixes":[]},{"firstnames":["D."],"propositions":[],"lastnames":["Mery"],"suffixes":[]},{"firstnames":["A."],"propositions":[],"lastnames":["Soto"],"suffixes":[]}],"title":"Human Action Recognition from Inter-Temporal Dictionaries of Key-Sequences","booktitle":"6th Pacific-Rim Symposium on Image and Video Technology, PSIVT","year":"2013","abstract":"This paper addresses the human action recognition in video by proposing a method based on three main processing steps. First, we tackle problems related to intraclass variations and differences in video lengths. We achieve this by reducing an input video to a set of key-sequences that represent atomic meaningful acts of each action class. Second, we use sparse coding techniques to learn a representation for each key-sequence. We then join these representations still preserving information about temporal relationships. We believe that this is a key step of our approach because it provides not only a suitable shared rep resentation to characterize atomic acts, but it also encodes global tem poral consistency among these acts. Accordingly, we call this represen tation inter-temporal acts descriptor. Third, we use this representation and sparse coding techniques to classify new videos. Finally, we show that, our approach outperforms several state-of-the-art methods when is tested using common benchmarks.","url":"http://saturno.ing.puc.cl/media/papers_alvaro/Anali-PSIVT-13.pdf","bibtex":"@InProceedings{\t alfaro:etal:2013,\n author\t= { A. Alfaro and D. Mery and A. 
Soto},\n title\t\t= {Human Action Recognition from Inter-Temporal Dictionaries\n\t\t of Key-Sequences},\n booktitle\t= {6th Pacific-Rim Symposium on Image and Video Technology,\n\t\t PSIVT},\n year\t\t= {2013},\n abstract\t= {This paper addresses the human action recognition in video\n\t\t by proposing a method based on three main processing steps.\n\t\t First, we tackle problems related to intraclass variations\n\t\t and differences in video lengths. We achieve this by\n\t\t reducing an input video to a set of key-sequences that\n\t\t represent atomic meaningful acts of each action class.\n\t\t Second, we use sparse coding techniques to learn a\n\t\t representation for each key-sequence. We then join these\n\t\t representations still preserving information about temporal\n\t\t relationships. We believe that this is a key step of our\n\t\t approach because it provides not only a suitable shared rep\n\t\t resentation to characterize atomic acts, but it also\n\t\t encodes global tem poral consistency among these acts.\n\t\t Accordingly, we call this represen tation inter-temporal\n\t\t acts descriptor. Third, we use this representation and\n\t\t sparse coding techniques to classify new videos. 
Finally,\n\t\t we show that, our approach outperforms several\n\t\t state-of-the-art methods when is tested using common\n\t\t benchmarks.},\n url\t\t= {http://saturno.ing.puc.cl/media/papers_alvaro/Anali-PSIVT-13.pdf}\n}\n\n","author_short":["Alfaro, A.","Mery, D.","Soto, A."],"key":"alfaro:etal:2013","id":"alfaro:etal:2013","bibbaseid":"alfaro-mery-soto-humanactionrecognitionfromintertemporaldictionariesofkeysequences-2013","role":"author","urls":{"Paper":"http://saturno.ing.puc.cl/media/papers_alvaro/Anali-PSIVT-13.pdf"},"metadata":{"authorlinks":{"mery, d":"https://domingomery.ing.puc.cl/testing/","soto, a":"https://asoto.ing.puc.cl/publications/"}},"downloads":14},"bibtype":"inproceedings","biburl":"https://raw.githubusercontent.com/ialab-puc/ialab.ing.puc.cl/master/pubs.bib","downloads":14,"keywords":[],"search_terms":["human","action","recognition","inter","temporal","dictionaries","key","sequences","alfaro","mery","soto"],"title":"Human Action Recognition from Inter-Temporal Dictionaries of Key-Sequences","year":2013,"dataSources":["sg6yZ29Z2xB5xP79R","sj4fjnZAPkEeYdZqL","m8qFBfFbjk9qWjcmJ","QjT2DEZoWmQYxjHXS","68BuKygEnwqbDeD59"]}