A Hierarchical Pose-Based Approach to Complex Action Understanding Using Dictionaries of Actionlets and Motion Poselets. Lillo, I., Niebles, J. C., & Soto, A. In CVPR, 2016.

In this paper, we introduce a new hierarchical model for human action recognition using body joint locations. Our model can categorize complex actions in videos, and perform spatio-temporal annotations of the atomic actions that compose the complex action being performed. That is, for each atomic action, the model generates temporal action annotations by estimating its starting and ending times, as well as spatial annotations by inferring the human body parts that are involved in executing the action. Our model includes three key novel properties: (i) it can be trained with no spatial supervision, as it can automatically discover active body parts from temporal action annotations only; (ii) it jointly learns flexible representations for motion poselets and actionlets that encode the visual variability of body parts and atomic actions; (iii) it includes a mechanism to discard idle or non-informative body parts, which increases its robustness to common pose estimation errors. We evaluate the performance of our method using multiple action recognition benchmarks. Our model consistently outperforms baselines and state-of-the-art action recognition methods.
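No reference code accompanies this entry. As a rough illustration of the "motion poselet" idea described in the abstract, the sketch below clusters per-frame body-part motion descriptors into a small dictionary and represents a video segment as a histogram over those clusters. This is a minimal bag-of-poselets baseline under stated assumptions, not the paper's hierarchical model: the descriptor layout (positions plus velocities), the body-part index set, and the cluster count k are arbitrary choices made for the example.

# Illustrative sketch only: a bag-of-motion-poselets baseline, NOT the
# paper's hierarchical model. Assumes per-frame 3D joint positions
# grouped into body parts; descriptor layout and k are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def part_descriptors(joints, part_idx):
    # Per-frame descriptor for one body part: joint positions
    # concatenated with frame-to-frame velocities.
    # joints: (T, J, 3) array of 3D joint locations over T frames.
    pos = joints[:, part_idx, :].reshape(len(joints), -1)
    vel = np.vstack([np.zeros((1, pos.shape[1])), np.diff(pos, axis=0)])
    return np.hstack([pos, vel])

def learn_poselet_dictionary(descriptor_list, k=50, seed=0):
    # Cluster pooled part descriptors; each centroid acts as one
    # "motion poselet" (a recurring body-part movement pattern).
    X = np.vstack(descriptor_list)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)

def poselet_histogram(kmeans, descriptors):
    # Normalized bag-of-poselets representation for a video segment.
    labels = kmeans.predict(descriptors)
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random data standing in for real skeleton sequences.
rng = np.random.default_rng(0)
videos = [rng.normal(size=(120, 15, 3)) for _ in range(4)]  # 15 joints
arm = [2, 3, 4]                      # hypothetical body-part index set
descs = [part_descriptors(v, arm) for v in videos]
km = learn_poselet_dictionary(descs, k=8)
print(poselet_histogram(km, descs[0]))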
@InProceedings{ lillo:etal:2016,
author = {I. Lillo and J.C. Niebles and A. Soto},
title = {A Hierarchical Pose-Based Approach to Complex Action
Understanding Using Dictionaries of Actionlets and Motion
Poselets},
booktitle = {{CVPR}},
year = {2016},
abstract = {In this paper, we introduce a new hierarchical model for
human action recognition using body joint locations. Our
model can categorize complex actions in videos, and perform
spatio-temporal annotations of the atomic actions that
compose the complex action being performed. That is, for
each atomic action, the model generates temporal action
annotations by estimating its starting and ending times, as
well as spatial annotations by inferring the human body
parts that are involved in executing the action. Our model
includes three key novel properties: (i) it can be trained
with no spatial supervision, as it can automatically
discover active body parts from temporal action annotations
only; (ii) it jointly learns flexible representations for
motion poselets and actionlets that encode the visual
variability of body parts and atomic actions; (iii) it
includes a mechanism to discard idle or non-informative body
parts, which increases its robustness to common pose estimation
errors. We evaluate the performance of our method using
multiple action recognition benchmarks. Our model
consistently outperforms baselines and state-of-the-art
action recognition methods.},
url = {http://saturno.ing.puc.cl/media/papers_alvaro/FinalVersionActivities-CVPR-2016.pdf}
}