Human action interpretation using convolutional neural network: a survey. Malik, Z. & Shapiai, M. I. B. Machine Vision and Applications, 33(3):37, March 2022. doi: 10.1007/s00138-022-01291-0

Abstract: Human action interpretation (HAI) is one of the trending domains in computer vision. It can be divided into human action recognition (HAR) and human action detection (HAD). HAR analyzes frames and assigns label(s) to the overall video, whereas HAD first localizes the actor in each frame and then estimates an action score for the detected region. The effectiveness of a HAI model depends heavily on the representation of spatiotemporal features and on the model's architectural design. Various studies have addressed the effective representation of these features, and different deep architectures have been proposed to learn them and to derive action scores from them. Among deep architectures, the convolutional neural network (CNN) is the most widely explored for HAI because of its lower computational cost. Several surveys have provided overviews of these efforts, but none focuses in detail on feature representation and architectural design, and none covers pose-assisted HAI techniques. This study provides a more detailed survey of existing CNN-based HAI techniques, covering both frame-level and pose-level spatiotemporal feature-based approaches. It also offers a comparative study of the publicly available datasets used to evaluate HAI models across different spatiotemporal feature representations. Finally, it discusses the limitations and challenges of HAI and concludes that interpreting human action from visual data still falls far short of true interpretation in realistic videos, which are continuous in nature and may contain multiple people performing multiple actions sequentially or in parallel.
@article{malik_human_2022,
title = {Human action interpretation using convolutional neural network: a survey},
volume = {33},
issn = {1432-1769},
shorttitle = {Human action interpretation using convolutional neural network},
url = {https://doi.org/10.1007/s00138-022-01291-0},
doi = {10.1007/s00138-022-01291-0},
abstract = {Human action interpretation (HAI) is one of the trending domains in the era of computer vision. It can further be divided into human action recognition (HAR) and human action detection (HAD). The HAR analyzes frames and provides label(s) to overall video, whereas the HAD localizes actor first, in each frame, and then estimates the action score for the detected region. The effectiveness of a HAI model is highly dependent on the representation of spatiotemporal features and the model’s architectural design. For the effective representation of these features, various studies have been carried out. Moreover, to better learn these features and to get the action score on the basis of these features, different designs of deep architectures have also been proposed. Among various deep architectures, convolutional neural network (CNN) is relatively more explored for HAI due to its lesser computational cost. To provide overview of these efforts, various surveys have been published to date; however, none of these surveys is focusing the features’ representation and design of proposed architectures in detail. Secondly, none of these studies is focusing the pose assisted HAI techniques. This study provides a more detailed survey on existing CNN-based HAI techniques by incorporating the frame level as well as pose level spatiotemporal features-based techniques. Besides these, it offers comparative study on different publicly available datasets used to evaluate HAI models based on various spatiotemporal features’ representations. Furthermore, it also discusses the limitations and challenges of the HAI and concludes that human action interpretation from visual data is still very far from the actual interpretation of human action in realistic videos which are continuous in nature and may contain multiple human beings performing multiple actions sequentially or in parallel.},
language = {en},
number = {3},
urldate = {2022-03-25},
journal = {Machine Vision and Applications},
author = {Malik, Zainab and Shapiai, Mohd Ibrahim Bin},
month = mar,
year = {2022},
pages = {37},
}
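The abstract distinguishes HAR (one label, or set of labels, for a whole clip) from HAD (per-frame actor localization followed by action scoring for the detected region). The minimal PyTorch sketch below only illustrates that distinction; it is not an architecture from the surveyed paper, and the layer sizes, class count, and toy box regressor are hypothetical placeholders.

```python
# Illustrative sketch only: contrasts the HAR and HAD pipelines described in the
# abstract. All model choices here are hypothetical, not taken from the survey.
import torch
import torch.nn as nn

NUM_ACTIONS = 10  # hypothetical number of action classes


class TinyHAR(nn.Module):
    """HAR: assign action scores to the clip as a whole via a small 3D CNN."""
    def __init__(self, num_actions=NUM_ACTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # pool over time and space
        )
        self.classifier = nn.Linear(16, num_actions)

    def forward(self, clip):                    # clip: (B, 3, T, H, W)
        x = self.features(clip).flatten(1)
        return self.classifier(x)               # clip-level action scores


class TinyHAD(nn.Module):
    """HAD: localize the actor in each frame, then score the action for that region."""
    def __init__(self, num_actions=NUM_ACTIONS):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.box_head = nn.Linear(16, 4)        # toy per-frame (x1, y1, x2, y2) regressor
        self.action_head = nn.Linear(16, num_actions)

    def forward(self, frames):                  # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))      # (B*T, 16, H, W)
        pooled = feats.mean(dim=(2, 3))                  # one descriptor per frame
        boxes = self.box_head(pooled).view(B, T, 4)      # per-frame actor boxes
        scores = self.action_head(pooled).view(B, T, -1) # per-frame action scores
        return boxes, scores


if __name__ == "__main__":
    clip = torch.randn(1, 3, 8, 64, 64)                  # one 8-frame RGB clip
    print(TinyHAR()(clip).shape)                         # torch.Size([1, 10])
    frames = clip.permute(0, 2, 1, 3, 4)                 # reorder to (B, T, 3, H, W)
    boxes, scores = TinyHAD()(frames)
    print(boxes.shape, scores.shape)                     # (1, 8, 4) (1, 8, 10)
```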
{"_id":"gjddFnLtBwxr7kKsf","bibbaseid":"malik-shapiai-humanactioninterpretationusingconvolutionalneuralnetworkasurvey-2022","author_short":["Malik, Z.","Shapiai, M. I. B."],"bibdata":{"bibtype":"article","type":"article","title":"Human action interpretation using convolutional neural network: a survey","volume":"33","issn":"1432-1769","shorttitle":"Human action interpretation using convolutional neural network","url":"https://doi.org/10.1007/s00138-022-01291-0","doi":"10.1007/s00138-022-01291-0","abstract":"Human action interpretation (HAI) is one of the trending domains in the era of computer vision. It can further be divided into human action recognition (HAR) and human action detection (HAD). The HAR analyzes frames and provides label(s) to overall video, whereas the HAD localizes actor first, in each frame, and then estimates the action score for the detected region. The effectiveness of a HAI model is highly dependent on the representation of spatiotemporal features and the model’s architectural design. For the effective representation of these features, various studies have been carried out. Moreover, to better learn these features and to get the action score on the basis of these features, different designs of deep architectures have also been proposed. Among various deep architectures, convolutional neural network (CNN) is relatively more explored for HAI due to its lesser computational cost. To provide overview of these efforts, various surveys have been published to date; however, none of these surveys is focusing the features’ representation and design of proposed architectures in detail. Secondly, none of these studies is focusing the pose assisted HAI techniques. This study provides a more detailed survey on existing CNN-based HAI techniques by incorporating the frame level as well as pose level spatiotemporal features-based techniques. Besides these, it offers comparative study on different publicly available datasets used to evaluate HAI models based on various spatiotemporal features’ representations. Furthermore, it also discusses the limitations and challenges of the HAI and concludes that human action interpretation from visual data is still very far from the actual interpretation of human action in realistic videos which are continuous in nature and may contain multiple human beings performing multiple actions sequentially or in parallel.","language":"en","number":"3","urldate":"2022-03-25","journal":"Machine Vision and Applications","author":[{"propositions":[],"lastnames":["Malik"],"firstnames":["Zainab"],"suffixes":[]},{"propositions":[],"lastnames":["Shapiai"],"firstnames":["Mohd","Ibrahim","Bin"],"suffixes":[]}],"month":"March","year":"2022","pages":"37","bibtex":"@article{malik_human_2022,\n\ttitle = {Human action interpretation using convolutional neural network: a survey},\n\tvolume = {33},\n\tissn = {1432-1769},\n\tshorttitle = {Human action interpretation using convolutional neural network},\n\turl = {https://doi.org/10.1007/s00138-022-01291-0},\n\tdoi = {10.1007/s00138-022-01291-0},\n\tabstract = {Human action interpretation (HAI) is one of the trending domains in the era of computer vision. It can further be divided into human action recognition (HAR) and human action detection (HAD). The HAR analyzes frames and provides label(s) to overall video, whereas the HAD localizes actor first, in each frame, and then estimates the action score for the detected region. 
The effectiveness of a HAI model is highly dependent on the representation of spatiotemporal features and the model’s architectural design. For the effective representation of these features, various studies have been carried out. Moreover, to better learn these features and to get the action score on the basis of these features, different designs of deep architectures have also been proposed. Among various deep architectures, convolutional neural network (CNN) is relatively more explored for HAI due to its lesser computational cost. To provide overview of these efforts, various surveys have been published to date; however, none of these surveys is focusing the features’ representation and design of proposed architectures in detail. Secondly, none of these studies is focusing the pose assisted HAI techniques. This study provides a more detailed survey on existing CNN-based HAI techniques by incorporating the frame level as well as pose level spatiotemporal features-based techniques. Besides these, it offers comparative study on different publicly available datasets used to evaluate HAI models based on various spatiotemporal features’ representations. Furthermore, it also discusses the limitations and challenges of the HAI and concludes that human action interpretation from visual data is still very far from the actual interpretation of human action in realistic videos which are continuous in nature and may contain multiple human beings performing multiple actions sequentially or in parallel.},\n\tlanguage = {en},\n\tnumber = {3},\n\turldate = {2022-03-25},\n\tjournal = {Machine Vision and Applications},\n\tauthor = {Malik, Zainab and Shapiai, Mohd Ibrahim Bin},\n\tmonth = mar,\n\tyear = {2022},\n\tpages = {37},\n}\n\n\n\n","author_short":["Malik, Z.","Shapiai, M. I. B."],"key":"malik_human_2022","id":"malik_human_2022","bibbaseid":"malik-shapiai-humanactioninterpretationusingconvolutionalneuralnetworkasurvey-2022","role":"author","urls":{"Paper":"https://doi.org/10.1007/s00138-022-01291-0"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"https://bibbase.org/zotero/mh_lenguyen","dataSources":["iwKepCrWBps7ojhDx"],"keywords":[],"search_terms":["human","action","interpretation","using","convolutional","neural","network","survey","malik","shapiai"],"title":"Human action interpretation using convolutional neural network: a survey","year":2022}