Human Action Recognition Using a Temporal Hierarchy of Covariance Descriptors on 3D Joint Locations. Hussein, M. E., Torki, M., Gowayyed, M. A., & El-Saban, M. In Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
Human action recognition from videos is a challenging machine vision task with multiple important application domains, such as human-robot/machine interaction, interactive entertainment, multimedia information retrieval, and surveillance. In this paper, we present a novel approach to human action recognition from 3D skeleton sequences extracted from depth data. We use the covariance matrix for skeleton joint locations over time as a discriminative descriptor for a sequence. To encode the relationship between joint movement and time, we deploy multiple covariance matrices over sub-sequences in a hierarchical fashion. The descriptor has a fixed length that is independent from the length of the described sequence. Our experiments show that using the covariance descriptor with an off-the-shelf classification algorithm outperforms the state of the art in action recognition on multiple datasets, captured either via a Kinect-type sensor or a sophisticated motion capture system. We also include an evaluation on a novel large dataset using our own annotation.
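The descriptor described in the abstract (a covariance matrix of 3D joint locations over time, computed over a hierarchy of sub-sequences and flattened into a fixed-length vector) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window lengths, overlap scheme, and hierarchy depth here are assumptions chosen to mirror the paper's general idea of halving sub-sequences with 50% overlap.

```python
import numpy as np

def cov_descriptor(seq):
    """Covariance of joint coordinates over frames.

    seq: (T, D) array with T frames and D = 3 * num_joints flattened
    coordinates. Returns the upper triangle of the D x D covariance
    matrix as a fixed-length vector (covariance is symmetric, so the
    upper triangle carries all the information).
    """
    C = np.cov(seq, rowvar=False)           # (D, D) covariance over time
    iu = np.triu_indices(C.shape[0])
    return C[iu]                            # length D * (D + 1) // 2

def temporal_hierarchy_descriptor(seq, levels=3):
    """Concatenate covariance descriptors over a temporal hierarchy.

    At level l the sequence is split into windows of length ~T / 2**l
    with 50% overlap (an assumed scheme), so the final descriptor length
    depends only on D and `levels`, never on the sequence length T.
    """
    T = seq.shape[0]
    parts = []
    for l in range(levels):
        win = max(T // (2 ** l), 2)         # window length at this level
        step = max(win // 2, 1)             # 50% overlap between windows
        for start in range(0, T - win + 1, step):
            parts.append(cov_descriptor(seq[start:start + win]))
    return np.concatenate(parts)
```

With this overlap scheme, level `l` contributes `2**(l+1) - 1` windows, so a 3-level hierarchy yields 11 covariance blocks regardless of how many frames the sequence has, matching the abstract's claim of a fixed-length, sequence-length-independent descriptor.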
@inproceedings{hussein_human_2013,
	title = {Human Action Recognition Using a Temporal Hierarchy of Covariance Descriptors on 3D Joint Locations},
	url = {https://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6869},
	abstract = {Human action recognition from videos is a challenging machine vision task with multiple important application domains, such as human-robot/machine interaction, interactive entertainment, multimedia information retrieval, and surveillance. In this paper, we present a novel approach to human action recognition from 3D skeleton sequences extracted from depth data. We use the covariance matrix for skeleton joint locations over time as a discriminative descriptor for a sequence. To encode the relationship between joint movement and time, we deploy multiple covariance matrices over sub-sequences in a hierarchical fashion. The descriptor has a fixed length that is independent from the length of the described sequence. Our experiments show that using the covariance descriptor with an off-the-shelf classification algorithm outperforms the state of the art in action recognition on multiple datasets, captured either via a Kinect-type sensor or a sophisticated motion capture system. We also include an evaluation on a novel large dataset using our own annotation.},
	eventtitle = {Twenty-Third International Joint Conference on Artificial Intelligence},
	booktitle = {Twenty-Third International Joint Conference on Artificial Intelligence},
	author = {Hussein, Mohamed E. and Torki, Marwan and Gowayyed, Mohammad A. and El-Saban, Motaz},
	urldate = {2019-05-01},
	date = {2013-06-30},
	year = {2013},
	langid = {english},
}
