Human action recognition in stereoscopic videos based on bag of features and disparity pyramids. Iosifidis, A., Tefas, A., Nikolaidis, N., & Pitas, I. In 2014 22nd European Signal Processing Conference (EUSIPCO), pages 1317-1321, Sep., 2014.
Abstract: In this paper, we propose a method for human action recognition in unconstrained environments based on stereoscopic videos. We describe a video representation scheme that exploits the enriched visual and disparity information that is available for such data. Each stereoscopic video is represented by multiple vectors, evaluated on video locations corresponding to different disparity zones. By using these vectors, multiple action descriptions can be determined that either correspond to specific disparity zones, or combine information appearing in different disparity zones in the classification phase. Experimental results denote that the proposed approach enhances action classification performance, when compared to the standard approach, and achieves state-of-the-art performance on the Hollywood 3D database designed for the recognition of complex actions in unconstrained environments.
@InProceedings{6952463,
author = {A. Iosifidis and A. Tefas and N. Nikolaidis and I. Pitas},
booktitle = {2014 22nd European Signal Processing Conference (EUSIPCO)},
title = {Human action recognition in stereoscopic videos based on bag of features and disparity pyramids},
year = {2014},
pages = {1317-1321},
abstract = {In this paper, we propose a method for human action recognition in unconstrained environments based on stereoscopic videos. We describe a video representation scheme that exploits the enriched visual and disparity information that is available for such data. Each stereoscopic video is represented by multiple vectors, evaluated on video locations corresponding to different disparity zones. By using these vectors, multiple action descriptions can be determined that either correspond to specific disparity zones, or combine information appearing in different disparity zones in the classification phase. Experimental results denote that the proposed approach enhances action classification performance, when compared to the standard approach, and achieves state-of-the-art performance on the Hollywood 3D database designed for the recognition of complex actions in unconstrained environments.},
keywords = {image classification;image motion analysis;image representation;stereo image processing;video signal processing;human action recognition;stereoscopic videos;bag of features;disparity pyramids;unconstrained environments;video representation scheme;disparity information;visual information;multiple vectors;video locations;disparity zones;multiple action descriptions;classification phase;action classification performance enhancement;Hollywood 3D database;Videos;Stereo image processing;Databases;Three-dimensional displays;Cameras;Vectors;Computer vision;Human Action Recognition;Stereoscopic Videos;Disparity Pyramids;Bag of Features},
issn = {2076-1465},
month = {Sep.},
url = {https://www.eurasip.org/proceedings/eusipco/eusipco2014/html/papers/1569917287.pdf},
}