Biomechanical-based approach to data augmentation for one-shot gesture recognition. Cabrera, M. E. & Wachs, J. P. In Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018, pages 38-44, May 2018. IEEE.
@inproceedings{Cabrera2018b,
 title = {Biomechanical-based approach to data augmentation for one-shot gesture recognition},
 type = {inproceedings},
 year = {2018},
 keywords = {Biomechanics,Data Augmentation,Gesture Recognition,One Shot Learning},
 pages = {38-44},
 websites = {https://ieeexplore.ieee.org/document/8373809/},
 month = {5},
 publisher = {IEEE},
 id = {918b6d5f-0ad5-3fa8-a859-d33ae13c47a8},
 created = {2021-06-04T19:36:48.413Z},
 accessed = {2018-07-18},
 file_attached = {false},
 profile_id = {f6c02e5e-2d2f-3786-8fa8-871d32fc2b9b},
 last_modified = {2021-06-07T19:16:58.326Z},
 read = {false},
 starred = {false},
 authored = {true},
 confirmed = {true},
 hidden = {false},
 citation_key = {Cabrera2018b},
 folder_uuids = {b43d1b86-b425-4322-b575-14547700e015},
 private_publication = {false},
 abstract = {Most common approaches to one-shot gesture recognition have leveraged mainly conventional machine learning solutions and image-based data augmentation techniques, ignoring the mechanisms that are used by humans to perceive and execute gestures, a key contextual component in this process. The novelty of this work consists of modeling the process that leads to the creation of gestures, rather than observing the gesture alone. In this approach, the context considered involves the way in which humans produce the gestures - the kinematic and biomechanical characteristics associated with gesture production and execution. By understanding the main 'modes' of variation we can replicate the single observation many times. Consequently, the main strategy proposed in this paper includes generating a data set of human-like examples based on 'naturalistic' features extracted from a single gesture sample while preserving fundamentally human characteristics like visual saliency, smooth transitions and economy of motion. The availability of a large data set of realistic samples allows the use of state-of-the-art classifiers for further recognition. Several classifiers were trained and their recognition accuracies were assessed and compared to previous one-shot learning approaches. An average recognition accuracy of 95% among all classifiers highlights the relevance of keeping the human 'in the loop' to effectively achieve one-shot gesture recognition.},
 bibtype = {inproceedings},
 author = {Cabrera, Maria Eugenia and Wachs, Juan Pablo},
 booktitle = {Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018}
}
