Audio-driven human body motion analysis and synthesis. Ofli, F., Canton-Ferrer, C., Tilmanne, J., Demir, Y., Bozkurt, E., Yemez, Y., Erzin, E., & Tekalp, A. M. In Proceedings of the 33rd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2233-2236, Las Vegas, NV, March 30-April 4, 2008.
This paper presents a framework for audio-driven human body motion analysis and synthesis. We address the problem in the context of a dance performance, where gestures and movements of the dancer are mainly driven by a musical piece and characterized by the repetition of a set of dance figures. The system is trained in a supervised manner using the multiview video recordings of the dancer. The human body posture is extracted from multiview video information without any human intervention using a novel marker-based algorithm based on annealing particle filtering. Audio is analyzed to extract beat and tempo information. The joint analysis of audio and motion features provides a correlation model that is then used to animate a dancing avatar when driven with any musical piece of the same genre. Results are provided showing the effectiveness of the proposed algorithm.
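For readers who want to experiment with the audio-analysis step on their own material, the sketch below shows one common way to extract beat and tempo information from a recording. It uses the open-source librosa library rather than the authors' pipeline, and the input path is a placeholder; treat it as an illustrative approximation of the beat/tempo extraction described in the abstract, not the paper's implementation.

    # Illustrative beat/tempo extraction (librosa-based sketch; not the paper's code).
    import numpy as np
    import librosa

    # Load the musical piece; "dance_recording.wav" is a placeholder path.
    # librosa resamples to 22050 Hz by default.
    y, sr = librosa.load("dance_recording.wav")

    # Onset strength envelope: a frame-wise spectral-novelty measure,
    # the usual intermediate feature for beat tracking.
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)

    # Dynamic-programming beat tracker: returns a global tempo estimate (BPM)
    # and the frame indices of the detected beats.
    tempo, beat_frames = librosa.beat.beat_track(onset_envelope=onset_env, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])  # recent librosa versions may return an array

    # Convert beat frames to timestamps in seconds.
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)

    print(f"Estimated tempo: {tempo:.1f} BPM")
    print("First beat times (s):", np.round(beat_times[:8], 2))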
@inproceedings{ISI:000257456701220,
  author    = {Ofli, F. and Canton-Ferrer, C. and Tilmanne, J. and Demir, Y. and
               Bozkurt, E. and Yemez, Y. and Erzin, E. and Tekalp, A. M.},
  book-group-author = {IEEE},
  title     = {Audio-driven human body motion analysis and synthesis},
  booktitle = {2008 IEEE International Conference on Acoustics, Speech and
               Signal Processing, Vols 1-12},
  series    = {International Conference on Acoustics Speech and Signal
               Processing (ICASSP)},
  year      = {2008},
  pages     = {2233-2236},
  note      = {33rd IEEE International Conference on Acoustics, Speech and
               Signal Processing, Las Vegas, NV, MAR 30-APR 04, 2008},
  abstract  = {This paper presents a framework for audio-driven human body
               motion analysis and synthesis. We address the problem in the
               context of a dance performance, where gestures and movements of
               the dancer are mainly driven by a musical piece and
               characterized by the repetition of a set of dance figures. The
               system is trained in a supervised manner using the multiview
               video recordings of the dancer. The human body posture is
               extracted from multiview video information without any human
               intervention using a novel marker-based algorithm based on
               annealing particle filtering. Audio is analyzed to extract beat
               and tempo information. The joint analysis of audio and motion
               features provides a correlation model that is then used to
               animate a dancing avatar when driven with any musical piece of
               the same genre. Results are provided showing the effectiveness
               of the proposed algorithm.},
  doi       = {10.1109/ICASSP.2008.4518089},
  issn      = {1520-6149},
  isbn      = {978-1-4244-1483-3},
  researcherid-numbers = {Erzin, Engin/H-1716-2011},
  orcid-numbers = {Erzin, Engin/0000-0002-2715-2368},
  unique-id = {ISI:000257456701220},
}
