

@Article{Ilg2004,
  author    = {Ilg, W. and Bakir, G. H. and Mezger, J. and Giese, M. A.},
  title     = {On the Representation, Learning and Transfer of Spatio-Temporal Movement Characteristics},
  journal   = {International Journal of Humanoid Robotics},
  year      = {2004},
  volume    = {1},
  pages     = {613--636},
  abstract  = {In this paper we present a learning-based approach for the modelling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMMs), we derive a hierarchical algorithm that, in a first step, automatically identifies movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied to the modelling and synthesis of complex sequences of human movements that contain movement elements with variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.},
  groups    = {STAT841, IROS2014, EMBC2014},
  keywords  = {Segmentation},
  review    = {Ilg2004
INTRODUCTION
- Need to be able to learn motion from small amounts of training data
- Need to learn a whole class of movements from small sample sets
- Spatio-Temporal Morphable Models (STMMs) can do this
- Weights spatial and temporal displacement fields
- Already used in many areas, such as computer vision and graphics
- Applied so far to short motions (one gait cycle, one arm movement, etc.)
- This paper extends STMMs to represent motion primitives: 1) unsupervised movement primitive identification, 2) their approximation by STMMs, 3) automatic concatenation of motions

HIERARCHICAL SPATIO-TEMPORAL MORPHABLE MODELS (HSTMM)
- Breaking down a complex motion into primitives is not new
- Need to look at both perception and generation
  - perception: robust identification of primitives
  - generation: primitives should define generic blocks of motion
- Want something robust and fast
- Uses zero-velocity crossings (ZVC) to get a general idea of where segments are
- Matches segments to primitives using dynamic programming

Identifying Movement Primitives
- Need to look for primitives in noisy data
- Use dynamic programming (breaking the problem down into smaller subproblems)
- Look for matches between prototype movements and the search window by comparing key features
- Compare the key features in the search window against the key features of the prototype (which are...?) and match by minimizing distance
- What about extra/missing features? Instead of requiring all key features to match up, only a subset needs to match

Morphable Models
- Uses dynamic time warping (a non-linear transformation of time data)
- Looks at movements of feature points (as opposed to all DOFs?)
- Spatial and temporal shifts allow comparison of motions across different subjects and movement timings
- Minimizes a weighted sum of quadratic terms (i.e., least squares), in two steps:
  - dynamic programming: temporally sample the system and optimize over the samples (discrete)
  - then take a linear interpolation between sampled points and optimize that (continuous)
- After this transformation process, discontinuities and artifacts can appear, but they can be addressed
- The HSTMM segmentation gives 'L' points in the data
- Resample the original data in each segment area, after shifting it (linearly) so segments start and end at zero
- "linear combination of elements" (not too sure what this means)
- "rewarping and concatenation..." (not sure what this means either)
- The rest of the paper presents examples

Ilg \etal \cite{Ilg2004} employed DTW in a multi-tier fashion. The observation signal is dimensionally reduced by removing all data points that are not at a velocity zero, as velocity zeros denote turning points in the motion and can thus be considered key features of the motion. DTW is performed on this reduced dataset to align the key features. A tolerance is included to allow for missed key features, since the template and the observation may not have the same velocity zeros; this reduces the number of mapping singularities. Each window in these high-level segments is resampled to have the same number of data points, and a finer alignment is performed within each window. Assuming that the observed motion can be warped to the template with the proper temporal and spatial warping, the algorithm uses DTW to calculate an optimal temporal mapping path between the template and the observation, and applies interpolated shifting around the suggested mapping path to minimize the temporal difference between the two signals. Once the optimal time warping is found, the spatial offset can be calculated. The temporal and spatial warping variables are also constrained to minimize the amount of warping required to obtain the best fit \cite{Giese1999}. The algorithm was implemented as part of a motion generation algorithm, not specifically for segmentation, so segmentation and identification accuracy were not reported. Like previous DTW algorithms, it does not address the computational cost of dynamic programming, and thus may not scale well to higher dimensions.},
  timestamp = {2011.01.17},
}
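% The review above describes a two-stage alignment: reduce each trajectory to its
% zero-velocity crossings (key features), then align the reduced sequences with DTW.
% A minimal Python sketch of that pipeline follows; the function names, the sine-wave
% example, and the 1-D simplification are illustrative assumptions, not code from the paper.

```python
import numpy as np

def zero_velocity_keypoints(x):
    """Indices where the discrete velocity changes sign (turning points / ZVCs)."""
    signs = np.sign(np.diff(x))
    # a crossing occurs where consecutive velocity samples differ in sign
    return [i + 1 for i in range(len(signs) - 1)
            if signs[i] != signs[i + 1] and signs[i] != 0]

def dtw(a, b):
    """Classical dynamic-programming DTW; returns total cost and the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack from (n, m) to recover the optimal mapping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# toy example: same motion with different timing
t = np.linspace(0, 2 * np.pi, 50)
template = np.sin(t)
observed = np.sin(1.2 * t)
k_t = zero_velocity_keypoints(template)   # key features of the prototype
k_o = zero_velocity_keypoints(observed)   # key features of the observation
cost, path = dtw(template[k_t], observed[k_o])
```

% This sketches only the reduced-dataset alignment; the paper's finer per-window
% alignment and the tolerance for missed key features are not reproduced here.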
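% The notes mention "linear combination of elements" without unpacking it. The STMM
% core idea is that, once prototypes are time-warped onto a common time base, a new
% movement style is synthesized as a weighted sum of the prototypes' displacement
% fields relative to a reference. A minimal sketch, assuming 1-D trajectories and
% illustrative names (`morph`, `proto_a`, `proto_b` are not from the paper):

```python
import numpy as np

def morph(reference, prototypes, weights):
    """Linearly combine displacement fields of time-aligned prototypes.

    reference  : (T,) array, reference trajectory
    prototypes : list of (T,) arrays, already warped to the reference timing
    weights    : list of floats, typically summing to 1
    """
    displacement = sum(w * (p - reference) for w, p in zip(weights, prototypes))
    return reference + displacement

T = 100
t = np.linspace(0, 1, T)
ref = np.sin(2 * np.pi * t)
proto_a = ref + 0.2      # style A: vertical offset
proto_b = 1.5 * ref      # style B: larger amplitude
# interpolate halfway between the two styles
blend = morph(ref, [proto_a, proto_b], [0.5, 0.5])
```

% In the paper the combination covers spatial and temporal displacement fields
% jointly; this sketch shows only the spatial half of that idea.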
