On the Representation, Learning and Transfer of Spatio-Temporal Movement Characteristics. Ilg, W., Bakir, G. H., Mezger, J., & Giese, M. A. International Journal of Humanoid Robotics, 1:613--636, 2004. In this paper we present a learning-based approach for the modelling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMM), we derive a hierarchical algorithm that, in a first step, automatically identifies movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied for modelling and synthesis of complex sequences of human movements that contain movement elements with variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.
@Article{Ilg2004,
author = {Ilg, W. and Bakir, G. H. and Mezger, J. and Giese, M. A.},
title = {On the Representation, Learning and Transfer of Spatio-Temporal Movement Characteristics},
journal = {International Journal of Humanoid Robotics},
year = {2004},
volume = {1},
pages = {613--636},
abstract = {In this paper we present a learning-based approach for the modelling of complex movement sequences. Based on the method of Spatio-Temporal Morphable Models (STMM), we derive a hierarchical algorithm that, in a first step, automatically identifies movement elements in movement sequences based on a coarse spatio-temporal description, and in a second step models these movement primitives by approximation through linear combinations of learned example movement trajectories. We describe the different steps of the algorithm and show how it can be applied for modelling and synthesis of complex sequences of human movements that contain movement elements with variable style. The proposed method is demonstrated on different applications of movement representation relevant for imitation learning of movement styles in humanoid robotics.},
groups = {STAT841, IROS2014, EMBC2014},
keywords = {Segmentation},
review = {Ilg2004
INTRODUCTION
- Need to be able to learn motion from small amounts of training data
- Need to learn a whole class of movement from small sample sets
- Spatio-Temporal Morphable Models (STMM) can do this
- Represents movements as weighted combinations of spatial and temporal displacement fields
- Already used in several areas, such as computer vision and computer graphics
- So far applied to short movements (a single gait cycle, a single arm movement, etc.)
- This paper extends STMMs to represent motion primitives:
1) unsupervised identification of movement primitives
2) their approximation by STMMs
3) automatic concatenation of the resulting motions
HIERARCHICAL SPATIO-TEMPORAL MORPHABLE MODELS (HSTMM)
- Breaking down a complex motion into primitives is not new
- Need to look at both perception and generation
- perception: robust identification of primitives
- generation: primitives should define generic building blocks of motion
- want something robust and fast
- uses zero-velocity crossings (ZVC) to get a coarse idea of where segment boundaries lie (see the sketch after this list)
- matches the resulting segments to primitives using dynamic programming
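A minimal Python sketch of what ZVC-based coarse segmentation might look like (my own illustration; the function name, the (T, D) joint-angle input format and the relative speed threshold are assumptions, not the paper's exact criterion):

import numpy as np

def zvc_boundaries(angles, dt=1.0, rel_thresh=0.02):
    # angles: (T, D) array of joint angles over time (assumed input format).
    # Returns frame indices where the overall joint speed drops near zero,
    # i.e. candidate boundaries for the coarse segmentation.
    vel = np.gradient(angles, dt, axis=0)       # per-joint velocities
    speed = np.linalg.norm(vel, axis=1)         # overall movement speed
    below = speed < rel_thresh * speed.max()    # near-zero-velocity frames
    # take the first frame of each near-stop interval as a boundary candidate
    return [t for t in range(1, len(speed)) if below[t] and not below[t - 1]]

In practice one would probably also merge boundaries that fall too close together, or test per-DOF zero crossings instead of the overall speed; the sketch only captures the "general idea of where segments are" step.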
Identifying Movement Primitives
- Need to look for primitives in noisy data
- Uses dynamic programming (breaking the search down into smaller subproblems)
- looks for matches between prototype movements and a search window by comparing key features
- the key features appear to be the velocity-zero points from the coarse segmentation, extracted from both the search window and the prototype
- matching is done by minimizing the distance between corresponding key features
- what about extra/missing features?
- instead of requiring every key feature to match up, only a subset needs to match, which tolerates extra or missing features (see the sketch after this list)
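A sketch of key-feature matching with dynamic programming that allows features to be skipped on either side (illustrative only; the descriptor arrays, the Euclidean distance and the skip penalty are my assumptions, not the paper's cost function):

import numpy as np

def match_key_features(proto, window, skip_cost=1.0):
    # proto, window: (N, K) and (M, K) arrays of key-feature descriptors
    # (e.g. poses at velocity zeros). Features may be skipped on either side,
    # so an extra or missing velocity zero does not break the match.
    n, m = len(proto), len(window)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if np.isinf(D[i, j]):
                continue
            if i < n and j < m:                    # match proto[i] with window[j]
                cost = np.linalg.norm(proto[i] - window[j])
                D[i + 1, j + 1] = min(D[i + 1, j + 1], D[i, j] + cost)
            if i < n:                              # skip a prototype feature
                D[i + 1, j] = min(D[i + 1, j], D[i, j] + skip_cost)
            if j < m:                              # skip an observed feature
                D[i, j + 1] = min(D[i, j + 1], D[i, j] + skip_cost)
    return D[n, m]                                 # low cost = good primitive match

In the paper's setting, a search window would presumably be accepted as an instance of a primitive when this kind of cost falls below a threshold; the exact descriptors and acceptance rule are not reproduced here.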
Morphable Models
- Uses dynamic time warping (a non-linear warping of the time axis; sketched further below)
- looks at movements of feature points (as opposed to all DOFs?)
- spatial and temporal shifts allow comparison of motions across different subjects and movement timings
- minimizes a weighted sum of quadratic terms (i.e. least squares)
- done in two steps
- dynamic programming: sample the trajectories in time and optimize over the discrete samples
- then linearly interpolate between the sampled points and optimize the continuous problem
- however, this warping process can leave discontinuities and artifacts
- but these can be addressed
- the HSTMM segmentation gives 'L' points in the data
- resample the original data within each segment after linearly shifting it so each segment starts and ends at zero
- "linear combination of elements" presumably means that new movements are synthesized by blending the spatial and temporal displacement fields of the example primitives (see the sketch after this list)
- "rewarping and concatenation..." presumably refers to warping the blended primitives back to their natural timing and stitching them into a full sequence
The rest of the paper presents example applications.
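The summary below describes the multi-tier DTW in prose; as a concrete reference point, here is a minimal classical DTW sketch in Python (my own illustration, not the authors' implementation). It computes the accumulated alignment cost and the optimal temporal mapping path between a template and an observation; the residual spatial offsets can then be read off along that path.

import numpy as np

def dtw_path(template, observation):
    # Classical DTW between two (T, D) trajectories; illustrative only.
    # Returns the accumulated alignment cost and the optimal temporal mapping
    # path as a list of (template_index, observation_index) pairs.
    n, m = len(template), len(observation)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(template[i - 1] - observation[j - 1])
            D[i, j] = d + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # backtrack from (n, m) to recover the warping path
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

The paper additionally regularises the temporal and spatial warps so that the smallest amount of warping achieving a good fit is preferred; that constraint is not reproduced in this sketch.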
Ilg \etal \cite{Ilg2004} employed DTW in a multi-tier fashion. The observation signal is dimensionally reduced by removing all data points that are not at a velocity zero, as velocity zeros denote turning points in the motion, and thus can be considered as key features of the motion. DTW is performed on this reduced dataset to align these key features. A tolerance is included to allow for missed key features, as the template and the observation may not have the same velocity zeros, to reduce the number of mapping singularities. Each window in these high-level segments is resampled to have the same number of data points. A finer alignment is performed in each of these windows. Assuming that the observed motion can be warped to the template with the proper temporal and spatial warping, the algorithm uses DTW to calculate an optimal temporal mapping path between the template and the observation, and applies some interpolated shifting around the suggested mapping path to minimize the temporal difference between the two signals. Once the optimal time warping is found, the spatial distance offset can be calculated. The temporal and spatial warping variables are also constrained to minimize the amount of warping required to obtain the best fit \cite{Giese1999}. The algorithm was implemented as part of a motion generation algorithm, and not specifically for segmentation, so segmentation and identification accuracy were not reported. Similar to previous DTW algorithms, this algorithm does not address the computational costs of using DP, and thus may not scale well to higher dimensions.},
timestamp = {2011.01.17},
}