In *Proceedings of the SIAM International Conference on Data Mining*, pages 1--11, 2001.



@InProceedings{Keogh2001b,
  Title     = {Derivative Dynamic Time Warping},
  Author    = {Keogh, E. J. and Pazzani, M. J.},
  Booktitle = {Proceedings of the SIAM International Conference on Data Mining},
  Year      = {2001},
  Pages     = {1--11},
  Abstract  = {In this paper we address both these problems by introducing a modification of DTW. The crucial difference is in the features we consider when attempting to find the correct warping. Rather than use the raw data, we consider only the (estimated) local derivatives of the data.},
  Keywords  = {dynamic time warping},
  Review    = {Keogh2001b. Base algorithm: DTW. Templated? Yes. Details: Wants to address singularities in warping. DTW considers only y-axis values and not temporal or larger-scale structure. Instead of using the Euclidean distance as the cost function, the cost is now the square of the difference of the derivatives of the two functions, which incorporates time information. Verification: Used some... pretty random sensor data sets; the authors also inserted random "bumps" into the data to see whether they match up, and generated their own metric. Error reported: Reported better results than DTW.

Keogh and Pazzani \cite{Keogh2001b} noted that classic DTW considers only the position data and does not account for higher-level features. They proposed Derivative DTW (DDTW), which examines the derivatives of the data streams by using the square of the difference of the derivatives of the two signals instead of the Euclidean distance. This allows the algorithm to negate global effects such as signal bias, and allows more of the latent features to be emphasized. Using only the Euclidean distance, DTW will map two points of identical value together even if one point is part of a falling trend and the other is part of a rising trend; using the derivative allows the larger pattern to be captured.

The algorithm was verified on a set of space shuttle sensor readings, the exchange rates between the German Deutschmark and five other European currencies over a span of six months, and a set of electroencephalograph (EEG) measurements. All data sets show similar trends but are non-identical. Comparison between DTW and DDTW shows that DTW tends to be overaggressive in its warping, while DDTW provides a more accurate mapping. No classification or segmentation accuracy values were reported. Although DDTW outperforms standard DTW, the two have similar runtimes, meaning DDTW does not scale well to higher dimensions.},
  Timestamp = {2011.04.01}
}
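The review above describes the DDTW cost: the squared difference of estimated derivatives replaces the squared difference of raw values in the DTW recurrence. A minimal sketch in Python, using the derivative estimate given in the paper (average of the left slope and the slope between the two neighbours); function and variable names are illustrative, not from the paper:

```python
def estimate_derivative(x):
    """Keogh-Pazzani derivative estimate: the mean of the slope from the
    left neighbour and the slope between the two neighbours."""
    d = [0.0] * len(x)
    for i in range(1, len(x) - 1):
        d[i] = ((x[i] - x[i - 1]) + (x[i + 1] - x[i - 1]) / 2.0) / 2.0
    d[0], d[-1] = d[1], d[-2]  # pad the endpoints with the nearest estimate
    return d

def ddtw_distance(q, c):
    """Standard DTW dynamic program, but the local cost is the squared
    difference of the estimated derivatives, not of the raw values."""
    dq, dc = estimate_derivative(q), estimate_derivative(c)
    n, m = len(dq), len(dc)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (dq[i - 1] - dc[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because the cost depends only on derivatives, a constant bias added to one signal leaves the distance unchanged, which is the bias-negation property the review mentions.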
