Applying Multivariate Segmentation Methods to Human Activity Recognition From Wearable Sensors' Data. Li, K., Habre, R., Deng, H., Urman, R., Morrison, J., Gilliland, F. D., Ambite, J. L., Stripelis, D., Chiang, Y.-Y., Lin, Y., Bui, A. A. T., King, C., Hosseini, A., Van Vliet, E., Sarrafzadeh, M., & Eckel, S. P. JMIR mHealth and uHealth, 7(2):e11201, February 2019. doi: 10.2196/11201

BACKGROUND: Time-resolved quantification of physical activity can contribute to both personalized medicine and epidemiological research, for example, managing and identifying triggers of asthma exacerbations. A growing number of reportedly accurate machine learning algorithms for human activity recognition (HAR) have been developed using data from wearable devices (eg, smartwatch and smartphone). However, many HAR algorithms depend on fixed-size sampling windows that may adapt poorly to real-world conditions in which activity bouts are of unequal duration. A small sliding window can produce noisy predictions under stable conditions, whereas a large sliding window may miss brief bursts of intense activity.

OBJECTIVE: We aimed to create an HAR framework adapted to variable-duration activity bouts by (1) detecting the change points of activity bouts in a multivariate time series and (2) predicting activity for each homogeneous window defined by these change points.

METHODS: We applied standard fixed-width sliding windows (4-6 different sizes) or greedy Gaussian segmentation (GGS) to identify break points in filtered triaxial accelerometer and gyroscope data. After standard feature engineering, we applied an Xgboost model to predict physical activity within each window and then converted windowed predictions to instantaneous predictions to facilitate comparison across segmentation methods. We applied these methods in 2 datasets: the human activity recognition using smartphones (HARuS) dataset, in which 30 adults performed activities of approximately equal duration (approximately 20 seconds each) while wearing a waist-worn smartphone, and the Biomedical REAl-Time Health Evaluation for Pediatric Asthma (BREATHE) dataset, in which 14 children performed 6 activities for approximately 10 minutes each while wearing a smartwatch. To mimic a real-world scenario, we generated artificial unequal activity bout durations in the BREATHE data by randomly subdividing each activity bout into 10 segments and randomly concatenating the resulting 60 activity bouts. Each dataset was divided into ~90% training and ~10% holdout testing.

RESULTS: In the HARuS data, GGS produced the least noisy predictions of 6 physical activities and had the second-highest accuracy rate, 91.06% (the highest, 91.79%, was achieved by the 0.8-second sliding window). In the BREATHE data, GGS again produced the least noisy predictions and had the highest accuracy rate, 79.4%, for 6 physical activities.

CONCLUSIONS: In a scenario with variable-duration activity bouts, GGS multivariate segmentation produced smart-sized windows with more stable predictions and a higher accuracy rate than traditional fixed-size sliding-window approaches. Overall, accuracy was good in both datasets but, as expected, slightly lower in the more real-world study using wrist-worn smartwatches in children (BREATHE) than in the more tightly controlled study using waist-worn smartphones in adults (HARuS). We implemented GGS in an offline setting, but it could be adapted for real-time prediction with streaming data.
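To make the METHODS concrete, the following is a minimal Python sketch of the pipeline the abstract describes, not the authors' code. The greedy_segment function below is a simplified single-pass greedy Gaussian split search (the paper uses the GGS algorithm of Hallac et al., which additionally regularizes the covariance and iteratively adjusts breakpoints), window_features is a stand-in for the paper's feature engineering, and all names and parameter values here (min_len, n_splits, the XGBoost settings) are illustrative assumptions.

# Minimal sketch, assuming a (T x 6) NumPy array of filtered triaxial
# accelerometer + gyroscope samples and integer activity labels (0..K-1).
import numpy as np
from xgboost import XGBClassifier  # assumes the xgboost package is installed

def gaussian_loglik(x):
    """Maximized Gaussian log-likelihood of one segment (T x d array)."""
    t, d = x.shape
    cov = np.cov(x, rowvar=False) + 1e-4 * np.eye(d)  # small ridge for stability
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * t * (logdet + d * np.log(2.0 * np.pi) + d)

def best_split(x, min_len=25):
    """Split index within x that most increases the total log-likelihood."""
    base = gaussian_loglik(x)
    return max((gaussian_loglik(x[:s]) + gaussian_loglik(x[s:]) - base, s)
               for s in range(min_len, len(x) - min_len + 1))

def greedy_segment(x, n_splits, min_len=25):
    """Greedily add breakpoints, always splitting the most improvable segment.
    Simplified illustration of greedy Gaussian segmentation; the reference
    GGS algorithm also refines breakpoint positions between additions."""
    bounds = [0, len(x)]
    for _ in range(n_splits):
        cands = [(g, a + s) for a, b in zip(bounds[:-1], bounds[1:])
                 if b - a >= 2 * min_len
                 for g, s in [best_split(x[a:b], min_len)]]
        if not cands or max(cands)[0] <= 0:
            break
        bounds = sorted(bounds + [max(cands)[1]])
    return bounds

def window_features(x):
    """Per-axis summary statistics; stand-in for the paper's engineered
    time- and frequency-domain features."""
    return np.concatenate([x.mean(0), x.std(0), x.min(0), x.max(0)])

def predict_instantaneous(clf, signal, bounds):
    """Windowed -> instantaneous: repeat each window's predicted label
    over every sample in that window, for comparison across methods."""
    pred = np.empty(len(signal), dtype=int)
    for a, b in zip(bounds[:-1], bounds[1:]):
        pred[a:b] = clf.predict(window_features(signal[a:b])[None, :])[0]
    return pred

# Usage sketch (train_windows, train_labels, test_signal are assumed inputs):
#   clf = XGBClassifier(n_estimators=200, max_depth=4)
#   clf.fit(np.vstack([window_features(w) for w in train_windows]), train_labels)
#   bounds = greedy_segment(test_signal, n_splits=10)
#   y_hat = predict_instantaneous(clf, test_signal, bounds)

The same predict_instantaneous step applies to the fixed-width sliding-window baselines, whose bounds are simply evenly spaced, which is what puts the two segmentation strategies on a common per-sample footing for the accuracy comparison.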
@article{li_applying_2019,
title = {Applying {Multivariate} {Segmentation} {Methods} to {Human} {Activity} {Recognition} {From} {Wearable} {Sensors}' {Data}.},
volume = {7},
	copyright = {(c) Kenan Li, Rima Habre, Huiyu Deng, Robert Urman, John Morrison, Frank D Gilliland, Jose Luis Ambite, Dimitris Stripelis, Yao-Yi Chiang, Yijun Lin, Alex AT Bui, Christine King, Anahita Hosseini, Eleanne Van Vliet, Majid Sarrafzadeh, Sandrah P Eckel. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 07.02.2019.},
	issn = {2291-5222},
url = {https://mhealth.jmir.org/2019/2/e11201/},
doi = {10.2196/11201},
abstract = {BACKGROUND: Time-resolved quantification of physical activity can contribute to both personalized medicine and epidemiological research studies, for example, managing and identifying triggers of asthma exacerbations. A growing number of reportedly accurate machine learning algorithms for human activity recognition (HAR) have been developed using data from wearable devices (eg, smartwatch and smartphone). However, many HAR algorithms depend on fixed-size sampling windows that may poorly adapt to real-world conditions in which activity bouts are of unequal duration. A small sliding window can produce noisy predictions under stable conditions, whereas a large sliding window may miss brief bursts of intense activity. OBJECTIVE: We aimed to create an HAR framework adapted to variable duration activity bouts by (1) detecting the change points of activity bouts in a multivariate time series and (2) predicting activity for each homogeneous window defined by these change points. METHODS: We applied standard fixed-width sliding windows (4-6 different sizes) or greedy Gaussian segmentation (GGS) to identify break points in filtered triaxial accelerometer and gyroscope data. After standard feature engineering, we applied an Xgboost model to predict physical activity within each window and then converted windowed predictions to instantaneous predictions to facilitate comparison across segmentation methods. We applied these methods in 2 datasets: the human activity recognition using smartphones (HARuS) dataset where a total of 30 adults performed activities of approximately equal duration (approximately 20 seconds each) while wearing a waist-worn smartphone, and the Biomedical REAl-Time Health Evaluation for Pediatric Asthma (BREATHE) dataset where a total of 14 children performed 6 activities for approximately 10 min each while wearing a smartwatch. To mimic a real-world scenario, we generated artificial unequal activity bout durations in the BREATHE data by randomly subdividing each activity bout into 10 segments and randomly concatenating the 60 activity bouts. Each dataset was divided into {\textasciitilde}90\% training and {\textasciitilde}10\% holdout testing. RESULTS: In the HARuS data, GGS produced the least noisy predictions of 6 physical activities and had the second highest accuracy rate of 91.06\% (the highest accuracy rate was 91.79\% for the sliding window of size 0.8 second). In the BREATHE data, GGS again produced the least noisy predictions and had the highest accuracy rate of 79.4\% of predictions for 6 physical activities. CONCLUSIONS: In a scenario with variable duration activity bouts, GGS multivariate segmentation produced smart-sized windows with more stable predictions and a higher accuracy rate than traditional fixed-size sliding window approaches. Overall, accuracy was good in both datasets but, as expected, it was slightly lower in the more real-world study using wrist-worn smartwatches in children (BREATHE) than in the more tightly controlled study using waist-worn smartphones in adults (HARuS). We implemented GGS in an offline setting, but it could be adapted for real-time prediction with streaming data.},
language = {eng},
number = {2},
journal = {JMIR mHealth and uHealth},
	author = {Li, Kenan and Habre, Rima and Deng, Huiyu and Urman, Robert and Morrison, John and Gilliland, Frank D. and Ambite, Jose Luis and Stripelis, Dimitris and Chiang, Yao-Yi and Lin, Yijun and Bui, Alex A. T. and King, Christine and Hosseini, Anahita and Van Vliet, Eleanne and Sarrafzadeh, Majid and Eckel, Sandrah P.},
month = feb,
year = {2019},
pmid = {30730297},
pmcid = {PMC6386646},
	keywords = {machine learning, physical activity, smartphone, statistical data analysis, wearable devices},
pages = {e11201},
}
{"_id":"LgwqzGJ7EddAnQn9q","bibbaseid":"li-habre-deng-urman-morrison-gilliland-ambite-stripelis-etal-applyingmultivariatesegmentationmethodstohumanactivityrecognitionfromwearablesensorsdata-2019","author_short":["Li, K.","Habre, R.","Deng, H.","Urman, R.","Morrison, J.","Gilliland, F. D.","Ambite, J. L.","Stripelis, D.","Chiang, Y.","Lin, Y.","Bui, A. A.","King, C.","Hosseini, A.","Vliet, E. V.","Sarrafzadeh, M.","Eckel, S. P."],"bibdata":{"bibtype":"article","type":"article","title":"Applying Multivariate Segmentation Methods to Human Activity Recognition From Wearable Sensors' Data.","volume":"7","copyright":"(c)Kenan Li, Rima Habre, Huiyu Deng, Robert Urman, John Morrison, Frank D Gilliland, Jose Luis Ambite, Dimitris Stripelis, Yao-Yi Chiang, Yijun Lin, Alex AT Bui, Christine King, Anahita Hosseini, Eleanne Van Vliet, Majid Sarrafzadeh, Sandrah P Eckel. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 07.02.2019.","issn":"2291-5222 2291-5222","url":"https://mhealth.jmir.org/2019/2/e11201/","doi":"10.2196/11201","abstract":"BACKGROUND: Time-resolved quantification of physical activity can contribute to both personalized medicine and epidemiological research studies, for example, managing and identifying triggers of asthma exacerbations. A growing number of reportedly accurate machine learning algorithms for human activity recognition (HAR) have been developed using data from wearable devices (eg, smartwatch and smartphone). However, many HAR algorithms depend on fixed-size sampling windows that may poorly adapt to real-world conditions in which activity bouts are of unequal duration. A small sliding window can produce noisy predictions under stable conditions, whereas a large sliding window may miss brief bursts of intense activity. OBJECTIVE: We aimed to create an HAR framework adapted to variable duration activity bouts by (1) detecting the change points of activity bouts in a multivariate time series and (2) predicting activity for each homogeneous window defined by these change points. METHODS: We applied standard fixed-width sliding windows (4-6 different sizes) or greedy Gaussian segmentation (GGS) to identify break points in filtered triaxial accelerometer and gyroscope data. After standard feature engineering, we applied an Xgboost model to predict physical activity within each window and then converted windowed predictions to instantaneous predictions to facilitate comparison across segmentation methods. We applied these methods in 2 datasets: the human activity recognition using smartphones (HARuS) dataset where a total of 30 adults performed activities of approximately equal duration (approximately 20 seconds each) while wearing a waist-worn smartphone, and the Biomedical REAl-Time Health Evaluation for Pediatric Asthma (BREATHE) dataset where a total of 14 children performed 6 activities for approximately 10 min each while wearing a smartwatch. To mimic a real-world scenario, we generated artificial unequal activity bout durations in the BREATHE data by randomly subdividing each activity bout into 10 segments and randomly concatenating the 60 activity bouts. Each dataset was divided into ~90% training and ~10% holdout testing. RESULTS: In the HARuS data, GGS produced the least noisy predictions of 6 physical activities and had the second highest accuracy rate of 91.06% (the highest accuracy rate was 91.79% for the sliding window of size 0.8 second). 
In the BREATHE data, GGS again produced the least noisy predictions and had the highest accuracy rate of 79.4% of predictions for 6 physical activities. CONCLUSIONS: In a scenario with variable duration activity bouts, GGS multivariate segmentation produced smart-sized windows with more stable predictions and a higher accuracy rate than traditional fixed-size sliding window approaches. Overall, accuracy was good in both datasets but, as expected, it was slightly lower in the more real-world study using wrist-worn smartwatches in children (BREATHE) than in the more tightly controlled study using waist-worn smartphones in adults (HARuS). We implemented GGS in an offline setting, but it could be adapted for real-time prediction with streaming data.","language":"eng","number":"2","journal":"JMIR mHealth and uHealth","author":[{"propositions":[],"lastnames":["Li"],"firstnames":["Kenan"],"suffixes":[]},{"propositions":[],"lastnames":["Habre"],"firstnames":["Rima"],"suffixes":[]},{"propositions":[],"lastnames":["Deng"],"firstnames":["Huiyu"],"suffixes":[]},{"propositions":[],"lastnames":["Urman"],"firstnames":["Robert"],"suffixes":[]},{"propositions":[],"lastnames":["Morrison"],"firstnames":["John"],"suffixes":[]},{"propositions":[],"lastnames":["Gilliland"],"firstnames":["Frank","D."],"suffixes":[]},{"propositions":[],"lastnames":["Ambite"],"firstnames":["Jose","Luis"],"suffixes":[]},{"propositions":[],"lastnames":["Stripelis"],"firstnames":["Dimitris"],"suffixes":[]},{"propositions":[],"lastnames":["Chiang"],"firstnames":["Yao-Yi"],"suffixes":[]},{"propositions":[],"lastnames":["Lin"],"firstnames":["Yijun"],"suffixes":[]},{"propositions":[],"lastnames":["Bui"],"firstnames":["Alex","At"],"suffixes":[]},{"propositions":[],"lastnames":["King"],"firstnames":["Christine"],"suffixes":[]},{"propositions":[],"lastnames":["Hosseini"],"firstnames":["Anahita"],"suffixes":[]},{"propositions":[],"lastnames":["Vliet"],"firstnames":["Eleanne","Van"],"suffixes":[]},{"propositions":[],"lastnames":["Sarrafzadeh"],"firstnames":["Majid"],"suffixes":[]},{"propositions":[],"lastnames":["Eckel"],"firstnames":["Sandrah","P."],"suffixes":[]}],"month":"February","year":"2019","pmid":"30730297","pmcid":"PMC6386646","keywords":"machine learning, physical activity, smartphone, statistical data analysis wearable devices","pages":"e11201","bibtex":"@article{li_applying_2019,\n\ttitle = {Applying {Multivariate} {Segmentation} {Methods} to {Human} {Activity} {Recognition} {From} {Wearable} {Sensors}' {Data}.},\n\tvolume = {7},\n\tcopyright = {(c)Kenan Li, Rima Habre, Huiyu Deng, Robert Urman, John Morrison, Frank D Gilliland, Jose Luis Ambite, Dimitris Stripelis, Yao-Yi Chiang, Yijun Lin, Alex AT Bui, Christine King, Anahita Hosseini, Eleanne Van Vliet, Majid Sarrafzadeh, Sandrah P Eckel. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 07.02.2019.},\n\tissn = {2291-5222 2291-5222},\n\turl = {https://mhealth.jmir.org/2019/2/e11201/},\n\tdoi = {10.2196/11201},\n\tabstract = {BACKGROUND: Time-resolved quantification of physical activity can contribute to both personalized medicine and epidemiological research studies, for example, managing and identifying triggers of asthma exacerbations. A growing number of reportedly accurate machine learning algorithms for human activity recognition (HAR) have been developed using data from wearable devices (eg, smartwatch and smartphone). 
However, many HAR algorithms depend on fixed-size sampling windows that may poorly adapt to real-world conditions in which activity bouts are of unequal duration. A small sliding window can produce noisy predictions under stable conditions, whereas a large sliding window may miss brief bursts of intense activity. OBJECTIVE: We aimed to create an HAR framework adapted to variable duration activity bouts by (1) detecting the change points of activity bouts in a multivariate time series and (2) predicting activity for each homogeneous window defined by these change points. METHODS: We applied standard fixed-width sliding windows (4-6 different sizes) or greedy Gaussian segmentation (GGS) to identify break points in filtered triaxial accelerometer and gyroscope data. After standard feature engineering, we applied an Xgboost model to predict physical activity within each window and then converted windowed predictions to instantaneous predictions to facilitate comparison across segmentation methods. We applied these methods in 2 datasets: the human activity recognition using smartphones (HARuS) dataset where a total of 30 adults performed activities of approximately equal duration (approximately 20 seconds each) while wearing a waist-worn smartphone, and the Biomedical REAl-Time Health Evaluation for Pediatric Asthma (BREATHE) dataset where a total of 14 children performed 6 activities for approximately 10 min each while wearing a smartwatch. To mimic a real-world scenario, we generated artificial unequal activity bout durations in the BREATHE data by randomly subdividing each activity bout into 10 segments and randomly concatenating the 60 activity bouts. Each dataset was divided into {\\textasciitilde}90\\% training and {\\textasciitilde}10\\% holdout testing. RESULTS: In the HARuS data, GGS produced the least noisy predictions of 6 physical activities and had the second highest accuracy rate of 91.06\\% (the highest accuracy rate was 91.79\\% for the sliding window of size 0.8 second). In the BREATHE data, GGS again produced the least noisy predictions and had the highest accuracy rate of 79.4\\% of predictions for 6 physical activities. CONCLUSIONS: In a scenario with variable duration activity bouts, GGS multivariate segmentation produced smart-sized windows with more stable predictions and a higher accuracy rate than traditional fixed-size sliding window approaches. Overall, accuracy was good in both datasets but, as expected, it was slightly lower in the more real-world study using wrist-worn smartwatches in children (BREATHE) than in the more tightly controlled study using waist-worn smartphones in adults (HARuS). We implemented GGS in an offline setting, but it could be adapted for real-time prediction with streaming data.},\n\tlanguage = {eng},\n\tnumber = {2},\n\tjournal = {JMIR mHealth and uHealth},\n\tauthor = {Li, Kenan and Habre, Rima and Deng, Huiyu and Urman, Robert and Morrison, John and Gilliland, Frank D. and Ambite, Jose Luis and Stripelis, Dimitris and Chiang, Yao-Yi and Lin, Yijun and Bui, Alex At and King, Christine and Hosseini, Anahita and Vliet, Eleanne Van and Sarrafzadeh, Majid and Eckel, Sandrah P.},\n\tmonth = feb,\n\tyear = {2019},\n\tpmid = {30730297},\n\tpmcid = {PMC6386646},\n\tkeywords = {machine learning, physical activity, smartphone, statistical data analysis wearable devices},\n\tpages = {e11201},\n}\n\n","author_short":["Li, K.","Habre, R.","Deng, H.","Urman, R.","Morrison, J.","Gilliland, F. D.","Ambite, J. 
L.","Stripelis, D.","Chiang, Y.","Lin, Y.","Bui, A. A.","King, C.","Hosseini, A.","Vliet, E. V.","Sarrafzadeh, M.","Eckel, S. P."],"key":"li_applying_2019","id":"li_applying_2019","bibbaseid":"li-habre-deng-urman-morrison-gilliland-ambite-stripelis-etal-applyingmultivariatesegmentationmethodstohumanactivityrecognitionfromwearablesensorsdata-2019","role":"author","urls":{"Paper":"https://mhealth.jmir.org/2019/2/e11201/"},"keyword":["machine learning","physical activity","smartphone","statistical data analysis wearable devices"],"metadata":{"authorlinks":{}},"downloads":2},"bibtype":"article","biburl":"https://api.zotero.org/users/3649949/collections/52M3HD2M/items?key=kvw05jEWpV9zO4gNkD1KQFRV&format=bibtex&limit=100","dataSources":["HAFwXuLZf7sqJvp2S","u2FapfsC5Fb8utfsp","3sPtWLmmdPRfH69LS","8u7qceCaxL8Gt5PoF","Lsm8pmGSv2KvYKbGa","qkN2F4hKQojRGQeTy","cR2bQCnuvgoCQwTEh","Zv8utRXNjhXZcJdZX","zguJ5LkMpKLhRDgWX","M3cm7WzF5gdELQkoy","x94sDkjv6sHRisXm3"],"keywords":["machine learning","physical activity","smartphone","statistical data analysis wearable devices"],"search_terms":["applying","multivariate","segmentation","methods","human","activity","recognition","wearable","sensors","data","li","habre","deng","urman","morrison","gilliland","ambite","stripelis","chiang","lin","bui","king","hosseini","vliet","sarrafzadeh","eckel"],"title":"Applying Multivariate Segmentation Methods to Human Activity Recognition From Wearable Sensors' Data.","year":2019,"downloads":2}