Automated Estimation of Food Type and Amount Consumed from Body-worn Audio and Motion Sensors. Mirtchouk, M., Merck, C., & Kleinberg, S. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), 2016.
Determining when an individual is eating can be useful for tracking behavior and identifying patterns, but to create nutrition logs automatically or provide real-time feedback to people with chronic disease, we need to identify both what they are consuming and in what quantity. However, food type and amount have mainly been estimated using image data (requiring user involvement) or acoustic sensors (tested with a restricted set of foods rather than representative meals). As a result, there is not yet a highly accurate automated nutrition monitoring method that can be used with a variety of foods. We propose that multi-modal sensing (in-ear audio plus head and wrist motion) can be used to more accurately classify food type, as audio and motion features provide complementary information. Further, we propose that knowing food type is critical for estimating amount consumed in combination with sensor data. To test this we use data from people wearing audio and motion sensors, with ground truth annotated from video and continuous scale data. With data from 40 unique foods we achieve a classification accuracy of 82.7% with a combination of sensors (versus 67.8% for audio alone and 76.2% for head and wrist motion). Weight estimation error was reduced from a baseline of 127.3% to 35.4% absolute relative error. Ultimately, our estimates of food type and amount can be linked to food databases to provide automated calorie estimates from continuously-collected data.
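The abstract's central claim is that audio and motion features are complementary: combining them (82.7%) beats either audio alone (67.8%) or motion alone (76.2%). The toy sketch below is NOT the paper's pipeline; it uses a simple nearest-centroid classifier on invented (audio, motion) feature pairs to illustrate why fusing modalities can separate foods that a single modality cannot.

```python
# Illustrative sketch only (synthetic feature values, not the paper's method):
# two foods with near-identical audio signatures become separable once a
# motion feature is added, mirroring the paper's multi-modal argument.
import math

# Hypothetical per-bite features: (audio_energy, jaw_motion).
TRAIN = {
    "chips":  [(0.9, 0.8), (0.8, 0.9)],
    "carrot": [(0.8, 0.2), (0.9, 0.3)],  # sounds like chips, chewed differently
    "yogurt": [(0.1, 0.3), (0.2, 0.2)],
}

def centroid(rows):
    """Mean feature vector of a list of equal-length tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(bite, dims):
    """Nearest-centroid label using only the feature indices in `dims`."""
    sub = lambda v: [v[i] for i in dims]
    cents = {food: centroid([sub(r) for r in rows])
             for food, rows in TRAIN.items()}
    return min(cents, key=lambda food: math.dist(sub(bite), cents[food]))

bite = (0.84, 0.22)                  # an unseen "carrot" bite
print(classify(bite, dims=(0,)))     # audio only: tied with chips -> "chips"
print(classify(bite, dims=(0, 1)))   # audio + motion -> "carrot"
```

With the audio feature alone, the chips and carrot class centroids coincide, so the bite is misassigned; adding the motion dimension resolves the ambiguity, which is the intuition behind the paper's fusion of in-ear audio with head and wrist IMUs.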
@inproceedings{mirtchouk:foodtype,
 title = {Automated Estimation of Food Type and Amount Consumed from Body-worn Audio and Motion Sensors},
 author = {Mirtchouk, Mark and Merck, Christopher and Kleinberg, Samantha},
 booktitle = {Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)},
 year = {2016},
 doi = {10.1145/2971648.2971677},
 url = {http://dx.doi.org/10.1145/2971648.2971677},
 keywords = {audio, auracle, food-type-amount, imu, in-ear-mic, intake-detection, motion, wrist},
 abstract = {Determining when an individual is eating can be useful for tracking behavior and identifying patterns, but to create nutrition logs automatically or provide real-time feedback to people with chronic disease, we need to identify both what they are consuming and in what quantity. However, food type and amount have mainly been estimated using image data (requiring user involvement) or acoustic sensors (tested with a restricted set of foods rather than representative meals). As a result, there is not yet a highly accurate automated nutrition monitoring method that can be used with a variety of foods. We propose that multi-modal sensing (in-ear audio plus head and wrist motion) can be used to more accurately classify food type, as audio and motion features provide complementary information. Further, we propose that knowing food type is critical for estimating amount consumed in combination with sensor data. To test this we use data from people wearing audio and motion sensors, with ground truth annotated from video and continuous scale data. With data from 40 unique foods we achieve a classification accuracy of 82.7% with a combination of sensors (versus 67.8% for audio alone and 76.2% for head and wrist motion). Weight estimation error was reduced from a baseline of 127.3% to 35.4% absolute relative error. Ultimately, our estimates of food type and amount can be linked to food databases to provide automated calorie estimates from continuously-collected data.}
}