Multimodality Sensing for Eating Recognition. Merck, C., Maher, C., Mirtchouk, M., Zheng, M., Huang, Y., & Kleinberg, S. In Proceedings of the EAI International Conference on Pervasive Computing Technologies for Healthcare, 2016. ACM Press.
While many sensors can monitor physical activity, there is no device that can unobtrusively measure eating at the same level of detail. Yet tracking and reacting to food consumption is key to managing many chronic diseases such as obesity and diabetes. Eating recognition has primarily used a single sensor at a time in a constrained environment, but sensors may fail and each may pick up different types of eating. We present a multimodality study of eating recognition that combines head and wrist motion (Google Glass, smartwatches on each wrist) with audio (a custom earbud microphone). We collect 72 hours of data from 6 participants wearing all sensors while eating an unrestricted set of foods, and annotate video recordings to obtain ground truth. Using our noise-cancellation method, audio sensing alone achieved 92% precision and 89% recall in finding meals, while motion sensing was needed to find individual intakes.
