DeepEar: Robust Smartphone Audio Sensing in Unconstrained Acoustic Environments Using Deep Learning. Lane, N. D., Georgiev, P., & Qendro, L. In Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), pages 283-294, September 2015. ACM.
Microphones are remarkably powerful sensors of human behavior and context. However, audio sensing is highly susceptible to wild fluctuations in accuracy when used in the diverse acoustic environments (such as bedrooms, vehicles, or cafes) that users encounter on a daily basis. To address this challenge, we turn to the field of deep learning, an area of machine learning that has radically changed related audio modeling domains like speech recognition. In this paper, we present DeepEar -- the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks. We train DeepEar with a large-scale dataset including unlabeled data from 168 place visits. The resulting learned model, involving 2.3M parameters, enables DeepEar to significantly increase inference robustness to background noise beyond conventional approaches present in mobile devices. Finally, we show DeepEar is feasible for smartphones by building a cloud-free DSP-based prototype that runs continuously, using only 6% of the smartphone's battery daily.
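The abstract's core idea, coupled DNNs that share representation layers while serving several audio sensing tasks in a single forward pass, can be sketched briefly. The Python snippet below is an illustration only, not the paper's model: the PyTorch framing, layer widths, 390-dimensional input features, and task names are assumptions standing in for the "common audio sensing tasks" the abstract mentions.

import torch
import torch.nn as nn

# Illustrative only: layer widths, the 390-d feature vector, and the task
# list below are assumptions, not DeepEar's reported configuration.
AUDIO_TASKS = {"ambient_scene": 8, "speaker_id": 10, "emotion": 4}

class CoupledAudioDNN(nn.Module):
    """Shared trunk plus one classification head per audio sensing task."""

    def __init__(self, n_features=390, hidden=512):
        super().__init__()
        # Shared layers: every task reuses this learned representation.
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One lightweight head per task, trained jointly with the trunk.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden, n_classes)
            for task, n_classes in AUDIO_TASKS.items()
        })

    def forward(self, x):
        shared = self.trunk(x)
        # A single forward pass yields logits for all tasks at once.
        return {task: head(shared) for task, head in self.heads.items()}

# Usage: a batch of 32 frame-level audio feature vectors produces one set
# of per-task logits per frame.
model = CoupledAudioDNN()
outputs = model(torch.randn(32, 390))
print({task: tuple(out.shape) for task, out in outputs.items()})

Sharing the trunk lets several inferences amortize one set of computations, which is consistent with the abstract's emphasis on running continuously within a small energy budget.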
@inProceedings{lane:deepear,
 title = {DeepEar: Robust Smartphone Audio Sensing in Unconstrained Acoustic Environments Using Deep Learning},
 year = {2015},
 doi = {10.1145/2750858.2804262},
 keywords = {audio,context,deep-learning,machine-learning,sensing,sensors},
 pages = {283--294},
 url = {http://dx.doi.org/10.1145/2750858.2804262},
 month = sep,
 publisher = {ACM},
 abstract = {Microphones are remarkably powerful sensors of human behavior and context. However, audio sensing is highly susceptible to wild fluctuations in accuracy when used in the diverse acoustic environments (such as bedrooms, vehicles, or cafes) that users encounter on a daily basis. To address this challenge, we turn to the field of deep learning, an area of machine learning that has radically changed related audio modeling domains like speech recognition. In this paper, we present DeepEar -- the first mobile audio sensing framework built from coupled Deep Neural Networks (DNNs) that simultaneously perform common audio sensing tasks. We train DeepEar with a large-scale dataset including unlabeled data from 168 place visits. The resulting learned model, involving 2.3M parameters, enables DeepEar to significantly increase inference robustness to background noise beyond conventional approaches present in mobile devices. Finally, we show DeepEar is feasible for smartphones by building a cloud-free DSP-based prototype that runs continuously, using only 6% of the smartphone's battery daily.},
 author = {Lane, Nicholas D. and Georgiev, Petko and Qendro, Lorena},
 booktitle = {Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)}
}
