Automatic state discovery for unstructured audio scene classification. Ramos, J., Siddiqi, S., Dubrawski, A., Gordon, G., & Sharma, A. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2154–2157, 2010. IEEE.
@inProceedings{Ramos2010,
 title = {Automatic state discovery for unstructured audio scene classification},
 year = {2010},
 keywords = {Audio classification,Hidden Markov Models,Topology learning},
 pages = {2154--2157},
 url = {http://ieeexplore.ieee.org/document/5495605/},
 publisher = {IEEE},
 citation_key = {Ramos2010},
 abstract = {In this paper we present a novel scheme for unstructured audio scene classification that possesses three highly desirable and powerful features: autonomy, scalability, and robustness. Our scheme is based on our recently introduced machine learning algorithm called Simultaneous Temporal And Contextual Splitting (STACS) that discovers the appropriate number of states and efficiently learns accurate Hidden Markov Model (HMM) parameters for the given data. STACS-based algorithms train HMMs up to five times faster than Baum-Welch, avoid the overfitting problem commonly encountered in learning large state-space HMMs using Expectation Maximization (EM) methods such as Baum-Welch, and achieve superior classification results on a very diverse dataset with minimal pre-processing. Furthermore, our scheme has proven to be highly effective for building real-world applications and has been integrated into a commercial surveillance system as an event detection component.},
 author = {Ramos, Julian and Siddiqi, Sajid and Dubrawski, Artur and Gordon, Geoffrey and Sharma, Abhishek},
 booktitle = {Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}
}
