Combination of sequential class distributions from multiple channels using Markov fusion networks. Glodek, M., Schels, M., Schwenker, F., & Palm, G. Journal on Multimodal User Interfaces, March, 2014.
The recognition of patterns in real-time scenarios has become an important trend in the field of multi-modal user interfaces in human-computer interaction. Cognitive technical systems aim to improve human-computer interaction by recognizing the situative context, e.g. by activity recognition (Ahad et al. in IEEE, 1896–1901, 2008), or by estimating the affective state (Zeng et al., IEEE Trans Pattern Anal Mach Intell 31(1):39–58, 2009) of the human dialogue partner. Classifier systems developed for such applications must operate on multiple modalities and must integrate the available decisions over large time periods. We address this topic by introducing the Markov fusion network (MFN), a novel classifier combination approach for the integration of multi-class and multi-modal decisions continuously over time. The MFN combines results while meeting real-time requirements, weighting the decisions of the modalities dynamically, and dealing with sensor failures. The proposed MFN has been evaluated in two empirical studies, the recognition of objects involved in human activities and the recognition of emotions, where we successfully demonstrate its outstanding performance. Furthermore, we show how the MFN can be applied in a variety of architectures and describe several options for configuring the model to meet the demands of a given problem.
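For intuition only, the sketch below illustrates the general idea of fusing sequential class distributions from multiple channels: per-channel reliability weights, tolerance for missing decisions (sensor failures), and temporal smoothing between neighbouring time steps. It is an assumed, generic formulation, not the MFN as defined in the paper; the names fuse_sequences, channel_probs, channel_weights, smooth, and iters are illustrative and do not come from the publication.

```python
# Illustrative sketch only: generic temporal fusion of per-channel class
# distributions. This is an assumed formulation, NOT the MFN from the paper.
import numpy as np

def fuse_sequences(channel_probs, channel_weights, smooth=0.5, iters=50):
    """channel_probs: list of (T, C) arrays, with NaN rows where a channel
    produced no decision; channel_weights: per-channel reliability weights."""
    T, C = channel_probs[0].shape
    fused = np.full((T, C), 1.0 / C)          # start from the uniform distribution
    for _ in range(iters):
        new = np.zeros_like(fused)
        norm = np.zeros(T)
        # data term: pull towards the available, reliability-weighted channel decisions
        for probs, w in zip(channel_probs, channel_weights):
            avail = ~np.isnan(probs).any(axis=1)
            new[avail] += w * probs[avail]
            norm[avail] += w
        # smoothness term: pull towards the temporal neighbours of each time step
        neighbours = np.zeros_like(fused)
        count = np.zeros(T)
        neighbours[1:] += fused[:-1]; count[1:] += 1
        neighbours[:-1] += fused[1:]; count[:-1] += 1
        new += smooth * neighbours
        norm += smooth * count
        fused = new / norm[:, None]
        fused /= fused.sum(axis=1, keepdims=True)  # renormalise to distributions
    return fused
```

Under these assumptions, time steps where a channel delivers no decision are filled in by the remaining channels and by temporal smoothing, which mirrors the properties highlighted in the abstract (dynamic weighting and robustness to sensor failures).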
@article{glodek_combination_2014,
	title = {Combination of sequential class distributions from multiple channels using {Markov} fusion networks},
	issn = {1783-7677, 1783-8738},
	url = {http://link.springer.com/article/10.1007/s12193-014-0149-0},
	doi = {10.1007/s12193-014-0149-0},
	abstract = {The recognition of patterns in real-time scenarios has become an important trend in the field of multi-modal user interfaces in human computer interaction. Cognitive technical systems aim to improve the human computer interaction by means of recognizing the situative context, e.g. by activity recognition (Ahad et al. in IEEE, 1896–1901, 2008), or by estimating the affective state (Zeng et al., IEEE Trans Pattern Anal Mach Intell 31(1):39–58, 2009) of the human dialogue partner. Classifier systems developed for such applications must operate on multiple modalities and must integrate the available decisions over large time periods. We address this topic by introducing the Markov fusion network (MFN) which is a novel classifier combination approach, for the integration of multi-class and multi-modal decisions continuously over time. The MFN combines results while meeting real-time requirements, weighting decisions of the modalities dynamically, and dealing with sensor failures. The proposed MFN has been evaluated in two empirical studies: the recognition of objects involved in human activities, and the recognition of emotions where we successfully demonstrate its outstanding performance. Furthermore, we show how the MFN can be applied in a variety of different architectures and the several options to configure the model in order to meet the demands of a distinct problem.},
	language = {en},
	urldate = {2014-05-19},
	journal = {Journal on Multimodal User Interfaces},
	author = {Glodek, Michael and Schels, Martin and Schwenker, Friedhelm and Palm, Günther},
	month = mar,
	year = {2014},
	pages = {1--16}
}
