Multimodal Data Analysis and Visualization to Study the Usage of Electronic Health Records. Weibel, N., Ashfaq, S., Calvitti, A., Hollan, J. D., & Agha, Z. In Proceedings of the 7th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth '13), pages 282–283, Brussels, Belgium, 2013. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering). Poster
Understanding interaction with Electronic Health Records (EHRs) often means understanding the multimodal nature of the physician-patient interaction and the interaction with other materials (e.g., paper charts), in addition to analyzing the tasks the physician performs on the computerized system. Recent approaches have started to analyze and quantify speech, gaze, body movements, etc., and represent a very promising way to complement classic software usability methods. However, multimodal activity is hard to characterize, since it often requires manual coding of hours of video data. We present our approach of using automatic tracking of body movement, audio signals, and gaze in the medical office to achieve multimodal analysis of EHR usage.
@inproceedings{weibel_multimodal_2013,
	address = {Brussels, Belgium},
	series = {{PervasiveHealth} '13},
	title = {Multimodal {Data} {Analysis} and {Visualization} to {Study} the {Usage} of {Electronic} {Health} {Records}},
	isbn = {978-1-936968-80-0},
	url = {http://dx.doi.org/10.4108/icst.pervasivehealth.2013.252025},
	doi = {10.4108/icst.pervasivehealth.2013.252025},
	abstract = {Understanding interaction with Electronic Health Records (EHRs) often means understanding the multimodal nature of the physician-patient interaction and the interaction with other materials (e.g., paper charts), in addition to analyzing the tasks the physician performs on the computerized system. Recent approaches have started to analyze and quantify speech, gaze, body movements, etc., and represent a very promising way to complement classic software usability methods. However, multimodal activity is hard to characterize, since it often requires manual coding of hours of video data. We present our approach of using automatic tracking of body movement, audio signals, and gaze in the medical office to achieve multimodal analysis of EHR usage.},
	urldate = {2018-12-06},
	booktitle = {Proceedings of the 7th {International} {Conference} on {Pervasive} {Computing} {Technologies} for {Healthcare}},
	publisher = {ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering)},
	author = {Weibel, Nadir and Ashfaq, Shazia and Calvitti, Alan and Hollan, James D. and Agha, Zia},
	year = {2013},
	note = {Poster},
	pages = {282--283},
}