TouchPosing: Multi-modal Interaction with Geospatial Data. Daiber, F., Gehring, S., Löchtefeld, M., & Krüger, A. In Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia (MUM '12), pages 8:1--8:4, New York, NY, USA, 2012. ACM.
Multi-touch interaction offers opportunities to interact with complex data. In particular, the exploration of geographical data, which to this day mostly relies on mouse and keyboard input, could benefit from this interaction paradigm. However, the gestures required to interact with complex systems such as Geographic Information Systems (GIS) grow more difficult with every added function. This paper describes a novel interaction approach that allows non-expert users to easily explore geographic data using a combination of multi-touch gestures and hand postures. The use of the additional input modality -- hand pose -- is intended to avoid more complex multi-touch gestures. Furthermore, the screen of a wearable device serves as an additional output modality that both avoids occlusion and acts as a magic lens.
@inproceedings{daiber_touchposing:_2012,
	address = {New York, NY, USA},
	series = {{MUM} '12},
	title = {{TouchPosing}: {Multi}-modal {Interaction} with {Geospatial} {Data}},
	isbn = {978-1-4503-1815-0},
	shorttitle = {{TouchPosing}},
	url = {http://doi.acm.org/10.1145/2406367.2406377},
	doi = {10.1145/2406367.2406377},
	abstract = {Multi-touch interaction offers opportunities to interact with complex data. Especially the exploration of geographical data, which until today mostly relies on mice and keyboard input, could benefit from this interaction paradigm. However, the gestures that are required to interact with complex systems like Geographic Information Systems (GIS) increase in difficulty with every additional functionality. This paper describes a novel interaction approach that allows non-expert users to easily explore geographic data using a combination of multi-touch gestures and handpostures. The use of the additional input modality -- handpose -- is supposed to avoid more complex multi-touch gestures. Furthermore the screen of a wearable device serves as another output modality that on one hand avoids occlusion and on the other hand serves as a magic lens.},
	urldate = {2014-05-13},
	booktitle = {Proceedings of the 11th {International} {Conference} on {Mobile} and {Ubiquitous} {Multimedia}},
	publisher = {ACM},
	author = {Daiber, Florian and Gehring, Sven and Löchtefeld, Markus and Krüger, Antonio},
	year = {2012},
	pages = {8:1--8:4}
}