MatchPoint: Spontaneous Spatial Coupling of Body Movement for Touchless Pointing.
Clarke, C., & Gellersen, H.
In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17), pages 179–192, New York, NY, USA, 2017. Association for Computing Machinery.
@inproceedings{10.1145/3126594.3126626,
  author    = {Clarke, Christopher and Gellersen, Hans},
  title     = {MatchPoint: Spontaneous Spatial Coupling of Body Movement for Touchless Pointing},
  year      = {2017},
  isbn      = {9781450349819},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3126594.3126626},
  doi       = {10.1145/3126594.3126626},
  booktitle = {Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology},
  pages     = {179--192},
  numpages  = {14},
  keywords  = {vision-based interfaces, computer vision, gesture input, pointing, input techniques, touchless input, user input, bodily interaction, motion-matching},
  location  = {Qu\'{e}bec City, QC, Canada},
  series    = {UIST '17}
}
Abstract: Pointing is a fundamental interaction technique where user movement is translated to spatial input on a display. Conventionally, this is based on a rigid configuration of a display coupled with a pointing device that determines the types of movement that can be sensed, and the specific ways users can affect pointer input. Spontaneous spatial coupling is a novel input technique that instead allows any body movement, or movement of tangible objects, to be appropriated for touchless pointing on an ad hoc basis. Pointer acquisition is facilitated by the display presenting graphical objects in motion, to which users can synchronise to define a temporary spatial coupling with the body part or tangible object they used in the process. The technique can be deployed using minimal hardware, as demonstrated by MatchPoint, a generic computer vision-based implementation of the technique that requires only a webcam. We explore the design space of spontaneous spatial coupling, demonstrate the versatility of the technique with application examples, and evaluate MatchPoint performance using a multi-directional pointing task.
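The webcam-only setup lends itself to a compact implementation. Below is a minimal sketch of the vision side, assuming OpenCV's Lucas-Kanade tracker as the motion source; it is an illustrative reconstruction, not the authors' released code. Every tracked feature point is a candidate for coupling, so any moving body part or held object can become the pointer once its motion history matches the displayed target.

# Sketch: track sparse candidate points from a webcam and keep a short
# motion history per point. Assumes OpenCV; names and parameters are
# illustrative, not taken from the paper.
from collections import deque

import cv2

WINDOW = 60  # frames of motion history kept per candidate point

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                 qualityLevel=0.01, minDistance=10)
histories = [deque(maxlen=WINDOW) for _ in range(len(points))]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    for hist, pt, st in zip(histories, points, status):
        if st:  # point still tracked: extend its motion history
            hist.append(pt.ravel())
    prev_gray = gray
    # Each history can now be correlated against the trajectory of the
    # on-screen object (see the selection sketch under the Motion
    # Correlation entry below) to establish a spontaneous coupling.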
Remote Control by Body Movement in Synchrony with Orbiting Widgets: An Evaluation of TraceMatch.
Clarke, C., Bellino, A., Esteves, A., & Gellersen, H.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 1(3), Article 45. September 2017.
@article{10.1145/3130910,
  author     = {Clarke, Christopher and Bellino, Alessio and Esteves, Augusto and Gellersen, Hans},
  title      = {Remote Control by Body Movement in Synchrony with Orbiting Widgets: An Evaluation of TraceMatch},
  year       = {2017},
  issue_date = {September 2017},
  publisher  = {Association for Computing Machinery},
  address    = {New York, NY, USA},
  volume     = {1},
  number     = {3},
  url        = {https://doi.org/10.1145/3130910},
  doi        = {10.1145/3130910},
  journal    = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
  month      = sep,
  articleno  = {45},
  numpages   = {22},
  keywords   = {Gesture input, Movement correlation, Motion matching, Computer vision, Input techniques, Remote control, User input, Motion correlation, Path mimicry, User evaluation, Vision-based interfaces}
}
Abstract: In this work we consider how users can use body movement for remote control with minimal effort and maximum flexibility. TraceMatch is a novel technique where the interface displays available controls as circular widgets with orbiting targets, and where users can trigger a control by mimicking the displayed motion. The technique uses computer vision to detect circular motion as a uniform type of input, but is highly appropriable as users can produce matching motion with any part of their body. We present three studies that investigate input performance with different parts of the body, user preferences, and spontaneous choice of movements for input in realistic application scenarios. The results show that users can provide effective input with their head, hands, and while holding objects, that multiple controls can be effectively distinguished by the difference in presented phase and direction of movement, and that users choose and switch modes of input seamlessly.
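As a concrete illustration of how controls can be distinguished by phase and direction, here is a small sketch of the orbiting-target geometry. The function name and default values are assumptions for illustration; TraceMatch's actual widget parameters are described in the paper.

# Sketch: position of an orbiting widget target over time. Controls that
# share a period remain distinguishable when they differ in phase and/or
# direction, because their displayed trajectories decorrelate.
import math

def orbit_position(t, centre, radius=40.0, period=2.0, phase=0.0, direction=1):
    """Target position at time t (seconds); direction is +1 for
    counter-clockwise, -1 for clockwise."""
    angle = direction * 2.0 * math.pi * t / period + phase
    return (centre[0] + radius * math.cos(angle),
            centre[1] + radius * math.sin(angle))

# Two hypothetical controls, separable by opposite phase and direction:
lamp_on = orbit_position(0.5, centre=(100, 100), phase=0.0, direction=+1)
lamp_off = orbit_position(0.5, centre=(300, 100), phase=math.pi, direction=-1)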
Motion Correlation: Selecting Objects by Matching Their Movement.
Velloso, E., Carter, M., Newn, J., Esteves, A., Clarke, C., & Gellersen, H.
ACM Trans. Comput.-Hum. Interact., 24(3), Article 22. April 2017.
@article{10.1145/3064937,
  author     = {Velloso, Eduardo and Carter, Marcus and Newn, Joshua and Esteves, Augusto and Clarke, Christopher and Gellersen, Hans},
  title      = {Motion Correlation: Selecting Objects by Matching Their Movement},
  year       = {2017},
  issue_date = {July 2017},
  publisher  = {Association for Computing Machinery},
  address    = {New York, NY, USA},
  volume     = {24},
  number     = {3},
  issn       = {1073-0516},
  url        = {https://doi.org/10.1145/3064937},
  doi        = {10.1145/3064937},
  journal    = {ACM Trans. Comput.-Hum. Interact.},
  month      = apr,
  articleno  = {22},
  numpages   = {35},
  keywords   = {motion tracking, natural user interfaces, gesture interfaces, gaze interaction, Motion correlation, interaction techniques, eye tracking}
}
Abstract: Selection is a canonical task in user interfaces, commonly supported by presenting objects for acquisition by pointing. In this article, we consider motion correlation as an alternative for selection. The principle is to represent available objects by motion in the interface, have users identify a target by mimicking its specific motion, and use the correlation between the system's output and the user's input to determine the selection. The resulting interaction has compelling properties, as users are guided by motion feedback, and only need to copy a presented motion. Motion correlation has been explored in earlier work but has only recently begun to feature in holistic interface designs. We provide a first comprehensive review of the principle, and present an analysis of five previously published works, in which motion correlation underpinned the design of novel gaze and gesture interfaces for diverse application contexts. We derive guidelines for motion correlation algorithms, motion feedback, choice of modalities, and the overall design of motion correlation interfaces, and identify opportunities and challenges for future research and design.
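The core selection rule is easy to state in code. The sketch below uses Pearson correlation over a sliding window and a fixed threshold; both choices are assumptions for illustration, since the article surveys and derives guidelines for such algorithms rather than prescribing a single one.

# Sketch: select the on-screen object whose displayed motion best matches
# the user's input over a sliding window. Function names and the 0.8
# threshold are illustrative assumptions, not the article's prescription.
import numpy as np

def pearson(a, b):
    """Pearson correlation of two equal-length 1-D series."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def select_target(input_xy, targets_xy, threshold=0.8):
    """input_xy is an (N, 2) array of recent input positions; targets_xy
    maps object ids to (N, 2) arrays of displayed positions over the same
    window. Returns the best-matching id, or None below the threshold."""
    best_id, best_score = None, threshold
    for obj_id, traj in targets_xy.items():
        # Correlate x and y separately; taking the minimum requires the
        # match to hold on both axes.
        score = min(pearson(input_xy[:, 0], traj[:, 0]),
                    pearson(input_xy[:, 1], traj[:, 1]))
        if score > best_score:
            best_id, best_score = obj_id, score
    return best_id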
AURORA: autonomous real-time on-board video analytics.
Angelov, P., Sadeghi Tehran, P., & Clarke, C.
Neural Computing and Applications, 28(5): 855–865. 2017.
@article{angelov2017aurora,
  title   = {AURORA: autonomous real-time on-board video analytics},
  author  = {Angelov, Plamen and Sadeghi Tehran, Pouria and Clarke, Christopher},
  journal = {Neural Computing and Applications},
  volume  = {28},
  number  = {5},
  pages   = {855--865},
  year    = {2017}
}