generated by bibbase.org
To embed this publication list in an existing web page, copy and paste any of the following snippets.

JavaScript (easiest):

<script src="https://bibbase.org/show?bib=https%3A%2F%2Fchristopherclarke.net%2Fpublications.bib&commas=true&jsonp=1"></script>
PHP:

<?php
$contents = file_get_contents("https://bibbase.org/show?bib=https%3A%2F%2Fchristopherclarke.net%2Fpublications.bib&commas=true&jsonp=1");
print_r($contents);
?>
iFrame (not recommended):

<iframe src="https://bibbase.org/show?bib=https%3A%2F%2Fchristopherclarke.net%2Fpublications.bib&commas=true&jsonp=1"></iframe>
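The `bib` query parameter in these snippets is simply the URL-encoded address of the `.bib` file. A minimal sketch of how such an embed URL is constructed (plain JavaScript; the variable names are illustrative, not part of BibBase):

```javascript
// Build the BibBase embed URL from a .bib file address.
// encodeURIComponent escapes ":" and "/" so the address can travel
// safely inside the bib= query parameter.
const bibUrl = "https://christopherclarke.net/publications.bib";
const embedSrc = "https://bibbase.org/show?bib=" +
  encodeURIComponent(bibUrl) + "&commas=true&jsonp=1";
console.log(embedSrc);
// https://bibbase.org/show?bib=https%3A%2F%2Fchristopherclarke.net%2Fpublications.bib&commas=true&jsonp=1
```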
For more details, see the documentation.

2024 (6)
AB1460 DEVELOPMENT STRATEGIES AND THEORETICAL UNDERPINNINGS OF SMARTPHONE APPS TO SUPPORT SELF-MANAGEMENT OF RHEUMATIC AND MUSCULOSKELETAL DISEASES: A SYSTEMATIC LITERATURE REVIEW.
Barnett, R., Clarke, C., Sengupta, R., & Rouse, P. C.
Annals of the Rheumatic Diseases, 83(Suppl 1): 2092–2092. 2024.
@article{Barnett2092,
  author = {Barnett, R. and Clarke, C. and Sengupta, R. and Rouse, P. C.},
  title = {AB1460 DEVELOPMENT STRATEGIES AND THEORETICAL UNDERPINNINGS OF SMARTPHONE APPS TO SUPPORT SELF-MANAGEMENT OF RHEUMATIC AND MUSCULOSKELETAL DISEASES: A SYSTEMATIC LITERATURE REVIEW},
  volume = {83},
  number = {Suppl 1},
  pages = {2092--2092},
  year = {2024},
  doi = {10.1136/annrheumdis-2024-eular.1646},
  publisher = {BMJ Publishing Group Ltd},
  abstract = {Background: Smartphone apps are being developed to support and empower people living with rheumatic and musculoskeletal diseases (RMDs) in their self-management. Motivational behaviour change theory and principles from human-computer interaction can inform the design of effective smartphone self-management interventions to ensure that interventions are engaging, effective and motivate positive health behaviours. However, it is not yet known whether these theoretical frameworks are being effectively harnessed in rheumatology. Objectives: To review the scientific peer-reviewed literature, to identify how smartphone apps are being developed and utilised in rheumatology to support people living with RMDs in their self-management. A comprehensive overview of utilised theoretical frameworks, development processes, behaviour change techniques and targeted health behaviours is described. Methods: Searches were conducted within PubMed, Scopus, Web of Science, Embase, MEDLINE and PsychInfo, up to 16th October 2023. Peer-reviewed publications were identified and screened for eligibility by two independent reviewers. Data were extracted on populations, targeted health behaviours and utilised theoretical frameworks, development processes, and behaviour change techniques. A systematic, narrative evidence synthesis is presented. Results: This systematic review identified 72 relevant publications, pertaining to 50 smartphone self-management apps. The majority of interventions were identified from Europe (52\%), followed by the Asia-Pacific (22\%), Americas (20\%), and Middle East/Africa (6\%) regions. Interventions were designed for a variety of RMDs (Figure 1) including osteoarthritis (28\%), rheumatoid arthritis (22\%), gout (10\%) and juvenile conditions (10\%). Approximately half of identified apps reported co-design with patients (48\%) and healthcare professionals (52\%). Identified publications reported a wide range of behaviours targeted by RMD smartphone self-management apps: the most common being physical activity (50\%), symptom management (28\%), psychological regulation (e.g., mindfulness, stress reduction, mood regulation, gratitude, disease acceptance, coping; 26\%), diet or weight management (16\%), and medication adherence (16\%). Just a third (34\%) of identified apps explicitly acknowledged alignment with clinical guidelines within their respective publications, and even fewer (24\%) cited a published development framework or guideline (Table 1). Less than a third (26\%) reported design with consideration of motivational behaviour change theory, six (12\%) reported theoretical frameworks to promote app engagement (three of which used persuasive systems design), and five (10\%) acknowledged use of behaviour change techniques (most frequently pertaining to goals and planning, feedback and monitoring). Figure 1. Populations targeted by identified self-management apps. Table 1. Cited use of a motivational/behaviour change theory or published development framework or guideline to inform app development. Conclusion: These results indicate that only half of RMD self-management apps are developed in collaboration with patients and healthcare professionals, and a minority are designed with consideration of motivational behaviour change theory, or theoretical frameworks to promote app engagement. This is a missed opportunity, and in order to optimise the potential of rheumatology smartphone self-management apps, these interventions must be developed in collaboration with users, and theoretical frameworks applied, to optimise app engagement and long-term positive health behaviour change (self-management). Application of theoretical frameworks to inform app development, guided and driven by the patient perspective, will allow researchers and healthcare professionals to better understand, effectively harness and test the causal pathways/mechanisms underpinning intervention engagement and health behaviour change, to optimise intervention effects, and thereby design interventions that better support and empower patients in their self-management. REFERENCES: NIL. Acknowledgements: We acknowledge Dr Simon Jones for his input and supervision during the first few months of the project. This work was supported by the Sir Halley Stewart Trust [grant number: 2316], who provided funding for RB's time during the first few months of the project. The Sir Halley Stewart Trust played no role in development of the protocol or conduct of the study, and views expressed herein do not reflect views of the Trust. Disclosure of Interests: Rosemarie Barnett: Scientific writer for a stand-alone piece of work for Pfizer, unrelated to this abstract; Christopher Clarke: None declared; Raj Sengupta: Has received honoraria from Abbvie, Biogen, BMS, Eli Lilly, MSD, Novartis, Pfizer, Roche and UCB, and has received grants from Abbvie, Celgene, Novartis, and UCB; Peter C Rouse: None declared.},
  issn = {0003-4967},
  url = {https://ard.bmj.com/content/83/Suppl_1/2092.1},
  eprint = {https://ard.bmj.com/content/83/Suppl_1/2092.1.full.pdf},
  journal = {Annals of the Rheumatic Diseases}
}
DeformIO: Dynamic Stiffness Control on a Deformable Force-sensing Display.
Nash, J. D., Steer, C., Dinca, T., Sharma, A., Favaratto Santos, A., Wildgoose, B. T., Ager, A., Clarke, C., & Alexander, J.
In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, CHI EA '24, New York, NY, USA, 2024. Association for Computing Machinery.
@inproceedings{10.1145/3613905.3650772,
  author = {Nash, James David and Steer, Cameron and Dinca, Teodora and Sharma, Adwait and Favaratto Santos, Alvaro and Wildgoose, Benjamin Timothy and Ager, Alexander and Clarke, Christopher and Alexander, Jason},
  title = {DeformIO: Dynamic Stiffness Control on a Deformable Force-sensing Display},
  year = {2024},
  isbn = {9798400703317},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3613905.3650772},
  doi = {10.1145/3613905.3650772},
  abstract = {Introducing DeformIO, a novel deformable display with co-located force input and variable stiffness output. Unlike prior work, our approach does not require pin arrays or re-configurable panels. Instead, we leveraged pneumatics and resistive sensing to enable force detection and stiffness control on a soft continuous surface. This allows users to perceive rich tactile feedback on a soft surface and replicates the benefits of fluid finger movement from traditional glass-based screens. Using a robotic arm, we conducted a series of evaluations with 3,267 trials to quantify the performance of touch and force input, as well as stiffness output. Additionally, our study confirmed users' ability to apply multiple force inputs simultaneously and distinguish stiffness levels. We illustrate how DeformIO enhances interaction through a vision for everyday interaction and include two implemented self-contained demonstrations.},
  booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
  articleno = {98},
  numpages = {8},
  keywords = {Deformable Display, Force Input, Pneumatics, Variable Stiffness},
  location = {},
  series = {CHI EA '24}
}
REVEAL: REal and Virtual Environments Augmentation Lab @ Bath.
Potts, D., Jicol, C., Clarke, C., O'Neill, E., Fitton, I. S., Dark, E., Oliveira Da Silva, M. M., Broad, Z., Sehgal, T., Hartley, J., Dalton, J., Proulx, M. J., & Lutteroth, C.
In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, CHI EA '24, New York, NY, USA, 2024. Association for Computing Machinery.
@inproceedings{10.1145/3613905.3648658,
  author = {Potts, Dominic and Jicol, Crescent and Clarke, Christopher and O'Neill, Eamonn and Fitton, Isabel Sophie and Dark, Elizabeth and Oliveira Da Silva, Manoela Milena and Broad, Zoe and Sehgal, Tarini and Hartley, Joseph and Dalton, Jeremy and Proulx, Michael J and Lutteroth, Christof},
  title = {REVEAL: REal and Virtual Environments Augmentation Lab @ Bath},
  year = {2024},
  isbn = {9798400703317},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3613905.3648658},
  doi = {10.1145/3613905.3648658},
  abstract = {The REal and Virtual Environments Augmentation Lab (REVEAL) at the University of Bath is an interdisciplinary research centre focusing on immersive technology. REVEAL investigates the fundamental principles, applications and interaction techniques of extended reality (XR), including virtual reality (VR) and augmented reality (AR). In this Interactivity demo, we will showcase some of our VR research across three areas: affective VR exergaming, learning with virtual avatars, and gaze interaction in VR.},
  booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
  articleno = {411},
  numpages = {5},
  keywords = {affect, avatars, emotion recognition, emotions, exergaming, gaze, interaction, learning, virtual reality},
  location = {},
  series = {CHI EA '24}
}
Watch This! Observational Learning in VR Promotes Better Far Transfer than Active Learning for a Fine Psychomotor Task.
Fitton, I. S., Dark, E., Oliveira da Silva, M. M., Dalton, J., Proulx, M. J., Clarke, C., & Lutteroth, C.
In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024. Association for Computing Machinery.
@inproceedings{10.1145/3613904.3642550,
  author = {Fitton, Isabel Sophie and Dark, Elizabeth and Oliveira da Silva, Manoela Milena and Dalton, Jeremy and Proulx, Michael J and Clarke, Christopher and Lutteroth, Christof},
  title = {Watch This! Observational Learning in VR Promotes Better Far Transfer than Active Learning for a Fine Psychomotor Task},
  year = {2024},
  isbn = {9798400703300},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3613904.3642550},
  doi = {10.1145/3613904.3642550},
  abstract = {Virtual Reality (VR) holds great potential for psychomotor training, with existing applications using almost exclusively a 'learning-by-doing' active learning approach, despite the possible benefits of incorporating observational learning. We compared active learning (n=26) with different variations of observational learning in VR for a manual assembly task. For observational learning, we considered three levels of visual similarity between the demonstrator avatar and the user, dissimilar (n=25), minimally similar (n=26), or a self-avatar (n=25), as similarity has been shown to improve learning. Our results suggest observational learning can be effective in VR when combined with 'hands-on' practice and can lead to better far skill transfer to real-world contexts that differ from the training context. Furthermore, we found self-similarity in observational learning can be counterproductive when focusing on a manual task, and skills decay quickly without further training. We discuss these findings and derive design recommendations for future VR training.},
  booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems},
  articleno = {721},
  numpages = {19},
  keywords = {Active, Avatar Similarity, Observational, Psychomotor, Skills Training, Virtual Reality},
  location = {Honolulu, HI, USA},
  series = {CHI '24}
}
Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming.
Potts, D., Broad, Z., Sehgal, T., Hartley, J., O'Neill, E., Jicol, C., Clarke, C., & Lutteroth, C.
In Proceedings of the CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA, 2024. Association for Computing Machinery.
@inproceedings{10.1145/3613904.3642611,
  author = {Potts, Dominic and Broad, Zoe and Sehgal, Tarini and Hartley, Joseph and O'Neill, Eamonn and Jicol, Crescent and Clarke, Christopher and Lutteroth, Christof},
  title = {Sweating the Details: Emotion Recognition and the Influence of Physical Exertion in Virtual Reality Exergaming},
  year = {2024},
  isbn = {9798400703300},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3613904.3642611},
  doi = {10.1145/3613904.3642611},
  abstract = {There is great potential for adapting Virtual Reality (VR) exergames based on a user's affective state. However, physical activity and VR interfere with physiological sensors, making affect recognition challenging. We conducted a study (n=72) in which users experienced four emotion inducing VR exergaming environments (happiness, sadness, stress and calmness) at three different levels of exertion (low, medium, high). We collected physiological measures through pupillometry, electrodermal activity, heart rate, and facial tracking, as well as subjective affect ratings. Our validated virtual environments, data, and analyses are openly available. We found that the level of exertion influences the way affect can be recognised, as well as affect itself. Furthermore, our results highlight the importance of data cleaning to account for environmental and interpersonal factors interfering with physiological measures. The results shed light on the relationships between physiological measures and affective states and inform design choices about sensors and data cleaning approaches for affective VR.},
  booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems},
  articleno = {757},
  numpages = {21},
  keywords = {affect recognition, emotion recognition, exergaming, high-intensity exercise, physiological sensing, psychophysiological correlates, virtual reality},
  location = {Honolulu, HI, USA},
  series = {CHI '24}
}
P169 Optimising self-management in axial spondyloarthritis: a qualitative exploration of patient perspectives.
Barnett, R., Clarke, C., Sengupta, R., & Rouse, P.
Rheumatology, 63(Supplement_1): keae163–208. 2024.
@article{barnett2024p169,
  title = {P169 Optimising self-management in axial spondyloarthritis: a qualitative exploration of patient perspectives},
  author = {Barnett, Rosie and Clarke, Christopher and Sengupta, Raj and Rouse, Peter},
  journal = {Rheumatology},
  volume = {63},
  number = {Supplement\_1},
  pages = {keae163--208},
  year = {2024},
  publisher = {Oxford University Press}
}
2023 (7)
The Effectiveness of Fully Automated Digital Interventions in Promoting Mental Well-Being in the General Population: Systematic Review and Meta-Analysis.
Groot, J., MacLellan, A., Butler, M., Todor, E., Zulfiqar, M., Thackrah, T., Clarke, C., Brosnan, M., & Ainsworth, B.
JMIR Ment Health, 10: e44658. Oct 2023.
@article{info:doi/10.2196/44658,
  author = "Groot, Julia and MacLellan, Alexander and Butler, Madelaine and Todor, Elisa and Zulfiqar, Mahnoor and Thackrah, Timothy and Clarke, Christopher and Brosnan, Mark and Ainsworth, Ben",
  title = "The Effectiveness of Fully Automated Digital Interventions in Promoting Mental Well-Being in the General Population: Systematic Review and Meta-Analysis",
  journal = "JMIR Ment Health",
  year = "2023",
  month = "Oct",
  day = "19",
  volume = "10",
  pages = "e44658",
  keywords = "mental well-being; promotion; intervention; digital; web-based; apps; mobile phone",
  abstract = "Background: Recent years have highlighted an increasing need to promote mental well-being in the general population. This has led to a rapidly growing market for fully automated digital mental well-being tools. Although many individuals have started using these tools in their daily lives, evidence on the overall effectiveness of digital mental well-being tools is currently lacking. Objective: This study aims to review the evidence on the effectiveness of fully automated digital interventions in promoting mental well-being in the general population. Methods: Following the preregistration of the systematic review protocol on PROSPERO, searches were carried out in MEDLINE, Web of Science, Cochrane, PsycINFO, PsycEXTRA, Scopus, and ACM Digital (initial searches in February 2022; updated in October 2022). Studies were included if they contained a general population sample and a fully automated digital intervention that exclusively used psychological mental well-being promotion activities. Two reviewers, blinded to each other's decisions, conducted data selection, extraction, and quality assessment of the included studies. Narrative synthesis and a random-effects model of per-protocol data were adopted. Results: We included 19 studies that involved 7243 participants. These studies included 24 fully automated digital mental well-being interventions, of which 15 (63{\%}) were included in the meta-analysis. Compared with no intervention, there was a significant small effect of fully automated digital mental well-being interventions on mental well-being in the general population (standardized mean difference 0.19, 95{\%} CI 0.04-0.33; P=.02). Specifically, mindfulness-, acceptance-, commitment-, and compassion-based interventions significantly promoted mental well-being in the general population (P=.006); insufficient evidence was available for positive psychology and cognitive behavioral therapy--based interventions; and contraindications were found for integrative approaches. Overall, there was substantial heterogeneity, which could be partially explained by the intervention duration, comparator, and study outcomes. The risk of bias was high, and confidence in the quality of the evidence was very low (Grading of Recommendations, Assessment, Development, and Evaluations), primarily because of the high rates of study dropout (average 37{\%}; range 0{\%}-85{\%}) and suboptimal intervention adherence (average 40{\%}). Conclusions: This study provides a novel contribution to knowledge regarding the effectiveness, strengths, and weaknesses of fully automated digital mental well-being interventions in the general population. Future research and practice should consider these findings when developing fully automated digital mental well-being tools. In addition, research should aim to investigate positive psychology and cognitive behavioral therapy--based tools as well as develop further strategies to improve adherence and reduce dropout in fully automated digital mental well-being interventions. Finally, it should aim to understand when and for whom these interventions are particularly beneficial. Trial Registration: PROSPERO CRD42022310702; https://tinyurl.com/yc7tcwy7",
  issn = "2368-7959",
  doi = "10.2196/44658",
  url = "https://mental.jmir.org/2023/1/e44658",
  url = "https://doi.org/10.2196/44658",
  url = "http://www.ncbi.nlm.nih.gov/pubmed/37856172"
}
\n
\n\n\n
Background: Recent years have highlighted an increasing need to promote mental well-being in the general population. This has led to a rapidly growing market for fully automated digital mental well-being tools. Although many individuals have started using these tools in their daily lives, evidence on the overall effectiveness of digital mental well-being tools is currently lacking. Objective: This study aims to review the evidence on the effectiveness of fully automated digital interventions in promoting mental well-being in the general population. Methods: Following the preregistration of the systematic review protocol on PROSPERO, searches were carried out in MEDLINE, Web of Science, Cochrane, PsycINFO, PsycEXTRA, Scopus, and ACM Digital (initial searches in February 2022; updated in October 2022). Studies were included if they contained a general population sample and a fully automated digital intervention that exclusively used psychological mental well-being promotion activities. Two reviewers, blinded to each other's decisions, conducted data selection, extraction, and quality assessment of the included studies. Narrative synthesis and a random-effects model of per-protocol data were adopted. Results: We included 19 studies that involved 7243 participants. These studies included 24 fully automated digital mental well-being interventions, of which 15 (63%) were included in the meta-analysis. Compared with no intervention, there was a significant small effect of fully automated digital mental well-being interventions on mental well-being in the general population (standardized mean difference 0.19, 95% CI 0.04-0.33; P=.02). Specifically, mindfulness-, acceptance-, commitment-, and compassion-based interventions significantly promoted mental well-being in the general population (P=.006); insufficient evidence was available for positive psychology and cognitive behavioral therapy–based interventions; and contraindications were found for integrative approaches. Overall, there was substantial heterogeneity, which could be partially explained by the intervention duration, comparator, and study outcomes. The risk of bias was high, and confidence in the quality of the evidence was very low (Grading of Recommendations, Assessment, Development, and Evaluations), primarily because of the high rates of study dropout (average 37%; range 0%-85%) and suboptimal intervention adherence (average 40%). Conclusions: This study provides a novel contribution to knowledge regarding the effectiveness, strengths, and weaknesses of fully automated digital mental well-being interventions in the general population. Future research and practice should consider these findings when developing fully automated digital mental well-being tools. In addition, research should aim to investigate positive psychology and cognitive behavioral therapy–based tools as well as develop further strategies to improve adherence and reduce dropout in fully automated digital mental well-being interventions. Finally, it should aim to understand when and for whom these interventions are particularly beneficial. Trial Registration: PROSPERO CRD42022310702; https://tinyurl.com/yc7tcwy7
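The pooled effect reported above comes from a random-effects meta-analysis of standardized mean differences. As an illustration only (not the review's actual analysis code), DerSimonian-Laird pooling can be sketched as follows; the function name is hypothetical:

```python
import math

def random_effects_pool(effects, variances):
    """Pool standardized mean differences with a DerSimonian-Laird
    random-effects model; returns the pooled SMD and its 95% CI."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q quantifies between-study heterogeneity
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

In practice such analyses are run with dedicated tooling (e.g. the R metafor package), which also reports the heterogeneity statistics discussed in the abstract.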
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n REVEAL: REal and Virtual Environments Augmentation Lab @ Bath.\n \n \n \n \n\n\n \n Lutteroth, C., Jicol, C., Clarke, C., Proulx, M. J, O'Neill, E., Petrini, K., Fitton, I. S., Tor, E., & Yoon, J.\n\n\n \n\n\n\n In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, of CHI EA '23, New York, NY, USA, 2023. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544549.3583934,
  author = {Lutteroth, Christof and Jicol, Crescent and Clarke, Christopher and Proulx, Michael J and O'Neill, Eamonn and Petrini, Karin and Fitton, Isabel Sophie and Tor, Emilia and Yoon, Jinha},
  title = {REVEAL: REal and Virtual Environments Augmentation Lab @ Bath},
  year = {2023},
  isbn = {9781450394222},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544549.3583934},
  doi = {10.1145/3544549.3583934},
  abstract = {The REal and Virtual Environments Augmentation Lab (REVEAL) is a research centre for Human-Computer Interaction at the University of Bath with a focus on immersive technology. Virtual reality (VR) has the potential to change the way we experience the world and interact with each other. It can be used for a wide range of applications such as exercise, training, entertainment, and therapy. In this demo paper, we will showcase some of our VR research that has been published at CHI which can enhance VR experiences across three specific areas of study: learning with avatars, presence, and exergaming.},
  booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {470},
  numpages = {4},
  keywords = {Avatars, Exergaming, Learning, Presence, Virtual Reality},
  location = {Hamburg, Germany},
  series = {CHI EA '23}
}
\n
\n\n\n
\n The REal and Virtual Environments Augmentation Lab (REVEAL) is a research centre for Human-Computer Interaction at the University of Bath with a focus on immersive technology. Virtual reality (VR) has the potential to change the way we experience the world and interact with each other. It can be used for a wide range of applications such as exercise, training, entertainment, and therapy. In this demo paper, we will showcase some of our VR research that has been published at CHI which can enhance VR experiences across three specific areas of study: learning with avatars, presence, and exergaming.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n FakeForward: Using Deepfake Technology for Feedforward Learning.\n \n \n \n \n\n\n \n Clarke, C., Xu, J., Zhu, Y., Dharamshi, K., McGill, H., Black, S., & Lutteroth, C.\n\n\n \n\n\n\n In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, of CHI '23, New York, NY, USA, 2023. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544548.3581100,
  author = {Clarke, Christopher and Xu, Jingnan and Zhu, Ye and Dharamshi, Karan and McGill, Harry and Black, Stephen and Lutteroth, Christof},
  title = {FakeForward: Using Deepfake Technology for Feedforward Learning},
  year = {2023},
  isbn = {9781450394215},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544548.3581100},
  doi = {10.1145/3544548.3581100},
  abstract = {Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward – a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.},
  booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {715},
  numpages = {17},
  keywords = {Deepfake, Feedforward, Fitness, Physical Exercise, Public Speaking, Skill Acquisition, Training, Videos},
  location = {Hamburg, Germany},
  series = {CHI '23}
}
\n
\n\n\n
\n Videos are commonly used to support learning of new skills, to improve existing skills, and as a source of motivation for training. Video self-modelling (VSM) is a learning technique that improves performance and motivation by showing a user a video of themselves performing a skill at a level they have not yet achieved. Traditional VSM is very data and labour intensive: a lot of video footage needs to be collected and manually edited in order to create an effective self-modelling video. We address this by presenting FakeForward – a method which uses deepfakes to create self-modelling videos from videos of other people. FakeForward turns videos of better-performing people into effective, personalised training tools by replacing their face with the user’s. We investigate how FakeForward can be effectively applied and demonstrate its efficacy in rapidly improving performance in physical exercises, as well as confidence and perceived competence in public speaking.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dancing with the Avatars: Minimal Avatar Customisation Enhances Learning in a Psychomotor Task.\n \n \n \n \n\n\n \n Fitton, I., Clarke, C., Dalton, J., Proulx, M. J, & Lutteroth, C.\n\n\n \n\n\n\n In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, of CHI '23, New York, NY, USA, 2023. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544548.3580944,
  author = {Fitton, Isabel and Clarke, Christopher and Dalton, Jeremy and Proulx, Michael J and Lutteroth, Christof},
  title = {Dancing with the Avatars: Minimal Avatar Customisation Enhances Learning in a Psychomotor Task},
  year = {2023},
  isbn = {9781450394215},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544548.3580944},
  doi = {10.1145/3544548.3580944},
  abstract = {Virtual environments can support psychomotor learning by allowing learners to observe instructor avatars. Instructor avatars that look like the learner hold promise in enhancing learning; however, it is unclear whether this works for psychomotor tasks and how similar avatars need to be. We investigated ‘minimal’ customisation of instructor avatars, approximating a learner’s appearance by matching only key visual features: gender, skin-tone, and hair colour. These avatars can be created easily and avoid problems of highly similar avatars. Using modern dancing as a skill to learn, we compared the effects of visually similar and dissimilar avatars, considering both learning on a screen (n=59) and in VR (n=38). Our results indicate that minimal avatar customisation leads to significantly more vivid visual imagery of the dance moves than dissimilar avatars. We analyse variables affecting interindividual differences, discuss the results in relation to theory, and derive design implications for psychomotor training in virtual environments.},
  booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {714},
  numpages = {16},
  keywords = {Avatar Customisation, Psychomotor, Skills Training, Virtual Environments, Virtual Reality},
  location = {Hamburg, Germany},
  series = {CHI '23}
}
\n
\n\n\n
\n Virtual environments can support psychomotor learning by allowing learners to observe instructor avatars. Instructor avatars that look like the learner hold promise in enhancing learning; however, it is unclear whether this works for psychomotor tasks and how similar avatars need to be. We investigated ‘minimal’ customisation of instructor avatars, approximating a learner’s appearance by matching only key visual features: gender, skin-tone, and hair colour. These avatars can be created easily and avoid problems of highly similar avatars. Using modern dancing as a skill to learn, we compared the effects of visually similar and dissimilar avatars, considering both learning on a screen (n=59) and in VR (n=38). Our results indicate that minimal avatar customisation leads to significantly more vivid visual imagery of the dance moves than dissimilar avatars. We analyse variables affecting interindividual differences, discuss the results in relation to theory, and derive design implications for psychomotor training in virtual environments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality.\n \n \n \n \n\n\n \n Jicol, C., Clarke, C., Tor, E., Yip, H. L., Yoon, J., Bevan, C., Bowden, H., Brann, E., Cater, K., Cole, R., Deeley, Q., Eidinow, E., O'Neill, E., Lutteroth, C., & Proulx, M. J\n\n\n \n\n\n\n In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, of CHI '23, New York, NY, USA, 2023. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544548.3581212,
  author = {Jicol, Crescent and Clarke, Christopher and Tor, Emilia and Yip, Hiu Lam and Yoon, Jinha and Bevan, Chris and Bowden, Hugh and Brann, Elisa and Cater, Kirsten and Cole, Richard and Deeley, Quinton and Eidinow, Esther and O'Neill, Eamonn and Lutteroth, Christof and Proulx, Michael J},
  title = {Imagine That! Imaginative Suggestibility Affects Presence in Virtual Reality},
  year = {2023},
  isbn = {9781450394215},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544548.3581212},
  doi = {10.1145/3544548.3581212},
  abstract = {Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience – a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants’ (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.},
  booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {397},
  numpages = {11},
  keywords = {imaginative suggestibility, presence, priming, virtual reality},
  location = {Hamburg, Germany},
  series = {CHI '23}
}
\n
\n\n\n
\n Personality characteristics can affect how much presence an individual experiences in virtual reality, and researchers have explored how it may be possible to prime users to increase their sense of presence. A personality characteristic that has yet to be explored in the VR literature is imaginative suggestibility, the ability of an individual to successfully experience an imaginary scenario as if it were real. In this paper, we explore how suggestibility and priming affect presence when consulting an ancient oracle in VR as part of an educational experience – a common VR application. We show for the first time how imaginative suggestibility is a major factor which affects presence and emotions experienced in VR, while priming cues have no effect on participants’ (n=128) user experience, contrasting results from prior work. We consider the impacts of these findings for VR design and provide guidelines based on our results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Realism and Field of View Affect Presence in VR but Not the Way You Think.\n \n \n \n \n\n\n \n Jicol, C., Clarke, C., Tor, E., Dakin, R. M, Lancaster, T. C., Chang, S. T., Petrini, K., O'Neill, E., Proulx, M. J, & Lutteroth, C.\n\n\n \n\n\n\n In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, of CHI '23, New York, NY, USA, 2023. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544548.3581448,
  author = {Jicol, Crescent and Clarke, Christopher and Tor, Emilia and Dakin, Rebecca M and Lancaster, Tom Charlie and Chang, Sze Tung and Petrini, Karin and O'Neill, Eamonn and Proulx, Michael J and Lutteroth, Christof},
  title = {Realism and Field of View Affect Presence in VR but Not the Way You Think},
  year = {2023},
  isbn = {9781450394215},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544548.3581448},
  doi = {10.1145/3544548.3581448},
  abstract = {Presence is one of the most studied and most important variables in immersive virtual reality (VR) and it influences the effectiveness of many VR applications. Separate bodies of research indicate that presence is determined by (1) technical factors such as the visual realism of a virtual environment (VE) and the field of view (FoV), and (2) human factors such as emotions and agency. However, it remains unknown how technical and human factors may interact in the presence formation process. We conducted a user study (n=360) to investigate the effects of visual realism (high/low), FoV (high/low), emotions (focusing on fear) and agency (yes/no) on presence. Counter to previous assumptions, technical factors did not affect presence directly but were moderated through human factors. We propose TAP-Fear, a structural equation model that describes how design decisions, technical factors and human factors combine and interact in the formation of presence.},
  booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {399},
  numpages = {17},
  keywords = {agency, emotions, field of view, level of detail, presence, virtual reality},
  location = {Hamburg, Germany},
  series = {CHI '23}
}
\n
\n\n\n
\n Presence is one of the most studied and most important variables in immersive virtual reality (VR) and it influences the effectiveness of many VR applications. Separate bodies of research indicate that presence is determined by (1) technical factors such as the visual realism of a virtual environment (VE) and the field of view (FoV), and (2) human factors such as emotions and agency. However, it remains unknown how technical and human factors may interact in the presence formation process. We conducted a user study (n=360) to investigate the effects of visual realism (high/low), FoV (high/low), emotions (focusing on fear) and agency (yes/no) on presence. Counter to previous assumptions, technical factors did not affect presence directly but were moderated through human factors. We propose TAP-Fear, a structural equation model that describes how design decisions, technical factors and human factors combine and interact in the formation of presence.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Vergence Matching: Inferring Attention to Objects in 3D Environments for Gaze-Assisted Selection.\n \n \n \n \n\n\n \n Sidenmark, L., Clarke, C., Newn, J., Lystbæk, M. N., Pfeuffer, K., & Gellersen, H.\n\n\n \n\n\n\n In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, of CHI '23, New York, NY, USA, 2023. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544548.3580685,
  author = {Sidenmark, Ludwig and Clarke, Christopher and Newn, Joshua and Lystb\ae{}k, Mathias N. and Pfeuffer, Ken and Gellersen, Hans},
  title = {Vergence Matching: Inferring Attention to Objects in 3D Environments for Gaze-Assisted Selection},
  year = {2023},
  isbn = {9781450394215},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3544548.3580685},
  doi = {10.1145/3544548.3580685},
  abstract = {Gaze pointing is the de facto standard to infer attention and interact in 3D environments but is limited by motor and sensor limitations. To circumvent these limitations, we propose a vergence-based motion correlation method to detect visual attention toward very small targets. Smooth depth movements relative to the user are induced on 3D objects, which cause slow vergence eye movements when looked upon. Using the principle of motion correlation, the depth movements of the object and vergence eye movements are matched to determine which object the user is focussing on. In two user studies, we demonstrate how the technique can reliably infer gaze attention on very small targets, systematically explore how different stimulus motions affect attention detection, and show how the technique can be extended to multi-target selection. Finally, we provide example applications using the concept and design guidelines for small target and accuracy-independent attention detection in 3D environments.},
  booktitle = {Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems},
  articleno = {257},
  numpages = {15},
  keywords = {Attention Detection, Gaze, Motion Correlation, Selection, Small Targets, Vergence, Virtual Reality},
  location = {Hamburg, Germany},
  series = {CHI '23}
}
\n
\n\n\n
\n Gaze pointing is the de facto standard to infer attention and interact in 3D environments but is limited by motor and sensor limitations. To circumvent these limitations, we propose a vergence-based motion correlation method to detect visual attention toward very small targets. Smooth depth movements relative to the user are induced on 3D objects, which cause slow vergence eye movements when looked upon. Using the principle of motion correlation, the depth movements of the object and vergence eye movements are matched to determine which object the user is focussing on. In two user studies, we demonstrate how the technique can reliably infer gaze attention on very small targets, systematically explore how different stimulus motions affect attention detection, and show how the technique can be extended to multi-target selection. Finally, we provide example applications using the concept and design guidelines for small target and accuracy-independent attention detection in 3D environments.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2022\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Step Into My Mind Palace: Exploration of a Collaborative Paragogy Tool in VR.\n \n \n \n\n\n \n Sims, R., Chang, B., Bennett, V., Krishnan, A., Aboubakar, A., Coman, G., Bahrami, A., Huang, Z., Clarke, C., & Karnik, A.\n\n\n \n\n\n\n In 2022 8th International Conference of the Immersive Learning Research Network (iLRN), pages 1-8, 2022. \n \n\n\n\n
\n
@INPROCEEDINGS{9815936,
  author = {Sims, Robert and Chang, Barry and Bennett, Verity and Krishnan, Advaith and Aboubakar, Abdalslam and Coman, George and Bahrami, Abdulrazak and Huang, Zehao and Clarke, Christopher and Karnik, Abhijit},
  booktitle = {2022 8th International Conference of the Immersive Learning Research Network (iLRN)},
  title = {Step Into My Mind Palace: Exploration of a Collaborative Paragogy Tool in VR},
  year = {2022},
  pages = {1-8},
  doi = {10.23919/iLRN55037.2022.9815936}
}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n OpenEarable: Open Hardware Earable Sensing Platform.\n \n \n \n \n\n\n \n Röddiger, T., King, T., Roodt, D. R., Clarke, C., & Beigl, M.\n\n\n \n\n\n\n In Proceedings of the 1st International Workshop on Earable Computing, of EarComp’22, pages 29–34, New York, NY, USA, 2022. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3544793.3563415,
  title = {OpenEarable: Open Hardware Earable Sensing Platform},
  author = {Röddiger, Tobias and King, Tobias and Roodt, Dylan Ray and Clarke, Christopher and Beigl, Michael},
  year = {2022},
  booktitle = {Proceedings of the 1st International Workshop on Earable Computing},
  location = {Cambridge, United Kingdom},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  series = {EarComp’22},
  pages = {29–34},
  doi = {10.1145/3544793.3563415},
  url = {https://doi.org/10.1145/3544793.3563415},
  numpages = {6},
  keywords = {In-Ear Headphones, IMU, Monitoring}
}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena.\n \n \n \n \n\n\n \n Röddiger, T., Clarke, C., Breitling, P., Schneegans, T., Zhao, H., Gellersen, H., & Beigl, M.\n\n\n \n\n\n\n Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 6(3). sep 2022.\n \n\n\n\n
\n
@article{10.1145/3550314,
  author = {R\"{o}ddiger, Tobias and Clarke, Christopher and Breitling, Paula and Schneegans, Tim and Zhao, Haibin and Gellersen, Hans and Beigl, Michael},
  title = {Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena},
  year = {2022},
  issue_date = {September 2022},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {6},
  number = {3},
  url = {https://doi.org/10.1145/3550314},
  doi = {10.1145/3550314},
  abstract = {Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected on a wearable, small form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures including the brain, blood vessels, and facial muscles which reveal a wealth of information. They can be easily reached by the hands and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas of (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.},
  journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
  month = {sep},
  articleno = {135},
  numpages = {57},
  keywords = {earables, ear wearable, ear-mounted, headphones, earbuds, ear-attached, ear-worn, hearables, earpiece, earphones, ear-based}
}
\n
\n\n\n
\n Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected on a wearable, small form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures including the brain, blood vessels, and facial muscles which reveal a wealth of information. They can be easily reached by the hands and the ear canal itself is affected by mouth, face, and head movements. We have conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas of (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Advanced Visual Interfaces for Augmented Video.\n \n \n \n \n\n\n \n Coccoli, M., Galluccio, I., Torre, I., Amenduni, F., Cattaneo, A., & Clarke, C.\n\n\n \n\n\n\n In Proceedings of the 2022 International Conference on Advanced Visual Interfaces, of AVI 2022, New York, NY, USA, 2022. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3531073.3535253,
  author = {Coccoli, Mauro and Galluccio, Ilenia and Torre, Ilaria and Amenduni, Francesca and Cattaneo, Alberto and Clarke, Christopher},
  title = {Advanced Visual Interfaces for Augmented Video},
  year = {2022},
  isbn = {9781450397193},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3531073.3535253},
  doi = {10.1145/3531073.3535253},
  abstract = {The growing use of online videos across a wide range of applications, including education and training, demands new approaches to enhance their utility and the user experience, and to minimise or overcome their limitations. In addition, these new approaches must consider the needs of users with different requirements, abilities, and usage contexts. Advances in human-computer interaction, immersive video, artificial intelligence and adaptive systems can be effectively exploited to this aim, opening up exciting opportunities for enhancing the video medium. The purpose of this workshop is to bring together experts in the fields above and from popular application domains in order to provide a forum for discussing the current state-of-the-art and requirements for specific application domains, in addition to proposing experimental and theoretical approaches.},
  booktitle = {Proceedings of the 2022 International Conference on Advanced Visual Interfaces},
  articleno = {91},
  numpages = {3},
  keywords = {visual feedback, intelligent user interfaces, 360 degree video, hypervideos},
  location = {Frascati, Rome, Italy},
  series = {AVI 2022}
}
\n
\n\n\n
\n The growing use of online videos across a wide range of applications, including education and training, demands new approaches to enhance their utility and the user experience, and to minimise or overcome their limitations. In addition, these new approaches must consider the needs of users with different requirements, abilities, and usage contexts. Advances in human-computer interaction, immersive video, artificial intelligence and adaptive systems can be effectively exploited to this aim, opening up exciting opportunities for enhancing the video medium. The purpose of this workshop is to bring together experts in the fields above and from popular application domains in order to provide a forum for discussing the current state-of-the-art and requirements for specific application domains, in addition to proposing experimental and theoretical approaches.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2021\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Reactive Video: Movement Sonification as Auditory Feedback for Supporting Physical Activity.\n \n \n \n\n\n \n Cavdir, D., Clarke, C., Chiu, P., Denoue, L., & Kimber, D.\n\n\n \n\n\n\n In New Interfaces for Musical Expression (NIME) 2021. 2021.\n \n\n\n\n
\n
@incollection{cavdir2021reactive,
  title = {Reactive Video: Movement Sonification as Auditory Feedback for Supporting Physical Activity},
  author = {Cavdir, Doga and Clarke, Christopher and Chiu, Patrick and Denoue, Laurent and Kimber, Don},
  booktitle = {New Interfaces for Musical Expression (NIME) 2021},
  year = {2021}
}
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n EarRumble: Discreet Hands- and Eyes-Free Input by Voluntary Tensor Tympani Muscle Contraction.\n \n \n \n \n\n\n \n Röddiger, T., Clarke, C., Wolffram, D., Budde, M., & Beigl, M.\n\n\n \n\n\n\n In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, of CHI '21, New York, NY, USA, 2021. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{10.1145/3411764.3445205,
  author = {R\"{o}ddiger, Tobias and Clarke, Christopher and Wolffram, Daniel and Budde, Matthias and Beigl, Michael},
  title = {EarRumble: Discreet Hands- and Eyes-Free Input by Voluntary Tensor Tympani Muscle Contraction},
  year = {2021},
  isbn = {9781450380966},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3411764.3445205},
  doi = {10.1145/3411764.3445205},
  abstract = {We explore how discreet input can be provided using the tensor tympani - a small muscle in the middle ear that some people can voluntarily contract to induce a dull rumbling sound. We investigate the prevalence and ability to control the muscle through an online questionnaire (N=192) in which 43.2% of respondents reported the ability to “ear rumble”. Data collected from participants (N=16) shows how in-ear barometry can be used to detect voluntary tensor tympani contraction in the sealed ear canal. This data was used to train a classifier based on three simple ear rumble “gestures” which achieved 95% accuracy. Finally, we evaluate the use of ear rumbling for interaction, grounded in three manual, dual-task application scenarios (N=8). This highlights the applicability of EarRumble as a low-effort and discreet eyes- and hands-free interaction technique that users found “magical” and “almost telepathic”.},
  booktitle = {Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems},
  articleno = {743},
  numpages = {14},
  keywords = {discreet interaction, in-ear barometry, hearables, earables, tensor tympani muscle, subtle gestures},
  location = {Yokohama, Japan},
  series = {CHI '21}
}
We explore how discreet input can be provided using the tensor tympani - a small muscle in the middle ear that some people can voluntarily contract to induce a dull rumbling sound. We investigate the prevalence and ability to control the muscle through an online questionnaire (N=192) in which 43.2% of respondents reported the ability to “ear rumble”. Data collected from participants (N=16) shows how in-ear barometry can be used to detect voluntary tensor tympani contraction in the sealed ear canal. This data was used to train a classifier based on three simple ear rumble “gestures” which achieved 95% accuracy. Finally, we evaluate the use of ear rumbling for interaction, grounded in three manual, dual-task application scenarios (N=8). This highlights the applicability of EarRumble as a low-effort and discreet eyes- and hands-free interaction technique that users found “magical” and “almost telepathic”.
Gaze+Hold: Eyes-Only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye.
Ramirez Gomez, A. R., Clarke, C., Sidenmark, L., & Gellersen, H.
In ACM Symposium on Eye Tracking Research and Applications (ETRA '21 Full Papers), New York, NY, USA, 2021. Association for Computing Machinery.
@inproceedings{10.1145/3448017.3457381,
author = {Ramirez Gomez, Argenis Ramirez and Clarke, Christopher and Sidenmark, Ludwig and Gellersen, Hans},
title = {Gaze+Hold: Eyes-Only Direct Manipulation with Continuous Gaze Modulated by Closure of One Eye},
year = {2021},
isbn = {9781450383448},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3448017.3457381},
doi = {10.1145/3448017.3457381},
abstract = {The eyes are coupled in their gaze function and therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold as an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability and user’s spontaneous choice of eye for modulation of input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability and workload. This is significant for the utility of Gaze+Hold as it affords flexibility for mapping of either eye in different configurations.},
booktitle = {ACM Symposium on Eye Tracking Research and Applications},
articleno = {10},
numpages = {12},
keywords = {Winks, Eye Tracking, Design, Gaze Pointing, Direct Manipulation, Closing eyelids, Gaze Interaction},
location = {Virtual Event, Germany},
series = {ETRA '21 Full Papers}
}
The eyes are coupled in their gaze function and therefore usually treated as a single input channel, limiting the range of interactions. However, people are able to open and close one eye while still gazing with the other. We introduce Gaze+Hold as an eyes-only technique that builds on this ability to leverage the eyes as separate input channels, with one eye modulating the state of interaction while the other provides continuous input. Gaze+Hold enables direct manipulation beyond pointing which we explore through the design of Gaze+Hold techniques for a range of user interface tasks. In a user study, we evaluated performance, usability and user’s spontaneous choice of eye for modulation of input. The results show that users are effective with Gaze+Hold. The choice of dominant versus non-dominant eye had no effect on performance, perceived usability and workload. This is significant for the utility of Gaze+Hold as it affords flexibility for mapping of either eye in different configurations.
2020 (5)
Reactive Video: Adaptive Video Playback Based on User Motion for Supporting Physical Activity.
Clarke, C., Cavdir, D., Chiu, P., Denoue, L., & Kimber, D.
In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20), pages 196–208, New York, NY, USA, 2020. Association for Computing Machinery.
@inproceedings{10.1145/3379337.3415591,
author = {Clarke, Christopher and Cavdir, Doga and Chiu, Patrick and Denoue, Laurent and Kimber, Don},
title = {Reactive Video: Adaptive Video Playback Based on User Motion for Supporting Physical Activity},
year = {2020},
isbn = {9781450375146},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3379337.3415591},
doi = {10.1145/3379337.3415591},
abstract = {Videos are a convenient platform to begin, maintain, or improve a fitness program or physical activity. Traditional video systems allow users to manipulate videos through specific user interface actions such as button clicks or mouse drags, but have no model of what the user is doing and are unable to adapt in useful ways. We present adaptive video playback, which seamlessly synchronises video playback with the user's movements, building upon the principle of direct manipulation video navigation. We implement adaptive video playback in Reactive Video, a vision-based system which supports users learning or practising a physical skill. The use of pre-existing videos removes the need to create bespoke content or specially authored videos, and the system can provide real-time guidance and feedback to better support users when learning new movements. Adaptive video playback using a discrete Bayes and particle filter are evaluated on a data set collected of participants performing tai chi and radio exercises. Results show that both approaches can accurately adapt to the user's movements, however reversing playback can be problematic.},
booktitle = {Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology},
pages = {196–208},
numpages = {13},
keywords = {physical activity, direct manipulation, full body, probabilistic},
location = {Virtual Event, USA},
series = {UIST '20},
url_Link = {https://eprints.lancs.ac.uk/id/eprint/147601/1/ReactiveVideo_UIST2020_PrePrint.pdf}
}
Videos are a convenient platform to begin, maintain, or improve a fitness program or physical activity. Traditional video systems allow users to manipulate videos through specific user interface actions such as button clicks or mouse drags, but have no model of what the user is doing and are unable to adapt in useful ways. We present adaptive video playback, which seamlessly synchronises video playback with the user's movements, building upon the principle of direct manipulation video navigation. We implement adaptive video playback in Reactive Video, a vision-based system which supports users learning or practising a physical skill. The use of pre-existing videos removes the need to create bespoke content or specially authored videos, and the system can provide real-time guidance and feedback to better support users when learning new movements. Adaptive video playback using a discrete Bayes and particle filter are evaluated on a data set collected of participants performing tai chi and radio exercises. Results show that both approaches can accurately adapt to the user's movements, however reversing playback can be problematic.
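The Reactive Video abstract above mentions a discrete Bayes filter for synchronising playback with the user's motion. A minimal sketch of that general idea follows — not the paper's implementation: the transition probabilities, the cyclic boundary handling via `np.roll`, and the `likelihood` input (standing in for a pose-similarity score per frame) are all assumptions made here for brevity.

```python
import numpy as np

def bayes_filter_step(belief, likelihood, step_probs=(0.05, 0.15, 0.8)):
    """One predict/update cycle of a discrete Bayes filter over frame indices.

    belief: probability distribution over video frames.
    likelihood: p(observed user pose | frame), one value per frame.
    step_probs: assumed probabilities of rewinding one frame, holding,
    and advancing one frame (np.roll wraps at the ends, a simplification).
    """
    predicted = np.zeros_like(belief)
    for shift, p in zip((-1, 0, 1), step_probs):
        predicted += p * np.roll(belief, shift)  # shift=+1 moves mass forward
    posterior = predicted * likelihood          # Bayes update
    return posterior / posterior.sum()          # renormalise
```

The playback position would then track, for example, the argmax of the posterior after each camera frame.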
Motion Coupling of Earable Devices in Camera View.
Clarke, C., Ehrich, P., & Gellersen, H.
In 19th International Conference on Mobile and Ubiquitous Multimedia (MUM 2020), pages 13–17, New York, NY, USA, 2020. Association for Computing Machinery.
@inproceedings{10.1145/3428361.3428470,
author = {Clarke, Christopher and Ehrich, Peter and Gellersen, Hans},
title = {Motion Coupling of Earable Devices in Camera View},
year = {2020},
isbn = {9781450388702},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3428361.3428470},
doi = {10.1145/3428361.3428470},
abstract = {Earables, earphones augmented with inertial sensors and real-time data accessibility, provide the opportunity for private audio channels in public settings. One of the main challenges of achieving this goal is to correctly associate which device belongs to which user without prior information. In this paper, we explore how motion of an earable, as measured by the on-board accelerometer, can be correlated against detected faces from a webcam to accurately match which user is wearing the device. We conduct a data collection and explore which type of user movement can be accurately detected using this approach, and investigate how varying the speed of the movement affects detection rates. Our results show that the approach achieves greater detection results for faster movements, and that it can differentiate the same movement across different participants with a detection rate of 86%, increasing to 92% when differentiating a movement against others.},
booktitle = {19th International Conference on Mobile and Ubiquitous Multimedia},
pages = {13–17},
numpages = {5},
keywords = {Spontaneous device association, earable, motion coupling.},
location = {Essen, Germany},
series = {MUM 2020}
}
Earables, earphones augmented with inertial sensors and real-time data accessibility, provide the opportunity for private audio channels in public settings. One of the main challenges of achieving this goal is to correctly associate which device belongs to which user without prior information. In this paper, we explore how motion of an earable, as measured by the on-board accelerometer, can be correlated against detected faces from a webcam to accurately match which user is wearing the device. We conduct a data collection and explore which type of user movement can be accurately detected using this approach, and investigate how varying the speed of the movement affects detection rates. Our results show that the approach achieves greater detection results for faster movements, and that it can differentiate the same movement across different participants with a detection rate of 86%, increasing to 92% when differentiating a movement against others.
BimodalGaze: Seamlessly Refined Pointing with Gaze and Filtered Gestural Head Movement.
Sidenmark, L., Mardanbegi, D., Gomez, A. R., Clarke, C., & Gellersen, H.
In ACM Symposium on Eye Tracking Research and Applications (ETRA '20 Full Papers), New York, NY, USA, 2020. Association for Computing Machinery.
@inproceedings{10.1145/3379155.3391312,
author = {Sidenmark, Ludwig and Mardanbegi, Diako and Gomez, Argenis Ramirez and Clarke, Christopher and Gellersen, Hans},
title = {BimodalGaze: Seamlessly Refined Pointing with Gaze and Filtered Gestural Head Movement},
year = {2020},
isbn = {9781450371339},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3379155.3391312},
doi = {10.1145/3379155.3391312},
abstract = {Eye gaze is a fast and ergonomic modality for pointing but limited in precision and accuracy. In this work, we introduce BimodalGaze, a novel technique for seamless head-based refinement of a gaze cursor. The technique leverages eye-head coordination insights to separate natural from gestural head movement. This allows users to quickly shift their gaze to targets over larger fields of view with naturally combined eye-head movement, and to refine the cursor position with gestural head movement. In contrast to an existing baseline, head refinement is invoked automatically, and only if a target is not already acquired by the initial gaze shift. Study results show that users reliably achieve fine-grained target selection, but we observed a higher rate of initial selection errors affecting overall performance. An in-depth analysis of user performance provides insight into the classification of natural versus gestural head movement, for improvement of BimodalGaze and other potential applications.},
booktitle = {ACM Symposium on Eye Tracking Research and Applications},
articleno = {8},
numpages = {9},
keywords = {Gaze interaction, Eye tracking, Refinement, Virtual reality, Eye-head coordination},
location = {Stuttgart, Germany},
series = {ETRA '20 Full Papers}
}
Eye gaze is a fast and ergonomic modality for pointing but limited in precision and accuracy. In this work, we introduce BimodalGaze, a novel technique for seamless head-based refinement of a gaze cursor. The technique leverages eye-head coordination insights to separate natural from gestural head movement. This allows users to quickly shift their gaze to targets over larger fields of view with naturally combined eye-head movement, and to refine the cursor position with gestural head movement. In contrast to an existing baseline, head refinement is invoked automatically, and only if a target is not already acquired by the initial gaze shift. Study results show that users reliably achieve fine-grained target selection, but we observed a higher rate of initial selection errors affecting overall performance. An in-depth analysis of user performance provides insight into the classification of natural versus gestural head movement, for improvement of BimodalGaze and other potential applications.
Outline Pursuits: Gaze-Assisted Selection of Occluded Objects in Virtual Reality.
Sidenmark, L., Clarke, C., Zhang, X., Phu, J., & Gellersen, H.
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), pages 1–13, New York, NY, USA, 2020. Association for Computing Machinery.
@inproceedings{10.1145/3313831.3376438,
author = {Sidenmark, Ludwig and Clarke, Christopher and Zhang, Xuesong and Phu, Jenny and Gellersen, Hans},
title = {Outline Pursuits: Gaze-Assisted Selection of Occluded Objects in Virtual Reality},
year = {2020},
isbn = {9781450367080},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3313831.3376438},
doi = {10.1145/3313831.3376438},
abstract = {In 3D environments, objects can be difficult to select when they overlap, as this affects available target area and increases selection ambiguity. We introduce Outline Pursuits which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline that is traversed by a moving stimulus. This affords completion of the selection by gaze attention to the intended target's outline motion, detected by matching the user's smooth pursuit eye movement. We demonstrate two techniques implemented based on the concept, one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional raycasting, the techniques require less movement for selection as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.},
booktitle = {Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
pages = {1–13},
numpages = {13},
keywords = {eye tracking, occlusion, smooth pursuits, virtual reality},
location = {Honolulu, HI, USA},
series = {CHI '20}
}
In 3D environments, objects can be difficult to select when they overlap, as this affects available target area and increases selection ambiguity. We introduce Outline Pursuits which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline that is traversed by a moving stimulus. This affords completion of the selection by gaze attention to the intended target's outline motion, detected by matching the user's smooth pursuit eye movement. We demonstrate two techniques implemented based on the concept, one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional raycasting, the techniques require less movement for selection as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.
Dynamic motion coupling of body movement for input control.
Clarke, C.
Lancaster University (United Kingdom), 2020.
@book{clarke2020dynamic,
title = {Dynamic motion coupling of body movement for input control},
author = {Clarke, Christopher},
year = {2020},
publisher = {Lancaster University (United Kingdom)}
}
2019 (1)
Monocular Gaze Depth Estimation Using the Vestibulo-Ocular Reflex.
Mardanbegi, D., Clarke, C., & Gellersen, H.
In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications (ETRA '19), New York, NY, USA, 2019. Association for Computing Machinery.
@inproceedings{10.1145/3314111.3319822,
author = {Mardanbegi, Diako and Clarke, Christopher and Gellersen, Hans},
title = {Monocular Gaze Depth Estimation Using the Vestibulo-Ocular Reflex},
year = {2019},
isbn = {9781450367097},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3314111.3319822},
doi = {10.1145/3314111.3319822},
abstract = {Gaze depth estimation presents a challenge for eye tracking in 3D. This work investigates a novel approach to the problem based on eye movement mediated by the vestibulo-ocular reflex (VOR). VOR stabilises gaze on a target during head movement, with eye movement in the opposite direction, and the VOR gain increases the closer the fixated target is to the viewer. We present a theoretical analysis of the relationship between VOR gain and depth which we investigate with empirical data collected in a user study (N=10). We show that VOR gain can be captured using pupil centres, and propose and evaluate a practical method for gaze depth estimation based on a generic function of VOR gain and two-point depth calibration. The results show that VOR gain is comparable with vergence in capturing depth while only requiring one eye, and provide insight into open challenges in harnessing VOR gain as a robust measure.},
booktitle = {Proceedings of the 11th ACM Symposium on Eye Tracking Research \& Applications},
articleno = {20},
numpages = {9},
keywords = {eye movement, eye tracking, 3D gaze estimation, fixation depth, VOR, gaze depth estimation},
location = {Denver, Colorado},
series = {ETRA '19}
}
Gaze depth estimation presents a challenge for eye tracking in 3D. This work investigates a novel approach to the problem based on eye movement mediated by the vestibulo-ocular reflex (VOR). VOR stabilises gaze on a target during head movement, with eye movement in the opposite direction, and the VOR gain increases the closer the fixated target is to the viewer. We present a theoretical analysis of the relationship between VOR gain and depth which we investigate with empirical data collected in a user study (N=10). We show that VOR gain can be captured using pupil centres, and propose and evaluate a practical method for gaze depth estimation based on a generic function of VOR gain and two-point depth calibration. The results show that VOR gain is comparable with vergence in capturing depth while only requiring one eye, and provide insight into open challenges in harnessing VOR gain as a robust measure.
2017 (4)
MatchPoint: Spontaneous Spatial Coupling of Body Movement for Touchless Pointing.
Clarke, C., & Gellersen, H.
In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (UIST '17), pages 179–192, New York, NY, USA, 2017. Association for Computing Machinery.
@inproceedings{10.1145/3126594.3126626,
author = {Clarke, Christopher and Gellersen, Hans},
title = {MatchPoint: Spontaneous Spatial Coupling of Body Movement for Touchless Pointing},
year = {2017},
isbn = {9781450349819},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3126594.3126626},
doi = {10.1145/3126594.3126626},
abstract = {Pointing is a fundamental interaction technique where user movement is translated to spatial input on a display. Conventionally, this is based on a rigid configuration of a display coupled with a pointing device that determines the types of movement that can be sensed, and the specific ways users can affect pointer input. Spontaneous spatial coupling is a novel input technique that instead allows any body movement, or movement of tangible objects, to be appropriated for touchless pointing on an ad hoc basis. Pointer acquisition is facilitated by the display presenting graphical objects in motion, to which users can synchronise to define a temporary spatial coupling with the body part or tangible object they used in the process. The technique can be deployed using minimal hardware, as demonstrated by MatchPoint, a generic computer vision-based implementation of the technique that requires only a webcam. We explore the design space of spontaneous spatial coupling, demonstrate the versatility of the technique with application examples, and evaluate MatchPoint performance using a multi-directional pointing task.},
booktitle = {Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology},
pages = {179–192},
numpages = {14},
keywords = {vision-based interfaces, computer vision, gesture input, pointing, input techniques, touchless input, user input, bodily interaction, motion-matching},
location = {Qu\'{e}bec City, QC, Canada},
series = {UIST '17}
}
Pointing is a fundamental interaction technique where user movement is translated to spatial input on a display. Conventionally, this is based on a rigid configuration of a display coupled with a pointing device that determines the types of movement that can be sensed, and the specific ways users can affect pointer input. Spontaneous spatial coupling is a novel input technique that instead allows any body movement, or movement of tangible objects, to be appropriated for touchless pointing on an ad hoc basis. Pointer acquisition is facilitated by the display presenting graphical objects in motion, to which users can synchronise to define a temporary spatial coupling with the body part or tangible object they used in the process. The technique can be deployed using minimal hardware, as demonstrated by MatchPoint, a generic computer vision-based implementation of the technique that requires only a webcam. We explore the design space of spontaneous spatial coupling, demonstrate the versatility of the technique with application examples, and evaluate MatchPoint performance using a multi-directional pointing task.
Remote Control by Body Movement in Synchrony with Orbiting Widgets: An Evaluation of TraceMatch.
Clarke, C., Bellino, A., Esteves, A., & Gellersen, H.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 1(3). September 2017.
@article{10.1145/3130910,
author = {Clarke, Christopher and Bellino, Alessio and Esteves, Augusto and Gellersen, Hans},
title = {Remote Control by Body Movement in Synchrony with Orbiting Widgets: An Evaluation of TraceMatch},
year = {2017},
issue_date = {September 2017},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {1},
number = {3},
url = {https://doi.org/10.1145/3130910},
doi = {10.1145/3130910},
abstract = {In this work we consider how users can use body movement for remote control with minimal effort and maximum flexibility. TraceMatch is a novel technique where the interface displays available controls as circular widgets with orbiting targets, and where users can trigger a control by mimicking the displayed motion. The technique uses computer vision to detect circular motion as a uniform type of input, but is highly appropriable as users can produce matching motion with any part of their body. We present three studies that investigate input performance with different parts of the body, user preferences, and spontaneous choice of movements for input in realistic application scenarios. The results show that users can provide effective input with their head, hands and while holding objects, that multiple controls can be effectively distinguished by the difference in presented phase and direction of movement, and that users choose and switch modes of input seamlessly.},
journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
month = sep,
articleno = {45},
numpages = {22},
keywords = {Gesture input, Movement correlation, Motion matching, Computer vision, Input techniques, Remote control, User input, Motion correlation, Path mimicry, User evaluation, Vision-based interfaces}
}
In this work we consider how users can use body movement for remote control with minimal effort and maximum flexibility. TraceMatch is a novel technique where the interface displays available controls as circular widgets with orbiting targets, and where users can trigger a control by mimicking the displayed motion. The technique uses computer vision to detect circular motion as a uniform type of input, but is highly appropriable as users can produce matching motion with any part of their body. We present three studies that investigate input performance with different parts of the body, user preferences, and spontaneous choice of movements for input in realistic application scenarios. The results show that users can provide effective input with their head, hands and while holding objects, that multiple controls can be effectively distinguished by the difference in presented phase and direction of movement, and that users choose and switch modes of input seamlessly.
Motion Correlation: Selecting Objects by Matching Their Movement.
Velloso, E., Carter, M., Newn, J., Esteves, A., Clarke, C., & Gellersen, H.
ACM Trans. Comput.-Hum. Interact., 24(3). April 2017.
@article{10.1145/3064937,
author = {Velloso, Eduardo and Carter, Marcus and Newn, Joshua and Esteves, Augusto and Clarke, Christopher and Gellersen, Hans},
title = {Motion Correlation: Selecting Objects by Matching Their Movement},
year = {2017},
issue_date = {July 2017},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {24},
number = {3},
issn = {1073-0516},
url = {https://doi.org/10.1145/3064937},
doi = {10.1145/3064937},
abstract = {Selection is a canonical task in user interfaces, commonly supported by presenting objects for acquisition by pointing. In this article, we consider motion correlation as an alternative for selection. The principle is to represent available objects by motion in the interface, have users identify a target by mimicking its specific motion, and use the correlation between the system’s output with the user’s input to determine the selection. The resulting interaction has compelling properties, as users are guided by motion feedback, and only need to copy a presented motion. Motion correlation has been explored in earlier work but only recently begun to feature in holistic interface designs. We provide a first comprehensive review of the principle, and present an analysis of five previously published works, in which motion correlation underpinned the design of novel gaze and gesture interfaces for diverse application contexts. We derive guidelines for motion correlation algorithms, motion feedback, choice of modalities, overall design of motion correlation interfaces, and identify opportunities and challenges for future research and design.},
journal = {ACM Trans. Comput.-Hum. Interact.},
month = apr,
articleno = {22},
numpages = {35},
keywords = {motion tracking, natural user interfaces, gesture interfaces, gaze interaction, Motion correlation, interaction techniques, eye tracking}
}
\n
\n\n\n
\n Selection is a canonical task in user interfaces, commonly supported by presenting objects for acquisition by pointing. In this article, we consider motion correlation as an alternative for selection. The principle is to represent available objects by motion in the interface, have users identify a target by mimicking its specific motion, and use the correlation between the system’s output with the user’s input to determine the selection. The resulting interaction has compelling properties, as users are guided by motion feedback, and only need to copy a presented motion. Motion correlation has been explored in earlier work but only recently begun to feature in holistic interface designs. We provide a first comprehensive review of the principle, and present an analysis of five previously published works, in which motion correlation underpinned the design of novel gaze and gesture interfaces for diverse application contexts. We derive guidelines for motion correlation algorithms, motion feedback, choice of modalities, overall design of motion correlation interfaces, and identify opportunities and challenges identified for future research and design.\n
\n\n\n
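\n The selection principle the abstract describes can be sketched in a few lines: sample the user's input motion and each on-screen target's motion over a window, correlate them, and select the best match above a threshold. The function names, the per-axis combination rule, and the 0.8 threshold below are illustrative assumptions for a minimal sketch, not details taken from the paper.\n

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_target(user_xy, targets_xy, threshold=0.8):
    """Return the index of the target whose 2-D motion best matches the
    user's motion over the sampling window, or None if no target's
    correlation exceeds the threshold."""
    ux, uy = zip(*user_xy)
    best, best_r = None, threshold
    for i, traj in enumerate(targets_xy):
        tx, ty = zip(*traj)
        # Combine per-axis correlations by taking the minimum, so that
        # both axes must match (one simple design choice among several).
        r = min(pearson(ux, tx), pearson(uy, ty))
        if r > best_r:
            best, best_r = i, r
    return best
```

\n In the interfaces reviewed by the article, such correlations are computed continuously over a sliding window of recent samples; this static version only shows the core matching step, and Pearson correlation's scale invariance is what lets a small input gesture select a large on-screen motion.\n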
\n\n\n
\n \n\n \n \n \n \n \n AURORA: autonomous real-time on-board video analytics.\n \n \n \n\n\n \n Angelov, P., Sadeghi Tehran, P., & Clarke, C.\n\n\n \n\n\n\n Neural Computing and Applications, 28(5): 855–865. 2017.\n \n\n\n\n
\n
@article{angelov2017aurora,\n  title={AURORA: autonomous real-time on-board video analytics},\n  author={Angelov, Plamen and Sadeghi Tehran, Pouria and Clarke, Christopher},\n  journal={Neural Computing and Applications},\n  volume={28},\n  number={5},\n  pages={855--865},\n  year={2017}\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2016\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n TraceMatch: A Computer Vision Technique for User Input by Tracing of Animated Controls.\n \n \n \n \n\n\n \n Clarke, C., Bellino, A., Esteves, A., Velloso, E., & Gellersen, H.\n\n\n \n\n\n\n In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, UbiComp '16, pages 298–303, New York, NY, USA, 2016. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"TraceMatch:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{10.1145/2971648.2971714,\nauthor = {Clarke, Christopher and Bellino, Alessio and Esteves, Augusto and Velloso, Eduardo and Gellersen, Hans},\ntitle = {TraceMatch: A Computer Vision Technique for User Input by Tracing of Animated Controls},\nyear = {2016},\nisbn = {9781450344616},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {https://doi.org/10.1145/2971648.2971714},\ndoi = {10.1145/2971648.2971714},\nabstract = {Recent works have explored the concept of movement correlation interfaces, in which\nmoving objects can be selected by matching the movement of the input device to that\nof the desired object. Previous techniques relied on a single modality (e.g. gaze\nor mid-air gestures) and specific hardware to issue commands. TraceMatch is a computer\nvision technique that enables input by movement correlation while abstracting from\nany particular input modality. The technique relies only on a conventional webcam\nto enable users to produce matching gestures with any given body parts, even whilst\nholding objects. We describe an implementation of the technique for acquisition of\norbiting targets, evaluate algorithm performance for different target sizes and frequencies,\nand demonstrate use of the technique for remote control of graphical as well as physical\nobjects with different body parts.},\nbooktitle = {Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing},\npages = {298–303},\nnumpages = {6},\nkeywords = {vision-based interfaces, path mimicry, remote control, input techniques, motion matching, computer vision, ubiquitous computing, gesture input, user input},\nlocation = {Heidelberg, Germany},\nseries = {UbiComp '16}\n}\n\n
\n
\n\n\n
\n Recent works have explored the concept of movement correlation interfaces, in which moving objects can be selected by matching the movement of the input device to that of the desired object. Previous techniques relied on a single modality (e.g. gaze or mid-air gestures) and specific hardware to issue commands. TraceMatch is a computer vision technique that enables input by movement correlation while abstracting from any particular input modality. The technique relies only on a conventional webcam to enable users to produce matching gestures with any given body parts, even whilst holding objects. We describe an implementation of the technique for acquisition of orbiting targets, evaluate algorithm performance for different target sizes and frequencies, and demonstrate use of the technique for remote control of graphical as well as physical objects with different body parts.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2015\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Self-Defining Memory Cues: Creative Expression and Emotional Meaning.\n \n \n \n \n\n\n \n Sas, C., Challioner, S., Clarke, C., Wilson, R., Coman, A., Clinch, S., Harding, M., & Davies, N.\n\n\n \n\n\n\n In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA '15, pages 2013–2018, New York, NY, USA, 2015. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"Self-DefiningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{10.1145/2702613.2732842,\nauthor = {Sas, Corina and Challioner, Scott and Clarke, Christopher and Wilson, Ross and Coman, Alina and Clinch, Sarah and Harding, Mike and Davies, Nigel},\ntitle = {Self-Defining Memory Cues: Creative Expression and Emotional Meaning},\nyear = {2015},\nisbn = {9781450331463},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nurl = {https://doi.org/10.1145/2702613.2732842},\ndoi = {10.1145/2702613.2732842},\nabstract = {This paper explores how people generate cues for capturing personal meaningful daily\nevents, which can be used for later recall. Such understanding can be explored to\ninform the design and development of personal informatics systems, aimed to support\nreflection and increased self-awareness. We describe a diary study with six participants\nand discuss initial findings showing the qualities of daily meaningful events, the\nvalue of different types of cues and their distinct contents for supporting episodic\nrecall.},\nbooktitle = {Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},\npages = {2013–2018},\nnumpages = {6},\nkeywords = {emotions, meaningful daily events, doodling, creativity, episodic memory recall, self-generated cues},\nlocation = {Seoul, Republic of Korea},\nseries = {CHI EA '15}\n}\n\n
\n
\n\n\n
\n This paper explores how people generate cues for capturing personal meaningful daily events, which can be used for later recall. Such understanding can be explored to inform the design and development of personal informatics systems, aimed to support reflection and increased self-awareness. We describe a diary study with six participants and discuss initial findings showing the qualities of daily meaningful events, the value of different types of cues and their distinct contents for supporting episodic recall.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2014\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n A real-time approach for autonomous detection and tracking of moving objects from UAV.\n \n \n \n\n\n \n Sadeghi-Tehran, P., Clarke, C., & Angelov, P.\n\n\n \n\n\n\n In 2014 IEEE Symposium on Evolving and Autonomous Learning Systems (EALS), pages 43–49, 2014. IEEE\n \n\n\n\n
\n
@inproceedings{sadeghi2014real,\n  title={A real-time approach for autonomous detection and tracking of moving objects from UAV},\n  author={Sadeghi-Tehran, Pouria and Clarke, Christopher and Angelov, Plamen},\n  booktitle={2014 IEEE Symposium on Evolving and Autonomous Learning Systems (EALS)},\n  pages={43--49},\n  year={2014},\n  organization={IEEE}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sariva: Smartphone app for real-time intelligent video analytics.\n \n \n \n\n\n \n Clarke, C., & Angelov, P.\n\n\n \n\n\n\n Journal of Automation, Mobile Robotics and Intelligent Systems, 15–19. 2014.\n \n\n\n\n
\n
@article{clarke2014sariva,\n  title={Sariva: Smartphone app for real-time intelligent video analytics},\n  author={Clarke, Christopher and Angelov, Plamen},\n  journal={Journal of Automation, Mobile Robotics and Intelligent Systems},\n  pages={15--19},\n  year={2014}\n}\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);