Publication list generated by bibbase.org.
You can create a new website with this list, or embed it in an existing web page by copying and pasting any of the following snippets.

JavaScript (easiest):

    <script src="https://bibbase.org/show?bib=https%3A%2F%2Fpaul-baxter.github.io%2Fbaxter-publications.bib&theme=side&group0=year&folding=1&jsonp=1"></script>

PHP:

    <?php
    $contents = file_get_contents("https://bibbase.org/show?bib=https%3A%2F%2Fpaul-baxter.github.io%2Fbaxter-publications.bib&theme=side&group0=year&folding=1&jsonp=1");
    print_r($contents);
    ?>

iFrame (not recommended):

    <iframe src="https://bibbase.org/show?bib=https%3A%2F%2Fpaul-baxter.github.io%2Fbaxter-publications.bib&theme=side&group0=year&folding=1&jsonp=1"></iframe>
For more details, see the documentation.
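The PHP snippet simply fetches the rendered HTML from the BibBase URL and prints it, so any server-side language can do the same. Below is a minimal Python equivalent using only the standard library; it is an illustrative sketch, not part of BibBase's own documentation (the URL is the one from the snippets above).

    import urllib.request

    # Same BibBase URL as in the snippets above; the response is the rendered HTML.
    URL = ("https://bibbase.org/show?bib=https%3A%2F%2Fpaul-baxter.github.io"
           "%2Fbaxter-publications.bib&theme=side&group0=year&folding=1&jsonp=1")

    with urllib.request.urlopen(URL) as response:
        html = response.read().decode("utf-8")

    print(html)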
2021 (3)
Robots in Education: An Introduction to High-tech Social Agents, Intelligent Tutors, and Curricular Tools.
Alnajjar, F.; Bartneck, C.; Baxter, P.; Belpaeme, T.; Cappuccio, M. L.; Di Dio, C.; Eyssel, F.; Handke, J.; Mubin, O.; Obaid, M.; and Reich-Stiebert, N.
2021.

@misc{alnajjar2021robots,
    title = {Robots in Education: An Introduction to High-tech Social Agents, Intelligent Tutors, and Curricular Tools},
    author = {Alnajjar, Fady and Bartneck, Christoph and Baxter, Paul and Belpaeme, Tony and Cappuccio, Massimiliano L and Di Dio, Cinzia and Eyssel, Friederike and Handke, J{\"u}rgen and Mubin, Omar and Obaid, Mohammad and Reich-Stiebert, Natalia},
    year = {2021},
    publisher = {Routledge},
    url = {https://www.taylorfrancis.com/books/mono/10.4324/9781003142706/robots-education-fady-alnajjar-christoph-bartneck-paul-baxter-tony-belpaeme-massimiliano-cappuccio-cinzia-di-dio-friederike-eyssel-j%C3%BCrgen-handke-omar-mubin-mohammad-obaid-natalia-reich-stiebert},
    abstract = {Robots in Education is an accessible introduction to the use of robotics in formal learning, encompassing pedagogical and psychological theories as well as implementation in curricula. Today, a variety of communities across education are increasingly using robots as general classroom tutors, tools in STEM projects, and subjects of study. This volume explores how the unique physical and social-interactive capabilities of educational robots can generate bonds with students while freeing instructors to focus on their individualized approaches to teaching and learning. Authored by a uniquely interdisciplinary team of scholars, the book covers the basics of robotics and their supporting technologies; attitudes toward and ethical implications of robots in learning; research methods relevant to extending our knowledge of the field; and more.},
    doi = {10.4324/9781003142706},
    isbn = {9781003142706}
}
Roboter in der Bildung: Wie Robotik das Lernen im digitalen Zeitalter bereichern kann [Robots in Education: How Robotics Can Enrich Learning in the Digital Age].
Alnajjar, F.; Bartneck, C.; Baxter, P.; Belpaeme, T.; Cappuccio, M. L.; Di Dio, C.; Eyssel, F.; Handke, J.; Mubin, O.; Obaid, M.; and Reich-Stiebert, N.
2021.

@misc{alnajjar2021roboter,
    title = {Roboter in der Bildung: Wie Robotik das Lernen im digitalen Zeitalter bereichern kann},
    author = {Alnajjar, Fady and Bartneck, Christoph and Baxter, Paul and Belpaeme, Tony and Cappuccio, Massimiliano L and Di Dio, Cinzia and Eyssel, Friederike and Handke, J{\"u}rgen and Mubin, Omar and Obaid, Mohammad and Reich-Stiebert, Natalia},
    year = {2021},
    publisher = {Carl Hanser Verlag GmbH Co KG}
}
A bioinspired angular velocity decoding neural network model for visually guided flights.
Wang, H.; Fu, Q.; Wang, H.; Baxter, P.; Peng, J.; and Yue, S.
Neural Networks, 136: 180–193. 2021.

@article{wang2021bioinspired,
    title = {A bioinspired angular velocity decoding neural network model for visually guided flights},
    author = {Wang, Huatian and Fu, Qinbing and Wang, Hongxin and Baxter, Paul and Peng, Jigen and Yue, Shigang},
    journal = {Neural Networks},
    volume = {136},
    pages = {180--193},
    year = {2021},
    publisher = {Pergamon}
}
2020 (5)
Are you still with me? Continuous engagement assessment from a robot's point of view.
Del Duchetto, F.; Baxter, P.; and Hanheide, M.
Frontiers in Robotics and AI, 7: 116. 2020.

@article{del2020you,
    title = {Are you still with me? Continuous engagement assessment from a robot's point of view},
    author = {{Del Duchetto}, Francesco and Baxter, Paul and Hanheide, Marc},
    journal = {Frontiers in Robotics and AI},
    volume = {7},
    pages = {116},
    year = {2020},
    publisher = {Frontiers Media SA},
    url = {https://www.frontiersin.org/articles/10.3389/frobt.2020.00116/full},
    doi = {10.3389/frobt.2020.00116},
    abstract = {Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improved metrics of interaction quality, and better-guided interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive way humans can successfully assess a situation for its degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement value during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only predicts engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.}
}
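The released tool lives at https://github.com/LCAS/engagement_detector. The sketch below is not that code: it is a minimal, self-contained illustration of the general architecture the abstract describes (a per-frame CNN whose features an LSTM aggregates into a single scalar in [0, 1]), with all layer sizes and input dimensions invented for the example.

    import numpy as np
    from tensorflow.keras import layers, models

    def build_engagement_regressor(n_frames=16, height=64, width=64, channels=3):
        """Frame sequence in, scalar engagement estimate in [0, 1] out."""
        frames = layers.Input(shape=(n_frames, height, width, channels))
        # Shared per-frame CNN feature extractor (sizes are illustrative).
        cnn = models.Sequential([
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
        ])
        x = layers.TimeDistributed(cnn)(frames)   # apply the CNN to every frame
        x = layers.LSTM(64)(x)                    # aggregate features over time
        engagement = layers.Dense(1, activation="sigmoid")(x)
        model = models.Model(frames, engagement)
        # Trained as regression, e.g. MSE against human coders' annotations.
        model.compile(optimizer="adam", loss="mse")
        return model

    model = build_engagement_regressor()
    clip = np.random.rand(1, 16, 64, 64, 3).astype("float32")  # one dummy video clip
    print(model.predict(clip))  # a single scalar engagement value per clip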
Requirements for robotic interpretation of social signals "in the wild": Insights from diagnostic criteria of autism spectrum disorder.
Bartlett, M. E.; Costescu, C.; Baxter, P.; and Thill, S.
Information, 11(2): 81. 2020.

@article{bartlett2020requirements,
    title = {Requirements for robotic interpretation of social signals "in the wild": Insights from diagnostic criteria of autism spectrum disorder},
    author = {Bartlett, Madeleine E and Costescu, Cristina and Baxter, Paul and Thill, Serge},
    journal = {Information},
    volume = {11},
    number = {2},
    pages = {81},
    year = {2020},
    publisher = {MDPI},
    url = {https://www.mdpi.com/2078-2489/11/2/81},
    doi = {10.3390/info11020081},
    abstract = {The last few decades have seen widespread advances in technological means to characterise observable aspects of human behaviour such as gaze or posture. Among others, these developments have also led to significant advances in social robotics. At the same time, however, social robots are still largely evaluated in idealised or laboratory conditions, and it remains unclear whether the technological progress is sufficient to let such robots move "into the wild". In this paper, we characterise the problems that a social robot in the real world may face, and review the technological state of the art in terms of addressing these. We do this by considering what it would entail to automate the diagnosis of Autism Spectrum Disorder (ASD). Just as for social robotics, ASD diagnosis fundamentally requires the ability to characterise human behaviour from observable aspects. However, therapists provide clear criteria regarding what to look for. As such, ASD diagnosis is a situation that is both relevant to real-world social robotics and comes with clear metrics. Overall, we demonstrate that even with relatively clear therapist-provided criteria and current technological progress, the need to interpret covert behaviour cannot yet be fully addressed. Our discussions have clear implications for ASD diagnosis, but also for social robotics more generally. For ASD diagnosis, we provide a classification of criteria based on whether or not they depend on covert information and highlight present-day possibilities for supporting therapists in diagnosis through technological means. For social robotics, we highlight the fundamental role of covert behaviour, show that the current state-of-the-art is unable to characterise this, and emphasise that future research should tackle this explicitly in realistic settings.}
}
Towards Intention Recognition for Human-Interacting Agricultural Robots.
Gabriel, A.; and Baxter, P.
In Proceedings of The 3rd UK-RAS Conference, 2020.

@inproceedings{gabriel2020towards,
    title = {Towards Intention Recognition for Human-Interacting Agricultural Robots},
    author = {Gabriel, Alexander and Baxter, Paul},
    booktitle = {Proceedings of The 3rd UK-RAS Conference},
    year = {2020},
    abstract = {Robots sharing a common working space with humans and interacting with them to accomplish some task should not only optimise task efficiency, but also consider the safety and comfort of their human collaborators. This requires the recognition of human intentions in order for the robot to anticipate behaviour and act accordingly. In this paper we propose a robot behavioural controller that incorporates both human behaviour and environment information as the basis of reasoning over the appropriate responses. Applied to Human-Robot Interaction in an agricultural context, we demonstrate in a series of simulations how this proposed method leads to the production of appropriate robot behaviour in a range of interaction scenarios. This work lays the foundation for the wider consideration of contextual intention recognition for the generation of interactive robot behaviour.},
    doi = {10.31256/Ye5Nz9W},
    url = {https://www.ukras.org.uk/publications/ras-proceedings/UKRAS20/pp18-20}
}
Automatic Assessment and Learning of Robot Social Abilities.
Del Duchetto, F.; Baxter, P.; and Hanheide, M.
In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 561–563, 2020.

@inproceedings{del2020automatic,
    title = {Automatic Assessment and Learning of Robot Social Abilities},
    author = {Del Duchetto, Francesco and Baxter, Paul and Hanheide, Marc},
    booktitle = {Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction},
    pages = {561--563},
    year = {2020},
    url = {https://dl.acm.org/doi/abs/10.1145/3371382.3377430},
    doi = {10.1145/3371382.3377430},
    abstract = {One of the key challenges of current state-of-the-art robotic deployments in public spaces, where the robot is supposed to interact with humans, is the generation of behaviors that are engaging for the users. Eliciting engagement during an interaction, and maintaining it after the initial phase of the interaction, is still an issue to be overcome. There is evidence that engagement in learning activities is higher in the presence of a robot, particularly if novel [1], but after the initial engagement state, long and non-interactive behaviors are detrimental to the continued engagement of the users [5, 16]. Overcoming this limitation requires designing robots with enhanced social abilities that go past monolithic behaviours and introducing in-situ learning and adaptation to the specific users and situations. To do so, the robot must have the ability to perceive the state of the humans participating in the interaction and use this feedback for the selection of its own actions over time [27].}
}
Abstract visual programming of social robots for novice users.
Brown, O.; Roberts-Elliott, L.; Del Duchetto, F.; Hanheide, M.; and Baxter, P.
In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 154–156, 2020.

@inproceedings{brown2020abstract,
    title = {Abstract visual programming of social robots for novice users},
    author = {Brown, Onis and Roberts-Elliott, Laurence and Del Duchetto, Francesco and Hanheide, Marc and Baxter, Paul},
    booktitle = {Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction},
    pages = {154--156},
    year = {2020},
    url = {https://dl.acm.org/doi/abs/10.1145/3371382.3378271},
    doi = {10.1145/3371382.3378271},
    abstract = {To facilitate interaction of robots with people in public spaces it would be beneficial for them to use social behaviours: i.e. low-level behaviours that suggest the robot is a social agent. However, the implementation of such social behaviours would be difficult for novice users, i.e. non-roboticists. In this contribution, we present a high-level visual programming system that enables novices to design robot tasks which already incorporate social behavioural cues appropriate for the robot being programmed. A pilot study of this system in a museum involving members of the public designing guided tours demonstrated that the addition of the low-level social cues improves the perception of the robot and the effectiveness of the designed task behaviour. A number of areas of further exploration and development are highlighted.}
}
2019 (6)
Teaching robots social autonomy from in situ human guidance.
Senft, E.; Lemaignan, S.; Baxter, P. E.; Bartlett, M.; and Belpaeme, T.
Science Robotics, 4(35): eaat1186. 2019.

@article{Senft2019,
    author = {Emmanuel Senft and Séverin Lemaignan and Paul E. Baxter and Madeleine Bartlett and Tony Belpaeme},
    title = {Teaching robots social autonomy from in situ human guidance},
    journal = {Science Robotics},
    volume = {4},
    number = {35},
    pages = {eaat1186},
    year = {2019},
    doi = {10.1126/scirobotics.aat1186},
    url = {https://www.science.org/doi/abs/10.1126/scirobotics.aat1186},
    eprint = {https://www.science.org/doi/pdf/10.1126/scirobotics.aat1186},
    abstract = {A robot was programmed to progressively learn appropriate social autonomous behavior from in situ human demonstrations and guidance. Striking the right balance between robot autonomy and human control is a core challenge in social robotics, in both technical and ethical terms. On the one hand, extended robot autonomy offers the potential for increased human productivity and for the off-loading of physical and cognitive tasks. On the other hand, making the most of human technical and social expertise, as well as maintaining accountability, is highly desirable. This is particularly relevant in domains such as medical therapy and education, where social robots hold substantial promise, but where there is a high cost to poorly performing autonomous systems, compounded by ethical concerns. We present a field study in which we evaluate SPARC (supervised progressively autonomous robot competencies), an innovative approach addressing this challenge whereby a robot progressively learns appropriate autonomous behavior from in situ human demonstrations and guidance. Using online machine learning techniques, we demonstrate that the robot could effectively acquire legible and congruent social policies in a high-dimensional child-tutoring situation needing only a limited number of demonstrations while preserving human supervision whenever desirable. By exploiting human expertise, our technique enables rapid learning of autonomous social and domain-specific policies in complex and nondeterministic environments. Last, we underline the generic properties of SPARC and discuss how this paradigm is relevant to a broad range of difficult human-robot interaction scenarios.}
}
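SPARC's core loop, as the abstract describes it, has the robot propose an action that the human supervisor can passively accept or actively correct before execution, with every executed decision feeding back into the learner. The following is a deliberately minimal sketch of that suggest/accept-or-correct cycle, not the paper's implementation: a toy 1-nearest-neighbour learner and invented state vectors stand in for the real online learning system.

    import numpy as np

    class SuggestCorrectPolicy:
        """Toy 1-nearest-neighbour policy learned from executed actions."""

        def __init__(self):
            self.states, self.actions = [], []

        def propose(self, state):
            if not self.states:
                return None  # no experience yet: fully defer to the supervisor
            dists = [np.linalg.norm(state - s) for s in self.states]
            return self.actions[int(np.argmin(dists))]

        def record(self, state, action):
            self.states.append(state)
            self.actions.append(action)

    def interaction_step(policy, state, supervisor):
        """Robot suggests; the supervisor accepts or corrects; the executed action teaches the policy."""
        suggestion = policy.propose(state)
        action = supervisor(state, suggestion)
        policy.record(state, action)
        return action

    # Scripted stand-in supervisor: accept any suggestion, otherwise demonstrate "wave".
    supervisor = lambda state, suggestion: suggestion if suggestion is not None else "wave"
    policy = SuggestCorrectPolicy()
    for state in np.random.rand(5, 3):  # invented 3-dimensional interaction states
        print(interaction_step(policy, state, supervisor))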
Robot-Enhanced Therapy: Development and Validation of a Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy.
Cao, H. L.; Esteban, P.; Bartlett, M.; Baxter, P.; Belpaeme, T.; Billing, E.; Cai, H.; Coeckelbergh, M.; Costescu, C.; David, D.; De Beir, A.; Hernandez Garcia, D.; Kennedy, J.; Liu, H.; Matu, S.; Mazel, A.; Pandey, A. K.; Richardson, K.; Senft, E.; Thill, S.; Van de Perre, G.; Vanderborght, B.; Vernon, D.; Wakanuma, K.; Yu, H.; Zhou, X.; and Ziemke, T.
IEEE Robotics and Automation Magazine, 26(2): 49–58. 2019.

@article{Cao2019,
    author = {Cao, Hoang Long and Esteban, Pablo and Bartlett, Madeleine and Baxter, Paul and Belpaeme, Tony and Billing, Erik and Cai, Haibin and Coeckelbergh, Mark and Costescu, Cristina and David, Daniel and {De Beir}, Albert and {Hernandez Garcia}, Daniel and Kennedy, James and Liu, Honghai and Matu, Silviu and Mazel, Alexandre and Pandey, Amit Kumar and Richardson, Kathleen and Senft, Emmanuel and Thill, Serge and {Van de Perre}, Greet and Vanderborght, Bram and Vernon, David and Wakanuma, Kutoma and Yu, Hui and Zhou, Xiaolong and Ziemke, Tom},
    doi = {10.1109/MRA.2019.2904121},
    issn = {1558223X},
    journal = {IEEE Robotics and Automation Magazine},
    number = {2},
    pages = {49--58},
    title = {{Robot-Enhanced Therapy: Development and Validation of a Supervised Autonomous Robotic System for Autism Spectrum Disorders Therapy}},
    volume = {26},
    year = {2019}
}
Lindsey the Tour Guide Robot - Usage Patterns in a Museum Long-Term Deployment.
Del Duchetto, F.; Baxter, P.; and Hanheide, M.
In 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2019), 2019.

@inproceedings{Duchetto2019,
    abstract = {{\textcopyright} 2019 IEEE. The long-term deployment of autonomous robots co-located with humans in real-world scenarios remains a challenging problem. In this paper, we present the 'Lindsey' tour guide robot system in which we attempt to increase the social capability of current state-of-the-art robotic technologies. The robot is currently deployed at a museum displaying local archaeology where it is providing guided tours and information to visitors. The robot is operating autonomously daily, navigating around the museum and engaging with the public, with on-site assistance from roboticists only in cases of hardware/software malfunctions. In a deployment lasting seven months up to now, it has travelled nearly 300 km and has delivered more than 2300 guided tours. First, we describe the robot framework and the management interfaces implemented. We then analyse the data collected up to now with the goal of understanding and modelling the visitors' behavior in terms of their engagement with the technology. These data suggest that while short-term engagement is readily gained, continued engagement with the robot tour guide is likely to require more refined and robust socially interactive behaviours. The deployed system presents us with an opportunity to empirically address these issues.},
    author = {{Del Duchetto}, F. and Baxter, P. and Hanheide, M.},
    booktitle = {2019 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019},
    doi = {10.1109/RO-MAN46459.2019.8956329},
    isbn = {9781728126227},
    keywords = {human-robot interactions, long-term autonomy, service robots},
    title = {{Lindsey the Tour Guide Robot - Usage Patterns in a Museum Long-Term Deployment}},
    url = {https://ieeexplore.ieee.org/document/8956329},
    year = {2019}
}
A Dataset for Action Recognition in the Wild.
Gabriel, A.; Cosar, S.; Bellotto, N.; and Baxter, P.
In Annual Conference Towards Autonomous Robotic Systems, pages 362–374, London, U.K., 2019. Springer.

@inproceedings{Gabriel2019b,
    address = {London, U.K.},
    author = {Gabriel, Alexander and Cosar, Serhan and Bellotto, Nicola and Baxter, Paul},
    booktitle = {Annual Conference Towards Autonomous Robotic Systems},
    doi = {10.1007/978-3-030-23807-0_30},
    keywords = {action recognition, agricultural robotics, dataset, human-robot interaction, intention recognition},
    pages = {362--374},
    publisher = {Springer},
    title = {{A Dataset for Action Recognition in the Wild}},
    url = {https://link.springer.com/chapter/10.1007/978-3-030-23807-0{\_}30},
    year = {2019}
}
Towards a Dataset of Activities for Action Recognition in Open Fields.
Gabriel, A.; Bellotto, N.; and Baxter, P.
In 2nd UK-RAS Robotics and Autonomous Systems Conference, pages 64–67, Loughborough, U.K., 2019.

@inproceedings{Gabriel2019a,
    address = {Loughborough, U.K.},
    author = {Gabriel, Alexander and Bellotto, Nicola and Baxter, Paul},
    booktitle = {2nd UK-RAS Robotics and Autonomous Systems Conference},
    doi = {10.31256/ukras19.17},
    pages = {64--67},
    title = {{Towards a Dataset of Activities for Action Recognition in Open Fields}},
    url = {http://eprints.lincoln.ac.uk/36201/},
    year = {2019}
}
Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour.
Wang, H.; Fu, Q.; Wang, H.; Peng, J.; Baxter, P.; Hu, C.; and Yue, S.
In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1–7, July 2019.

@inproceedings{Wang2019,
    author = {Wang, Huatian and Fu, Qinbing and Wang, Hongxin and Peng, Jigen and Baxter, Paul and Hu, Cheng and Yue, Shigang},
    booktitle = {2019 International Joint Conference on Neural Networks (IJCNN)},
    title = {Angular Velocity Estimation of Image Motion Mimicking the Honeybee Tunnel Centring Behaviour},
    year = {2019},
    pages = {1--7},
    abstract = {Insects use visual information to estimate angular velocity of retinal image motion, which determines a variety of flight behaviours including speed regulation, tunnel centring and visual navigation. For angular velocity estimation, honeybees show large spatial-independence against visual stimuli, whereas the previous models have not fulfilled such an ability. To address this issue, we propose a bio-plausible model for estimating the image motion velocity based on behavioural experiments of the honeybee flying through patterned tunnels. The proposed model contains mainly three parts: the texture estimation layer for spatial information extraction, the delay-and-correlate layer for temporal information extraction, and the decoding layer for angular velocity estimation. This model produces responses that are largely independent of the spatial frequency in grating experiments. The model has also been implemented in a virtual bee for tunnel centring simulations. The results coincide with both electro-physiological neuron spike and behavioural path recordings, which indicates our proposed method provides a better explanation of the honeybee's image motion detection mechanism guiding the tunnel centring behaviour.},
    doi = {10.1109/IJCNN.2019.8852321},
    issn = {2161-4407},
    month = {July},
    url = {https://ieeexplore.ieee.org/abstract/document/8852321}
}
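The delay-and-correlate layer named in the abstract is the classic Hassenstein-Reichardt elementary motion detector. The sketch below implements just that stage on a synthetic drifting grating (all parameters invented for the example): it recovers the direction of motion as the sign of the response, but the magnitude is velocity- and pattern-tuned rather than a direct angular velocity readout, which is the limitation the paper's texture-estimation and decoding layers are there to address.

    import numpy as np

    def moving_grating(n_pixels=200, n_frames=400, spatial_freq=0.05, velocity=2.0):
        """Sinusoidal grating drifting by `velocity` pixels per frame."""
        x = np.arange(n_pixels)
        t = np.arange(n_frames)[:, None]
        return np.sin(2 * np.pi * spatial_freq * (x - velocity * t))

    def reichardt_response(stimulus, dx=2, dt=1):
        """Opponent delay-and-correlate output, averaged over space and time."""
        a = stimulus[:-dt, :-dx] * stimulus[dt:, dx:]   # delayed left arm x current right arm
        b = stimulus[:-dt, dx:] * stimulus[dt:, :-dx]   # delayed right arm x current left arm
        return float(np.mean(a - b))

    # Sign gives direction; magnitude is tuned to, not proportional to, velocity.
    for v in (-2.0, 0.5, 1.0, 2.0, 4.0):
        r = reichardt_response(moving_grating(velocity=v))
        print(f"velocity {v:+.1f} px/frame -> response {r:+.4f}")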
2018 (7)
Engaging Learners in Dialogue Interactivity Development for Mobile Robots.
Baxter, P.; Del Duchetto, F.; and Hanheide, M.
In International Conference on Educational Robotics (EduRobotics), Rome, Italy, 2018.

@inproceedings{Baxter2018c,
    abstract = {The use of robots in educational and STEM engagement activities is widespread. In this paper we describe a system developed for engaging learners with the design of dialogue-based interactivity for mobile robots. With an emphasis on a web-based solution that is grounded in both a real robot system and a real application domain - a museum guide robot - our intent is to enhance the benefits to both driving research through potential user-group engagement, and enhancing motivation by providing a real application context for the learners involved. The proposed system is designed to be highly scalable to both many simultaneous users and to users of different age groups, and specifically enables direct deployment of implemented systems onto both real and simulated robots. Our observations from preliminary events, involving both children and adults, support the view that the system is both usable and successful in supporting engagement with the dialogue interactivity problem presented to the participants, with indications that this engagement can persist over an extended period of time.},
    address = {Rome, Italy},
    author = {Baxter, Paul and {Del Duchetto}, Francesco and Hanheide, Marc},
    booktitle = {International Conference on Educational Robotics (EduRobotics)},
    keywords = {DialogFlow, Dialogue Interactivity, Mobile Robots, Museum Guide, Public Engagement},
    title = {{Engaging Learners in Dialogue Interactivity Development for Mobile Robots}},
    url = {http://edurobotics2018.edumotiva.eu/},
    year = {2018}
}
From Evaluating to Teaching: Rewards and Challenges of Human Control for Learning Robots.
Senft, E.; Lemaignan, S.; Baxter, P.; and Belpaeme, T.
In Human/Robot in the loop Machine Learning Workshop at IROS (HRML), Madrid, Spain, 2018.

@inproceedings{Senft2018b,
    abstract = {Keeping a human in a robot learning cycle can provide many advantages to improve the learning process. However, most of these improvements are only available when the human teacher is in complete control of the robot's behaviour, and not just providing feedback. This human control can make the learning process safer, allowing the robot to learn in high-stakes interaction scenarios, especially social ones. Furthermore, it allows faster learning as the human guides the robot to the relevant parts of the state space and can provide additional information to the learner. This information can also enable the learning algorithms to learn for wider world representations, thus increasing the generalisability of a deployed system. Additionally, learning from end users improves the precision of the final policy as it can be specifically tailored to many situations. Finally, this progressive teaching might create trust between the learner and the teacher, easing the deployment of the autonomous robot. However, with such control comes a range of challenges. Firstly, the rich communication between the robot and the teacher needs to be handled by an interface, which may require complex features. Secondly, the teacher needs to be embedded within the robot action selection cycle, imposing time constraints, which increases the cognitive load on the teacher. Finally, given a cycle of interaction between the robot and the teacher, any mistakes made by the teacher can be propagated to the robot's policy. Nevertheless, we are able to show that empowering the teacher with ways to control a robot's behaviour has the potential to drastically improve both the learning process (allowing robots to learn in a wider range of environments) and the experience of the teacher.},
    address = {Madrid, Spain},
    author = {Senft, Emmanuel and Lemaignan, S{\'{e}}verin and Baxter, Paul and Belpaeme, Tony},
    booktitle = {Human/Robot in the loop Machine Learning Workshop at IROS (HRML)},
    title = {{From Evaluating to Teaching: Rewards and Challenges of Human Control for Learning Robots}},
    url = {https://wp.doc.ic.ac.uk/bbl/iros-2018-workshop/},
    year = {2018}
}
Robots Providing Cognitive Assistance in Shared Workspaces.
Baxter, P.; Lightbody, P.; and Hanheide, M.
In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI '18), pages 57–58, New York, NY, USA, 2018. ACM.

@inproceedings{Baxter2018b,
    author = {Baxter, Paul and Lightbody, Peter and Hanheide, Marc},
    abstract = {Human-Robot Collaboration is an area of particular current interest, with the attempt to make robots more generally useful in contexts where they work side-by-side with humans. Currently, efforts typically focus on the sensory and motor aspects of the task on the part of the robot to enable them to function safely and effectively given an assigned task. In the present contribution, we rather focus on the cognitive faculties of the human worker by attempting to incorporate known (from psychology) properties of human cognition. In a proof-of-concept study, we demonstrate how applying characteristics of human categorical perception to the type of robot assistance impacts on task performance and experience of the participants. This lays the foundation for further developments in cognitive assistance and collaboration in side-by-side working for humans and robots.},
    title = {Robots Providing Cognitive Assistance in Shared Workspaces},
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction},
    series = {HRI '18},
    year = {2018},
    isbn = {978-1-4503-5615-2},
    location = {Chicago, IL, USA},
    pages = {57--58},
    numpages = {2},
    url = {http://doi.acm.org/10.1145/3173386.3177070},
    doi = {10.1145/3173386.3177070},
    acmid = {3177070},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {cognitive assistance, cognitive collaboration, shared workspace}
}
Safe Human-Robot Interaction in Agriculture.
Baxter, P.; Cielniak, G.; Hanheide, M.; and From, P.
In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI '18), pages 59–60, New York, NY, USA, 2018. ACM.

@inproceedings{Baxter2018a,
    author = {Baxter, Paul and Cielniak, Grzegorz and Hanheide, Marc and From, P{\aa}l},
    abstract = {Robots in agricultural contexts are finding increased numbers of applications with respect to (partial) automation for increased productivity. However, this presents complex technical problems to be overcome, which are magnified when these robots are intended to work side-by-side with human workers. In this contribution we present an exploratory pilot study to characterise interactions between a robot performing an in-field transportation task and human fruit pickers. Partly an effort to inform the development of a fully autonomous system, the emphasis is on involving the key stakeholders (i.e. the pickers themselves) in the process so as to maximise the potential impact of such an application.},
    title = {Safe Human-Robot Interaction in Agriculture},
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction},
    series = {HRI '18},
    year = {2018},
    isbn = {978-1-4503-5615-2},
    location = {Chicago, IL, USA},
    pages = {59--60},
    numpages = {2},
    url = {http://doi.acm.org/10.1145/3173386.3177072},
    doi = {10.1145/3173386.3177072},
    acmid = {3177072},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {agricultural robotics, human-aware robot navigation, safe human-robot interaction, side-by-side working}
}
Studying Table-Top Manipulation Tasks: A Robust Framework for Object Tracking in Collaboration.
Lightbody, P.; Baxter, P.; and Hanheide, M.
In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI '18), pages 177–178, New York, NY, USA, 2018. ACM.

@inproceedings{Lightbody2018,
    author = {Lightbody, Peter and Baxter, Paul and Hanheide, Marc},
    abstract = {Table-top object manipulation is a well-established test bed on which to study both basic foundations of general human-robot interaction and more specific collaborative tasks. A prerequisite, both for studies and for actual collaborative or assistive tasks, is the robust perception of any objects involved. This paper presents a real-time capable and ROS-integrated approach, bringing together state-of-the-art detection and tracking algorithms, integrating perceptual cues from multiple cameras and solving detection, sensor fusion and tracking in one framework. The highly scalable framework was tested in an HRI use-case scenario with 25 objects being reliably tracked under significant temporary occlusions. The use-case demonstrates the suitability of the approach when working with multiple objects in small table-top environments and highlights the versatility and range of analysis available with this framework.},
    title = {Studying Table-Top Manipulation Tasks: A Robust Framework for Object Tracking in Collaboration},
    booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction},
    series = {HRI '18},
    year = {2018},
    isbn = {978-1-4503-5615-2},
    location = {Chicago, IL, USA},
    pages = {177--178},
    numpages = {2},
    url = {http://doi.acm.org/10.1145/3173386.3177045},
    doi = {10.1145/3173386.3177045},
    acmid = {3177045},
    publisher = {ACM},
    address = {New York, NY, USA},
    keywords = {fiducial markers, human robot collaboration, visual tracking}
}
Robots in the classroom: Learning to be a Good Tutor.
Senft, E.; Lemaignan, S.; Bartlett, M.; Baxter, P.; and Belpaeme, T.
In Robots for Learning Workshop at HRI 2018, Chicago, USA, 2018.

@inproceedings{Senft2018a,
    abstract = {To broaden the adoption and be more inclusive, robotic tutors need to tailor their behaviours to their audience. Traditional approaches, such as Bayesian Knowledge Tracing, try to adapt the content of lessons or the difficulty of tasks to the current estimated knowledge of the student. However, these variations only happen in a limited domain, predefined in advance, and are not able to tackle unexpected variation in a student's behaviours. We argue that robot adaptation needs to go beyond variations in preprogrammed behaviours and that robots should in effect learn online how to become better tutors. A study is currently being carried out to evaluate how human supervision can teach a robot to support child learning during an educational game using one implementation of this approach.},
    address = {Chicago, USA},
    author = {Senft, Emmanuel and Lemaignan, Severin and Bartlett, Madeleine and Baxter, Paul and Belpaeme, Tony},
    booktitle = {Robots for Learning Workshop at HRI 2018},
    title = {{Robots in the classroom: Learning to be a Good Tutor}},
    url = {https://r4l.epfl.ch/files/content/sites/r4l/files/HRI2018/proceedings{\_}2018/paper4.pdf},
    year = {2018}
}
\n \n\n \n \n \n \n \n \n Social Robots in Therapy: Focusing on Autonomy and Ethical Challenges.\n \n \n \n \n\n\n \n G. Esteban, P.; Hernández García, D.; Lee, H. R.; Chevalier, P.; Baxter, P.; and Bethel, C.\n\n\n \n\n\n\n In Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, of HRI '18, pages 391–392, New York, NY, USA, 2018. ACM\n \n\n\n\n
\n\n\n\n \n \n \"SocialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{Esteban2018,\n author = {G. Esteban, Pablo and Hern\'{a}ndez Garc\'{\i}a, Daniel and Lee, Hee Rin and Chevalier, Pauline and Baxter, Paul and Bethel, Cindy},\n abstract = {Robot-Assisted Therapy (RAT) has successfully been used in Human-Robot Interaction (HRI) research by including social robots in health-care interventions by virtue of their ability to engage human users in both social and emotional dimensions. Research projects on this topic exist all over the globe in the USA, Europe, and Asia. All of these projects have the overall ambitious goal of increasing the well-being of a vulnerable population. Typically, RAT is performed with the Wizard-of-Oz (WoZ) technique, where the robot is controlled, unbeknownst to the patient, by a human operator. However, WoZ has been demonstrated to not be a sustainable technique in the long-term. Providing the robots with autonomy (while remaining under the supervision of the therapist) has the potential to lighten the therapist's burden, not only in the therapeutic session itself but also in longer-term diagnostic tasks. Therefore, there is a need for exploring several degrees of autonomy in social robots used in therapy. Increasing the autonomy of robots might also bring about a new set of challenges. In particular, there will be a need to answer new ethical questions regarding the use of robots with a vulnerable population, as well as a need to ensure ethically-compliant robot behaviors. Therefore, in this workshop we want to gather findings and explore which degree of autonomy might help to improve health-care interventions and how we can overcome the ethical challenges inherent to it.},\n title = {Social Robots in Therapy: Focusing on Autonomy and Ethical Challenges},\n booktitle = {Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction},\n series = {HRI '18},\n year = {2018},\n isbn = {978-1-4503-5615-2},\n location = {Chicago, IL, USA},\n pages = {391--392},\n numpages = {2},\n url = {http://doi.acm.org/10.1145/3173386.3173562},\n doi = {10.1145/3173386.3173562},\n acmid = {3173562},\n publisher = {ACM},\n address = {New York, NY, USA},\n keywords = {autonomous robots, ethics, human-robot interaction, robots in therapy},\n}\n\n
\n
\n\n\n
\n Robot-Assisted Therapy (RAT) has successfully been used in Human-Robot Interaction (HRI) research by including social robots in health-care interventions by virtue of their ability to engage human users in both social and emotional dimensions. Research projects on this topic exist all over the globe in the USA, Europe, and Asia. All of these projects have the overall ambitious goal of increasing the well-being of a vulnerable population. Typically, RAT is performed with the Wizard-of-Oz (WoZ) technique, where the robot is controlled, unbeknownst to the patient, by a human operator. However, WoZ has been demonstrated to not be a sustainable technique in the long-term. Providing the robots with autonomy (while remaining under the supervision of the therapist) has the potential to lighten the therapist's burden, not only in the therapeutic session itself but also in longer-term diagnostic tasks. Therefore, there is a need for exploring several degrees of autonomy in social robots used in therapy. Increasing the autonomy of robots might also bring about a new set of challenges. In particular, there will be a need to answer new ethical questions regarding the use of robots with a vulnerable population, as well as a need to ensure ethically-compliant robot behaviors. Therefore, in this workshop we want to gather findings and explore which degree of autonomy might help to improve health-care interventions and how we can overcome the ethical challenges inherent to it.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2017\n \n \n (8)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Robot Education Peers in a Situated Primary School Study: Personalisation Promotes Child Learning.\n \n \n \n \n\n\n \n Baxter, P.; Ashurst, E.; Read, R.; Kennedy, J.; and Belpaeme, T.\n\n\n \n\n\n\n PLOS ONE. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"RobotPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{Baxter2017,\n    abstract = {The benefit of social robots to support child learning in an educational context over an extended period of time is evaluated. Specifically, the effect of personalisation and adaptation of robot social behaviour is assessed. Two autonomous robots were embedded within two matched classrooms of a primary school for a continuous two-week period without experimenter supervision to act as learning companions for the children for familiar and novel subjects. Results suggest that while children in both personalised and non-personalised conditions learned, there was increased child learning of a novel subject exhibited when interacting with a robot that personalised its behaviours, with indications that this benefit extended to other class-based performance. Additional evidence was obtained suggesting that there is increased acceptance of the personalised robot peer over a non-personalised version. These results provide the first evidence in support of peer-robot behavioural personalisation having a positive influence on learning when embedded in a learning environment for an extended period of time.},\n    author = {Baxter, Paul and Ashurst, Emily and Read, Robin and Kennedy, James and Belpaeme, Tony},\n    doi = {10.1371/journal.pone.0178126},\n    journal = {PLOS ONE},\n    title = {{Robot Education Peers in a Situated Primary School Study: Personalisation Promotes Child Learning}},\n    url = {http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0178126},\n    year = {2017}\n}\n\n
\n
\n\n\n
\n The benefit of social robots to support child learning in an educational context over an extended period of time is evaluated. Specifically, the effect of personalisation and adaptation of robot social behaviour is assessed. Two autonomous robots were embedded within two matched classrooms of a primary school for a continuous two-week period without experimenter supervision to act as learning companions for the children for familiar and novel subjects. Results suggest that while children in both personalised and non-personalised conditions learned, there was increased child learning of a novel subject exhibited when interacting with a robot that personalised its behaviours, with indications that this benefit extended to other class-based performance. Additional evidence was obtained suggesting that there is increased acceptance of the personalised robot peer over a non-personalised version. These results provide the first evidence in support of peer-robot behavioural personalisation having a positive influence on learning when embedded in a learning environment for an extended period of time.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Solve Memory to Solve Cognition.\n \n \n \n \n\n\n \n Baxter, P.\n\n\n \n\n\n\n In Chrisley, R.; Müller, V. C.; Sandamirskaya, Y.; and Vincze, M., editor(s), Proceedings of the EUCognition Meeting (European Association for Cognitive Systems) \"Cognitive Robot Architectures\", pages 58–59, Vienna, Austria, 2017. CEUR Workshop Proceedings\n \n\n\n\n
\n\n\n\n \n \n \"SolvePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2017a,\n  abstract = {The foundations of cognition and cognitive behaviour are consistently proposed to be built upon the capability to predict (at various levels of abstraction). For autonomous cognitive agents, this implicitly assumes a foundational role for memory, as a mechanism by which prior experience can be brought to bear in the service of present and future behaviour. In this contribution, this idea is extended to propose that an active process of memory provides the substrate for cognitive processing, particularly when considering it as fundamentally associative and from a developmental perspective. It is in this context that the claim is made that in order to solve the question of cognition, the role and function of memory must be fully resolved.},\n  address = {Vienna, Austria},\n  author = {Baxter, Paul},\n  booktitle = {Proceedings of the EUCognition Meeting (European Association for Cognitive Systems) "Cognitive Robot Architectures"},\n  editor = {Chrisley, Ron and M{\\"{u}}ller, Vincent C. and Sandamirskaya, Yulia and Vincze, Markus},\n  pages = {58--59},\n  publisher = {CEUR Workshop Proceedings},\n  title = {{Solve Memory to Solve Cognition}},\n  url = {http://ceur-ws.org/Vol-1855/},\n  year = {2017}\n}\n\n
\n
\n\n\n
\n The foundations of cognition and cognitive behaviour are consistently proposed to be built upon the capability to predict (at various levels of abstraction). For autonomous cognitive agents, this implicitly assumes a foundational role for memory, as a mechanism by which prior experience can be brought to bear in the service of present and future behaviour. In this contribution, this idea is extended to propose that an active process of memory provides the substrate for cognitive processing, particularly when considering it as fundamentally associative and from a developmental perspective. It is in this context that the claim is made that in order to solve the question of cognition, the role and function of memory must be fully resolved.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Toward Supervised Reinforcement Learning with Partial States for Social HRI .\n \n \n \n \n\n\n \n Senft, E.; Lemaignan, S.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In 4th AAAI FSS on Artificial Intelligence for Social Human-Robot Interaction (AI-HRI), Arlington, VA, USA, 2017. \n \n\n\n\n
\n\n\n\n \n \n \"TowardPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Senft2017b,\n    abstract={Social interaction is a complex task for which machine learning holds particular promise. However, as no sufficiently accurate simulator of human interactions exists today, the learning of social interaction strategies has to happen online in the real world. Actions executed by the robot impact on humans, and as such have to be carefully selected, making it impossible to rely on random exploration. Additionally, no clear reward function exists for social interactions. This implies that traditional approaches used for Reinforcement Learning cannot be directly applied for learning how to interact with the social world. As such we argue that robots will profit from human expertise and guidance to learn social interactions. However, as the quantity of input a human can provide is limited, new methods have to be designed to use human input more efficiently. In this paper we describe a setup in which we combine a framework called Supervised Progressively Autonomous Robot Competencies (SPARC), which allows safer online learning with Reinforcement Learning, with the use of partial states rather than full states to accelerate generalisation and obtain a usable action policy more quickly.},\n    address = {Arlington, VA, USA},\n    author={Senft, Emmanuel and Lemaignan, Severin and Baxter, Paul and Belpaeme, Tony},\n    booktitle={4th AAAI FSS on Artificial Intelligence for Social Human-Robot Interaction (AI-HRI)},\n    title={Toward Supervised Reinforcement Learning with Partial States for Social HRI},\n    url={https://ai-hri.github.io/2017/},\n    year={2017},\n}\n\n
\n
\n\n\n
\n Social interaction is a complex task for which machine learning holds particular promise. However, as no sufficiently accurate simulator of human interactions exists today, the learning of social interaction strategies has to happen online in the real world. Actions executed by the robot impact on humans, and as such have to be carefully selected, making it impossible to rely on random exploration. Additionally, no clear reward function exists for social interactions. This implies that traditional approaches used for Reinforcement Learning cannot be directly applied for learning how to interact with the social world. As such we argue that robots will profit from human expertise and guidance to learn social interactions. However, as the quantity of input a human can provide is limited, new methods have to be designed to use human input more efficiently. In this paper we describe a setup in which we combine a framework called Supervised Progressively Autonomous Robot Competencies (SPARC), which allows safer online learning with Reinforcement Learning, with the use of partial states rather than full states to accelerate generalisation and obtain a usable action policy more quickly.\n
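The "partial state" idea lends itself to a compact sketch. In the hypothetical Python example below (feature and action names are invented for illustration, not taken from the paper), action values are indexed on a chosen subset of state features, so a single supervised experience immediately generalises to every full state that shares those features:

ACTIONS = ["encourage", "hint", "wait"]
ALPHA = 0.3
q = {}  # (partial_state, action) -> estimated value

def partial(state):
    # Keep only the features assumed relevant; drop everything else.
    return (state["engaged"], state["progress"])

def best_action(state):
    s = partial(state)
    return max(ACTIONS, key=lambda a: q.get((s, a), 0.0))

def learn(state, action, reward):
    s = partial(state)
    old = q.get((s, action), 0.0)
    q[(s, action)] = old + ALPHA * (reward - old)

# One rewarded experience with a disengaged, low-progress child...
learn({"engaged": False, "progress": "low", "hair": "brown"}, "encourage", 1.0)
# ...already transfers to a different child with the same partial state:
print(best_action({"engaged": False, "progress": "low", "hair": "blond"}))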
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Impact of Robot Tutor Nonverbal Social Behavior on Child Learning.\n \n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n Frontiers in ICT, 4. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{Kennedy2017a,\n  author={Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  title={The Impact of Robot Tutor Nonverbal Social Behavior on Child Learning},\n  journal={Frontiers in ICT},\n  volume={4},\n  year={2017},\n  url={http://journal.frontiersin.org/article/10.3389/fict.2017.00006},\n  doi={10.3389/fict.2017.00006},\n  issn={2297-198X},\n  abstract={Several studies have indicated that interacting with social robots in educational contexts may lead to greater learning than interactions with computers or virtual agents. As such, an increasing amount of social human-robot interaction research is being conducted in the learning domain, particularly with children. However, it is unclear precisely what social behaviour a robot should employ in such interactions. Inspiration can be taken from human-human studies; this often leads to an assumption that the more social behaviour an agent utilises, the better the learning outcome will be. We apply a nonverbal behaviour metric to a series of studies in which children are taught how to identify prime numbers by a robot with various behavioural manipulations. We find a trend which generally agrees with the pedagogy literature, but also that overt nonverbal behaviour does not account for all learning differences. We discuss the impact of novelty, child expectations, and responses to social cues to further the understanding of the relationship between robot social behaviour and learning. We suggest that the combination of nonverbal behaviour and social cue congruency is necessary to facilitate learning.}\n}\n\n
\n
\n\n\n
\n Several studies have indicated that interacting with social robots in educational contexts may lead to greater learning than interactions with computers or virtual agents. As such, an increasing amount of social human-robot interaction research is being conducted in the learning domain, particularly with children. However, it is unclear precisely what social behaviour a robot should employ in such interactions. Inspiration can be taken from human-human studies; this often leads to an assumption that the more social behaviour an agent utilises, the better the learning outcome will be. We apply a nonverbal behaviour metric to a series of studies in which children are taught how to identify prime numbers by a robot with various behavioural manipulations. We find a trend which generally agrees with the pedagogy literature, but also that overt nonverbal behaviour does not account for all learning differences. We discuss the impact of novelty, child expectations, and responses to social cues to further the understanding of the relationship between robot social behaviour and learning. We suggest that the combination of nonverbal behaviour and social cue congruency is necessary to facilitate learning.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Supervised Autonomy for Online Learning in Human-Robot Interaction.\n \n \n \n \n\n\n \n Senft, E.; Baxter, P.; Kennedy, J.; Lemaignan, S.; and Belpaeme, T.\n\n\n \n\n\n\n Pattern Recognition Letters. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"SupervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{Senft2017,\n    abstract = {When a robot is learning, it needs to explore its environment and discover how that environment responds to its actions. When the environment is large and there are a large number of possible actions the robot can take, this exploration phase can take prohibitively long. However, exploration can often be optimised by letting a human expert guide the robot during its learning. Interactive machine learning, in which a human user interactively guides the robot as it learns, has been shown to be an effective way to teach a robot. It requires an intuitive control mechanism to allow the human expert to provide feedback on the robot's progress. This paper presents a novel method which combines Reinforcement Learning and Supervised Progressively Autonomous Robot Competencies (SPARC). By allowing the user to fully control the robot and by treating rewards as implicit, SPARC aims to learn an action policy while maintaining human supervisory oversight of the robot's behaviour. This method is evaluated and compared to Interactive Reinforcement Learning in a robot teaching task. Qualitative and quantitative results indicate that SPARC allows for safer and faster learning by the robot, whilst not placing a high workload on the human teacher.},\n    author = {Senft, Emmanuel and Baxter, Paul and Kennedy, James and Lemaignan, S{\'{e}}verin and Belpaeme, Tony},\n    doi = {10.1016/j.patrec.2017.03.015},\n    issn = {01678655},\n    journal = {Pattern Recognition Letters},\n    title = {{Supervised Autonomy for Online Learning in Human-Robot Interaction}},\n    url = {http://linkinghub.elsevier.com/retrieve/pii/S0167865517300892},\n    year = {2017}\n}\n\n
\n
\n\n\n
\n When a robot is learning, it needs to explore its environment and discover how that environment responds to its actions. When the environment is large and there are a large number of possible actions the robot can take, this exploration phase can take prohibitively long. However, exploration can often be optimised by letting a human expert guide the robot during its learning. Interactive machine learning, in which a human user interactively guides the robot as it learns, has been shown to be an effective way to teach a robot. It requires an intuitive control mechanism to allow the human expert to provide feedback on the robot's progress. This paper presents a novel method which combines Reinforcement Learning and Supervised Progressively Autonomous Robot Competencies (SPARC). By allowing the user to fully control the robot and by treating rewards as implicit, SPARC aims to learn an action policy while maintaining human supervisory oversight of the robot's behaviour. This method is evaluated and compared to Interactive Reinforcement Learning in a robot teaching task. Qualitative and quantitative results indicate that SPARC allows for safer and faster learning by the robot, whilst not placing a high workload on the human teacher.\n
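The SPARC interaction pattern this abstract describes can be caricatured in a few lines of Python (action names and the preference update are illustrative only, not the paper's implementation): the robot proposes an action, the supervisor either lets it pass or substitutes another, and whatever is executed counts as implicitly rewarded, so the human retains oversight while the robot converges on the supervisor's choices:

ACTIONS = ["praise", "hint", "move_on"]
prefs = {a: 0.0 for a in ACTIONS}

def propose():
    # Suggest the currently preferred action.
    return max(prefs, key=prefs.get)

def sparc_step(supervisor):
    suggestion = propose()
    correction = supervisor(suggestion)   # None means passive approval
    executed = correction or suggestion
    prefs[executed] += 1.0                # executed action is implicitly rewarded
    return executed

# A supervisor who only ever wants "hint" corrects the first suggestion;
# afterwards the robot proposes "hint" by itself and is passively approved.
for _ in range(3):
    sparc_step(lambda s: None if s == "hint" else "hint")
print(propose())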
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder.\n \n \n \n \n\n\n \n Gomez Esteban, P.; Baxter, P.; Belpaeme, T.; Billing, E.; Cai, H.; Cao, H. L.; Coeckelbergh, M.; Costescu, C.; David, D.; De Beir, A.; Fang, Y.; Ju, Z.; Kennedy, J.; Liu, H.; Mazel, A.; Pandey, A.; Richardson, K.; Senft, E.; Thill, S.; Van De Perre, G.; Vanderborght, B.; Vernon, D.; Hui, Y.; and Ziemke, T.\n\n\n \n\n\n\n Paladyn Journal of Behavioral Robotics, 8(1): 18–38. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"HowPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{GomezEsteban2017,\n    abstract = {Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.},\n    author = {{Gomez Esteban}, Pablo and Baxter, Paul and Belpaeme, Tony and Billing, Erik and Cai, Haibin and Cao, Hoang-Long and Coeckelbergh, Mark and Costescu, Cristina and David, Daniel and {De Beir}, Albert and Fang, Yinfeng and Ju, Zhaojie and Kennedy, James and Liu, Honghai and Mazel, Alexandre and Pandey, Amit and Richardson, Kathleen and Senft, Emmanuel and Thill, Serge and {Van De Perre}, Greet and Vanderborght, Bram and Vernon, David and Hui, Yu and Ziemke, Tom},\n    journal = {Paladyn Journal of Behavioral Robotics},\n    title = {{How to Build a Supervised Autonomous System for Robot-Enhanced Therapy for Children with Autism Spectrum Disorder}},\n    volume = {8},\n    number = {1},\n    pages = {18--38},\n    url = {https://www.degruyter.com/view/j/pjbr.2017.8.issue-1/pjbr-2017-0002/pjbr-2017-0002.xml?format=INT},\n    year = {2017}\n}\n\n
\n
\n\n\n
\n Robot-Assisted Therapy (RAT) has successfully been used to improve social skills in children with autism spectrum disorders (ASD) through remote control of the robot in so-called Wizard of Oz (WoZ) paradigms. However, there is a need to increase the autonomy of the robot both to lighten the burden on human therapists (who have to remain in control and, importantly, supervise the robot) and to provide a consistent therapeutic experience. This paper seeks to provide insight into increasing the autonomy level of social robots in therapy to move beyond WoZ. With the final aim of improved human-human social interaction for the children, this multidisciplinary research seeks to facilitate the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy. We introduce the clinical framework in which the developments are tested, alongside initial data obtained from patients in a first phase of the project using a WoZ set-up mimicking the targeted supervised-autonomy behaviour. We further describe the implemented system architecture capable of providing the robot with supervised autonomy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Leveraging Human Inputs in Interactive Machine Learning for Human Robot Interaction.\n \n \n \n \n\n\n \n Senft, E.; Lemaignan, S.; Baxter, P. E.; and Belpaeme, T.\n\n\n \n\n\n\n In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17, pages 281–282, Vienna, Austria, 2017. \n \n\n\n\n
\n\n\n\n \n \n \"LeveragingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Senft2017a,\n  abstract = {A key challenge of HRI is allowing robots to be adaptable, especially as robots are expected to penetrate society at large and to interact in unexpected environments with non-technical users. One way of providing this adaptability is to use Interactive Machine Learning, i.e. having a human supervisor included in the learning process who can steer the action selection and the learning in the desired direction. We ran a study exploring how people use numeric rewards to evaluate a robot's behaviour and guide its learning. From the results we derive a number of challenges when designing learning robots: what kind of input should the human provide? How should the robot communicate its state or its intention? And how can the teaching process be made easier for human supervisors?},\n  address = {Vienna, Austria},\n  author = {Senft, Emmanuel and Lemaignan, S{\'{e}}verin and Baxter, Paul E. and Belpaeme, Tony},\n  booktitle = {Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17},\n  doi = {10.1145/3029798.3038385},\n  isbn = {9781450348850},\n  pages = {281--282},\n  title = {{Leveraging Human Inputs in Interactive Machine Learning for Human Robot Interaction}},\n  url = {http://dl.acm.org/citation.cfm?doid=3029798.3038385},\n  year = {2017}\n}\n\n
\n
\n\n\n
\n A key challenge of HRI is allowing robots to be adaptable, especially as robots are expected to penetrate society at large and to interact in unexpected environments with non-technical users. One way of providing this adaptability is to use Interactive Machine Learning, i.e. having a human supervisor included in the learning process who can steer the action selection and the learning in the desired direction. We ran a study exploring how people use numeric rewards to evaluate a robot's behaviour and guide its learning. From the results we derive a number of challenges when designing learning robots: what kind of input should the human provide? How should the robot communicate its state or its intention? And how can the teaching process be made easier for human supervisors?\n
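For contrast with correction-based approaches such as SPARC, the numeric-reward loop this study examines can be sketched as follows (a hypothetical illustration; names and the update rule are not taken from the paper): after each executed action the human supplies a scalar reward, which nudges the value estimate of that action:

ALPHA = 0.5
values = {"wave": 0.0, "speak": 0.0, "look_away": 0.0}

def choose():
    return max(values, key=values.get)

def interactive_step(get_reward):
    action = choose()
    r = get_reward(action)                 # e.g. a keypress or slider in [-1, 1]
    values[action] += ALPHA * (r - values[action])
    return action, r

# Simulated supervisor who dislikes "wave" and likes "speak":
feedback = {"wave": -1.0, "speak": 1.0, "look_away": 0.0}
for _ in range(4):
    interactive_step(lambda a: feedback[a])
print(choose())  # converges on "speak" after a few rewards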
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Nonverbal Immediacy as a Characterisation of Social Behaviour for Human-Robot Interaction.\n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n International Journal of Social Robotics, 9(1): 109–128. 2017.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{Kennedy2017,\n    abstract = {An increasing amount of research has started to explore the impact of robot social behaviour on the outcome of a goal for a human interaction partner, such as cognitive learning gains. However, it remains unclear from what principles the social behaviour for such robots should be derived. Human models are often used, but in this paper an alternative approach is proposed. First, the concept of nonverbal immediacy from the communication literature is introduced, with a focus on how it can provide a characterisation of social behaviour, and the subsequent outcomes of such behaviour. A literature review is conducted to explore the impact on learning of the social cues which form the nonverbal immediacy measure. This leads to the production of a series of guidelines for social robot behaviour. The resulting behaviour is evaluated in a more general context, where both children and adults judge the immediacy of humans and robots in a similar manner, and their recall of a short story is tested. Children recall more of the story when the robot is more immediate, which demonstrates an effect predicted by the literature. This study provides validation for the application of nonverbal immediacy to child–robot interaction. It is proposed that nonverbal immediacy measures could be used as a means of characterising robot social behaviour for human–robot interaction.},\n    author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n    doi = {10.1007/s12369-016-0378-3},\n    journal = {International Journal of Social Robotics},\n    keywords = {human-robot,nonverbal immediacy,robots for education,social behaviour,social cues},\n    number = {1},\n    pages = {109--128},\n    title = {{Nonverbal Immediacy as a Characterisation of Social Behaviour for Human-Robot Interaction}},\n    volume = {9},\n    year = {2017}\n}\n\n
\n
\n\n\n
\n An increasing amount of research has started to explore the impact of robot social behaviour on the outcome of a goal for a human interaction partner, such as cognitive learning gains. However, it remains unclear from what principles the social behaviour for such robots should be derived. Human models are often used, but in this paper an alternative approach is proposed. First, the concept of nonverbal immediacy from the communication literature is introduced, with a focus on how it can provide a characterisation of social behaviour, and the subsequent outcomes of such behaviour. A literature review is conducted to explore the impact on learning of the social cues which form the nonverbal immediacy measure. This leads to the production of a series of guidelines for social robot behaviour. The resulting behaviour is evaluated in a more general context, where both children and adults judge the immediacy of humans and robots in a similar manner, and their recall of a short story is tested. Children recall more of the story when the robot is more immediate, which demonstrates an effect predicted by the literature. This study provides validation for the application of nonverbal immediacy to child–robot interaction. It is proposed that nonverbal immediacy measures could be used as a means of characterising robot social behaviour for human–robot interaction.\n
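Nonverbal immediacy is typically measured with a Likert-scale questionnaire whose negatively phrased items are reverse-coded before summing. A minimal scoring sketch in Python, with invented items standing in for the actual instrument used in the paper:

SCALE_MAX = 5  # assuming 1-5 Likert items

def immediacy_score(responses, reverse_items):
    total = 0
    for item, score in responses.items():
        # Reverse-code negatively phrased items, then sum.
        total += (SCALE_MAX + 1 - score) if item in reverse_items else score
    return total

responses = {"gestures_while_talking": 4,
             "smiles_at_partner": 5,
             "looks_away_from_partner": 2}   # negatively phrased item
print(immediacy_score(responses, reverse_items={"looks_away_from_partner"}))
# -> 4 + 5 + (6 - 2) = 13; higher totals indicate more immediate behaviour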
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2016\n \n \n (13)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n New Frontiers in Human-Robot Interaction: Special Section.\n \n \n \n \n\n\n \n Salem, M.; Weiss, A.; and Baxter, P.\n\n\n \n\n\n\n Interaction Studies, 17(3): 405–407. 2016.\n \n\n\n\n
\n\n\n\n \n \n \"NewPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{Salem2016,\n    author = {Salem, Maha and Weiss, Astrid and Baxter, Paul},\n    doi = {10.1075/is.17.3.05sal},\n    issn = {1572-0373},\n    journal = {Interaction Studies},\n    number = {3},\n    pages = {405--407},\n    title = {{New Frontiers in Human-Robot Interaction: Special Section}},\n    url = {http://www.jbe-platform.com/content/journals/10.1075/is.17.3.05sal},\n    volume = {17},\n    year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Memory-Centred Cognitive Architectures for Robots Interacting Socially with Humans.\n \n \n \n \n\n\n \n Baxter, P.\n\n\n \n\n\n\n In 2nd Workshop on Cognitive Architectures for Social Human-Robot Interaction at HRI'16, Christchurch, New Zealand, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Memory-CentredPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2016,\n  abstract = {The Memory-Centred Cognition perspective places an active association substrate at the heart of cognition, rather than as a passive adjunct. Consequently, it takes prediction and priming, grounded in prior experience, to be inherent and fundamental aspects of processing. Social interaction is taken here to minimally require contingent and co-adaptive behaviours from the interacting parties. In this contribution, I seek to show how the memory-centred cognition approach to cognitive architectures can provide a means of addressing these functions. A number of example implementations are briefly reviewed, particularly focusing on multi-modal alignment as a function of experience-based priming. While the theory, and implementations based on it, require further refinement, this approach provides an interesting alternative perspective on the foundations of cognitive architectures to support robots engaging in social interactions with humans.},\n  address = {Christchurch, New Zealand},\n  archivePrefix = {arXiv},\n  arxivId = {1602.05638},\n  author = {Baxter, Paul},\n  booktitle = {2nd Workshop on Cognitive Architectures for Social Human-Robot Interaction at HRI'16},\n  eprint = {1602.05638},\n  title = {{Memory-Centred Cognitive Architectures for Robots Interacting Socially with Humans}},\n  url = {http://arxiv.org/abs/1602.05638},\n  year = {2016}\n}\n\n
\n
\n\n\n
\n The Memory-Centred Cognition perspective places an active association substrate at the heart of cognition, rather than as a passive adjunct. Consequently, it takes prediction and priming, grounded in prior experience, to be inherent and fundamental aspects of processing. Social interaction is taken here to minimally require contingent and co-adaptive behaviours from the interacting parties. In this contribution, I seek to show how the memory-centred cognition approach to cognitive architectures can provide a means of addressing these functions. A number of example implementations are briefly reviewed, particularly focusing on multi-modal alignment as a function of experience-based priming. While the theory, and implementations based on it, require further refinement, this approach provides an interesting alternative perspective on the foundations of cognitive architectures to support robots engaging in social interactions with humans.\n
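As a loose, purely illustrative toy (not the architecture argued for here), experience-based priming over an associative substrate might be pictured like this in Python: co-occurring events strengthen pairwise links, and activating one event then pre-activates its strongest associates across modalities:

from collections import defaultdict

links = defaultdict(float)  # (event, event) -> association strength

def experience(*events):
    # Strengthen pairwise associations between co-occurring events.
    for a in events:
        for b in events:
            if a != b:
                links[(a, b)] += 1.0

def prime(event, top=2):
    # Spread activation from one event to its strongest associates.
    out = {b: w for (a, b), w in links.items() if a == event}
    return sorted(out, key=out.get, reverse=True)[:top]

experience("word:ball", "sight:red_object", "gesture:point")
experience("word:ball", "sight:red_object")
print(prime("word:ball"))  # -> ['sight:red_object', 'gesture:point']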
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Cautionary Note on Personality (Extroversion) Assessments in Child-Robot Interaction Studies.\n \n \n \n \n\n\n \n Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In 2nd Workshop on Evaluating Child-Robot Interaction at HRI'16, Christchurch, New Zealand, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2016a,\n  address = {Christchurch, New Zealand},\n  author = {Baxter, Paul and Belpaeme, Tony},\n  booktitle = {2nd Workshop on Evaluating Child-Robot Interaction at HRI'16},\n  title = {{A Cautionary Note on Personality (Extroversion) Assessments in Child-Robot Interaction Studies}},\n  url = {https://childrobotinteraction.org/proceedings-2nd-workshop-hri-2016/},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Effect of Repeating Tasks on Performance Levels in Mediated Child-Robot Interactions.\n \n \n \n \n\n\n \n Baxter, P.; Kennedy, J.; Ashurst, E.; and Belpaeme, T.\n\n\n \n\n\n\n In Workshop on Robots for Learning at RoMAN 2016, New York, USA, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2016b,\n  abstract = {That “practice makes perfect” is a powerful heuristic for improving performance through repetition. This is widely used in educational contexts, and as such it provides a potentially useful feature for application to child-robot educational interactions. While this effect may intuitively appear to be present, we here describe data to provide evidence in support of this supposition. Conducting a descriptive analysis of data from a wider study, we specifically examine the effect on child performance of repeating a previously performed collaborative task with a peer robot (i.e. not an expert agent), if initial performance is low. The results generally indicate a positive effect on performance through repetition, and a number of other correlation effects that highlight the role of individual differences. This outcome provides evidence for the variable utility of repetition between individuals, but also indicates that this is driven by the individual, which can nevertheless result in performance improvements even in the context of peer-peer interactions with relatively sparse feedback.},\n  address = {New York, USA},\n  author = {Baxter, Paul and Kennedy, James and Ashurst, Emily and Belpaeme, Tony},\n  booktitle = {Workshop on Robots for Learning at RoMAN 2016},\n  title = {{The Effect of Repeating Tasks on Performance Levels in Mediated Child-Robot Interactions}},\n  url = {http://r4l.epfl.ch/page-129796.html},\n  year = {2016}\n}\n\n
\n
\n\n\n
\n That “practice makes perfect” is a powerful heuristic for improving performance through repetition. This is widely used in educational contexts, and as such it provides a potentially useful feature for application to child-robot educational interactions. While this effect may intuitively appear to be present, we here describe data to provide evidence in support of this supposition. Conducting a descriptive analysis of data from a wider study, we specifically examine the effect on child performance of repeating a previously performed collaborative task with a peer robot (i.e. not an expert agent), if initial performance is low. The results generally indicate a positive effect on performance through repetition, and a number of other correlation effects that highlight the role of individual differences. This outcome provides evidence for the variable utility of repetition between individuals, but also indicates that this is driven by the individual, which can nevertheless result in performance improvements even in the context of peer-peer interactions with relatively sparse feedback.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n From Characterising Three Years of HRI to Methodology and Reporting Recommendations.\n \n \n \n \n\n\n \n Baxter, P.; Kennedy, J.; Senft, E.; Lemaignan, S.; and Belpaeme, T.\n\n\n \n\n\n\n In HRI 2016, pages 391–398, Christchurch, New Zealand, 2016. ACM Press\n \n\n\n\n
\n\n\n\n \n \n \"FromPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2016c,\n  address = {Christchurch, New Zealand},\n  author = {Baxter, Paul and Kennedy, James and Senft, Emmanuel and Lemaignan, S{\\'{e}}verin and Belpaeme, Tony},\n  booktitle = {HRI 2016},\n  isbn = {9781467383707},\n  pages = {391--398},\n  publisher = {ACM Press},\n  title = {{From Characterising Three Years of HRI to Methodology and Reporting Recommendations}},\n  url = {http://dl.acm.org/citation.cfm?id=2906897},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Workshop: Cognitive Architectures for Social Human-Robot Interaction.\n \n \n \n \n\n\n \n Baxter, P.; Lemaignan, S.; and Trafton, J. G.\n\n\n \n\n\n\n In HRI 2016, pages 579–580, Christchurch, New Zealand, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Workshop:Paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2016d,\n  address = {Christchurch, New Zealand},\n  author = {Baxter, Paul and Lemaignan, Severin and Trafton, J. Gregory},\n  booktitle = {HRI 2016},\n  pages = {579--580},\n  title = {{Workshop: Cognitive Architectures for Social Human-Robot Interaction}},\n  url = {http://dl.acm.org/citation.cfm?id=2906989},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users.\n \n \n \n \n\n\n \n Coninx, A.; Baxter, P.; Oleari, E.; Bellini, S.; Bierman, B.; Henkemans, O. B.; Canamero, L.; Cosi, P.; Enescu, V.; Espinoza, R. R.; Hiolle, A.; Humbert, R.; Kiefer, B.; Kruijff-korbayova, I.; Looije, R.; Mosconi, M.; Neerincx, M.; Paci, G.; Patsis, G.; Pozzi, C.; Sacchitelli, F.; Sahli, H.; Sanna, A.; Sommavilla, G.; Tesser, F.; Demiris, Y.; and Belpaeme, T.\n\n\n \n\n\n\n Journal of Human-Robot Interaction, 5(1): 32–67. 2016.\n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{Coninx2015,\n  author = {Coninx, Alexandre and Baxter, Paul and Oleari, Elettra and Bellini, Sara and Bierman, Bert and Henkemans, Olivier Blanson and Canamero, Lola and Cosi, Piero and Enescu, Valentin and Espinoza, Raquel Ros and Hiolle, Antoine and Humbert, Remi and Kiefer, Bernd and Kruijff-korbayova, Ivana and Looije, Rosemarijn and Mosconi, Marco and Neerincx, Mark and Paci, Giulio and Patsis, Georgios and Pozzi, Clara and Sacchitelli, Francesca and Sahli, Hichem and Sanna, Alberto and Sommavilla, Giacomo and Tesser, Fabio and Demiris, Yiannis and Belpaeme, Tony},\n  doi = {10.5898/JHRI.5.1.Coninx},\n  journal = {Journal of Human-Robot Interaction},\n  keywords = {Case study,child-robot interaction,integrated system,knowledge gain,long-term interaction,motivation,multi-objective support,multiple activities},\n  number = {1},\n  pages = {32--67},\n  title = {{Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users}},\n  volume = {5},\n  year = {2016},\n  url={http://humanrobotinteraction.org/journal/index.php/HRI/article/view/248},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Heart vs Hard Drive: Children Learn More From a Human Tutor Than a Social Robot.\n \n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; Senft, E.; and Belpaeme, T.\n\n\n \n\n\n\n In HRI 2016, pages 451–452, Christchurch, New Zealand, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"HeartPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Kennedy2016a,\n  address = {Christchurch, New Zealand},\n  author = {Kennedy, James and Baxter, Paul and Senft, Emmanuel and Belpaeme, Tony},\n  booktitle = {HRI 2016},\n  isbn = {9781467383707},\n  pages = {451--452},\n  title = {{Heart vs Hard Drive: Children Learn More From a Human Tutor Than a Social Robot}},\n  url = {http://dl.acm.org/citation.cfm?id=2906922},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Social Robot Tutoring for Child Second Language Learning.\n \n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; Senft, E.; and Belpaeme, T.\n\n\n \n\n\n\n In HRI 2016, pages 231–238, Christchurch, New Zealand, 2016. ACM Press\n \n\n\n\n
\n\n\n\n \n \n \"SocialPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Kennedy2016,\n  address = {Christchurch, New Zealand},\n  author = {Kennedy, James and Baxter, Paul and Senft, Emmanuel and Belpaeme, Tony},\n  booktitle = {HRI 2016},\n  isbn = {9781467383707},\n  pages = {231--238},\n  publisher = {ACM Press},\n  title = {{Social Robot Tutoring for Child Second Language Learning}},\n  url = {http://dl.acm.org/citation.cfm?id=2906873},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards ``Machine-Learnable'' Child-Robot Interactions: the PInSoRo Dataset.\n \n \n \n \n\n\n \n Lemaignan, S.; Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In Workshop on Long-Term Child-Robot Interaction at RoMAN 2016, New York, USA, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Lemaignan2016,\n  abstract = {Child-robot interactions are increasingly being explored in domains which require longer-term application, such as healthcare and education. In order for a robot to behave in an appropriate manner over longer timescales, its behaviours should be coterminous with that of the interacting children. Generating such sustained and engaging social behaviours is an on-going research challenge, and we argue here that the recent progress of deep machine learning opens new perspectives that the HRI community should embrace. As an initial step in that direction, we propose the creation of a large open dataset of child-robot social interactions. We detail our proposed methodology for data acquisition: children interact with a robot puppeted by an expert adult during a range of playful face-to-face social tasks. By doing so, we seek to capture a rich set of human-like behaviours occurring in natural social interactions, that are explicitly mapped to the robot's embodiment and affordances.},\n  address = {New York, USA},\n  author = {Lemaignan, Severin and Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {Workshop on Long-Term Child-Robot Interaction at RoMAN 2016},\n  title = {{Towards ``Machine-Learnable'' Child-Robot Interactions: the PInSoRo Dataset}},\n  url = {http://web.media.mit.edu/{~}haewon/Roman-LTCRI/},\n  year = {2016}\n}\n\n
\n
\n\n\n
\n Child-robot interactions are increasingly being explored in domains which require longer-term application, such as healthcare and education. In order for a robot to behave in an appropriate manner over longer timescales, its behaviours should be coterminous with that of the interacting children. Generating such sustained and engaging social behaviours is an on-going research challenge, and we argue here that the recent progress of deep machine learning opens new perspectives that the HRI community should embrace. As an initial step in that direction, we propose the creation of a large open dataset of child-robot social interactions. We detail our proposed methodology for data acquisition: children interact with a robot puppeted by an expert adult during a range of playful face-to-face social tasks. By doing so, we seek to capture a rich set of human-like behaviours occurring in natural social interactions, that are explicitly mapped to the robot's embodiment and affordances.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n SPARC: an efficient way to combine reinforcement learning and supervised autonomy.\n \n \n \n\n\n \n Senft, E.; Lemaignan, S.; Baxter, P. E; and Belpaeme, T.\n\n\n \n\n\n\n In Future of Interactive Learning Machines Workshop at NIPS'16, Los Angeles, USA, 2016. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Senft2016a,\n    abstract = {Shortcomings of reinforcement learning for robot control include the sparsity of the environmental reward function, the high number of trials required before reaching an efficient action policy and the reliance on exploration to gather information about the environment, potentially resulting in undesired actions. These limits can be overcome by adding a human in the loop to provide additional information during the learning phase. In this paper, we propose a novel way to combine human inputs and reinforcement learning by following the Supervised Progressively Autonomous Robot Competencies (SPARC) approach. We compare this method to the principles of Interactive Reinforcement Learning as proposed by Thomaz and Breazeal. Results from a study involving 40 participants show that using SPARC increases the performance of the learning, reduces the time and number of inputs required for teaching and results in fewer errors during the learning process. These results support the use of SPARC as an efficient method to teach a robot to interact with humans.},\n    address = {Los Angeles, USA},\n    author = {Senft, Emmanuel and Lemaignan, S{\'{e}}verin and Baxter, Paul E and Belpaeme, Tony},\n    booktitle = {Future of Interactive Learning Machines Workshop at NIPS'16},\n    title = {{SPARC: an efficient way to combine reinforcement learning and supervised autonomy}},\n    year = {2016}\n}\n\n
\n
\n\n\n
\n Shortcomings of reinforcement learning for robot control include the sparsity of the environmental reward function, the high number of trials required before reaching an efficient action policy and the reliance on exploration to gather information about the environment, potentially resulting in undesired actions. These limits can be overcome by adding a human in the loop to provide additional information during the learning phase. In this paper, we propose a novel way to combine human inputs and reinforcement learning by following the Supervised Progressively Autonomous Robot Competencies (SPARC) approach. We compare this method to the principles of Interactive Reinforcement Learning as proposed by Thomaz and Breazeal. Results from a study involving 40 participants show that using SPARC increases the performance of the learning, reduces the time and number of inputs required for teaching and results in fewer errors during the learning process. These results support the use of SPARC as an efficient method to teach a robot to interact with humans.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Providing a Robot with Learning Abilities Improves its Perception by Users.\n \n \n \n \n\n\n \n Senft, E.; Baxter, P.; Kennedy, J.; and Belpaeme, T.\n\n\n \n\n\n\n In HRI 2016, pages 513–514, Christchurch, New Zealand, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"ProvidingPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Senft2016,\n  address = {Christchurch, New Zealand},\n  author = {Senft, Emmanuel and Baxter, Paul and Kennedy, James and Belpaeme, Tony},\n  booktitle = {HRI 2016},\n  isbn = {9781467383707},\n  pages = {513--514},\n  title = {{Providing a Robot with Learning Abilities Improves its Perception by Users}},\n  url = {http://dl.acm.org/citation.cfm?id=2906953},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Socially Contingent Humanoid Robot Head Behaviour Results in Increased Charity Donations.\n \n \n \n \n\n\n \n Wills, P.; Baxter, P.; Kennedy, J.; Senft, E.; and Belpaeme, T.\n\n\n \n\n\n\n In HRI 2016, pages 533–534, Christchurch, New Zealand, 2016. \n \n\n\n\n
\n\n\n\n \n \n \"SociallyPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{Wills2016,\n  address = {Christchurch, New Zealand},\n  author = {Wills, Paul and Baxter, Paul and Kennedy, James and Senft, Emmanuel and Belpaeme, Tony},\n  booktitle = {HRI 2016},\n  isbn = {9781467383707},\n  pages = {533--534},\n  title = {{Socially Contingent Humanoid Robot Head Behaviour Results in Increased Charity Donations}},\n  url = {http://dl.acm.org/citation.cfm?id=2906963},\n  year = {2016}\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2015\n \n \n (8)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n The Wider Supportive Role of Social Robots in the Classroom for Teachers.\n \n \n \n \n\n\n \n Baxter, P.; Ashurst, E.; Kennedy, J.; Senft, E.; Lemaignan, S.; and Belpaeme, T.\n\n\n \n\n\n\n In 1st Int. Workshop on Educational Robotics at the Int. Conf. Social Robotics, Paris, France, 2015. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{Baxter2015,\n  address = {Paris, France},\n  author = {Baxter, Paul and Ashurst, Emily and Kennedy, James and Senft, Emmanuel and Lemaignan, Severin and Belpaeme, Tony},\n  booktitle = {1st Int. Workshop on Educational Robotics at the Int. Conf. Social Robotics},\n  keywords = {Education,Ethics,Methodology,Pedagogy,Social HRI,Teacher Support},\n  title = {{The Wider Supportive Role of Social Robots in the Classroom for Teachers}},\n  url = {https://icsrwonder2015.wordpress.com/},\n  year = {2015}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n L2TOR - Second Language Tutoring using Social Robots.\n \n \n \n \n\n\n \n Belpaeme, T.; Kennedy, J.; Baxter, P.; Vogt, P.; Krahmer, E. E J; Kopp, S.; Bergmann, K.; Leseman, P.; Küntay, A. C; Göksun, T.; Pandey, A. K; Gelin, R.; Koudelkova, P.; and Deblieck, T.\n\n\n \n\n\n\n In 1st Int. Workshop on Educational Robotics at the Int. Conf. Social Robotics, Paris, France, 2015. \n \n\n\n\n
\n\n\n\n \n \n \"L2TORPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Belpaeme2015,\n  address = {Paris, France},\n  author = {Belpaeme, Tony and Kennedy, James and Baxter, Paul and Vogt, Paul and Krahmer, Emiel E J and Kopp, Stefan and Bergmann, Kirsten and Leseman, Paul and K{\\"{u}}ntay, Aylin C and G{\\"{o}}ksun, Tilbe and Pandey, Amit K and Gelin, Rodolphe and Koudelkova, Petra and Deblieck, Tommy},\n  booktitle = {1st Int. Workshop on Educational Robotics at the Int. Conf. Social Robotics},\n  number = {January},\n  title = {{L2TOR - Second Language Tutoring using Social Robots}},\n  url = {https://icsrwonder2015.wordpress.com/},\n  year = {2015}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Can Less be More? The Impact of Robot Social Behaviour on Human Learning.\n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In 4th International Symposium on New Frontiers in Human-Robot Interaction, Canterbury, U.K., 2015. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Kennedy2015a,\n  address = {Canterbury, U.K.},\n  author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {4th International Symposium on New Frontiers in Human-Robot Interaction},\n  title = {{Can Less be More? The Impact of Robot Social Behaviour on Human Learning}},\n  year = {2015}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Head Pose Estimation is an Inadequate Replacement for Eye Gaze in Child-Robot Interaction.\n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In HRI'15, pages in press, Portland, Oregon, USA, 2015. ACM Press\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Kennedy2015b,\n  address = {Portland, Oregon, USA},\n  author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {HRI'15},\n  doi = {10.1145/2701973.2701988},\n  isbn = {9781450333184},\n  pages = {in press},\n  publisher = {ACM Press},\n  title = {{Head Pose Estimation is an Inadequate Replacement for Eye Gaze in Child-Robot Interaction}},\n  year = {2015}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Comparing Robot Embodiments in a Guided Discovery Learning Interaction with Children.\n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n International Journal of Social Robotics, 7(2): 293–308. 2015.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{Kennedy2015c,\n  author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  doi = {10.1007/s12369-014-0277-4},\n  journal = {International Journal of Social Robotics},\n  number = {2},\n  pages = {293--308},\n  title = {{Comparing Robot Embodiments in a Guided Discovery Learning Interaction with Children}},\n  volume = {7},\n  year = {2015}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning.\n \n \n \n\n\n \n Kennedy, J.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 67–74, Portland, Oregon, USA, 2015. ACM Press\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Kennedy2015d,\n  address = {Portland, Oregon, USA},\n  author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {Proceedings of the 10th Annual ACM/IEEE International Conference on Human-Robot Interaction},\n  doi = {10.1145/2696454.2696457},\n  isbn = {9781450328838},\n  pages = {67--74},\n  publisher = {ACM Press},\n  title = {{The Robot Who Tried Too Hard: Social Behaviour of a Robot Tutor Can Negatively Affect Child Learning}},\n  year = {2015}\n}\n\n
\n
\n\n\n\n
\n\n\n
Human-Guided Learning of Social Action Selection for Robot-Assisted Therapy. Senft, E.; Baxter, P.; and Belpaeme, T. In 4th Workshop on Machine Learning for Interactive Systems, pages 15–20, Lille, France, 2015.
@inproceedings{Senft2015,\n  abstract = {This paper presents a method for progressively increasing autonomous action selection capabilities in sensitive environments, where random exploration-based learning is not desirable, using guidance provided by a human supervisor. We describe the global framework and a simulation case study based on a scenario in Robot Assisted Therapy for children with Autism Spectrum Disorder. This simulation illustrates the functional features of our proposed approach, and demonstrates how a system following these principles adapts to different interaction contexts while maintaining an appropriate behaviour for the system at all times.},\n  address = {Lille, France},\n  author = {Senft, Emmanuel and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {4th Workshop on Machine Learning for Interactive Systems},\n  pages = {15--20},\n  title = {{Human-Guided Learning of Social Action Selection for Robot-Assisted Therapy}},\n  url = {http://www.jmlr.org/proceedings/papers/v43/senft15.html},\n  year = {2015}\n}\n\n
This paper presents a method for progressively increasing autonomous action selection capabilities in sensitive environments, where random exploration-based learning is not desirable, using guidance provided by a human supervisor. We describe the global framework and a simulation case study based on a scenario in Robot Assisted Therapy for children with Autism Spectrum Disorder. This simulation illustrates the functional features of our proposed approach, and demonstrates how a system following these principles adapts to different interaction contexts while maintaining an appropriate behaviour for the system at all times.
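The selection mechanism described in this abstract can be pictured with a minimal sketch: the robot proposes an action from stored experience, a supervisor may override the proposal before execution, and every executed decision is added to memory so autonomy grows with coverage. The action set, state features, distance threshold, and nearest-neighbour learner below are assumptions made for the example, not the system from the paper.

import math
import random

# Sketch of progressively increasing autonomy under human supervision: the
# robot proposes an action, the supervisor may override it, and every executed
# decision is stored so the learned policy gradually takes over.

ACTIONS = ["encourage", "wait", "prompt", "end_activity"]  # hypothetical set

class SupervisedActionSelector:
    def __init__(self, threshold=0.5):
        self.memory = []          # list of (state_vector, action) pairs
        self.threshold = threshold

    def propose(self, state):
        if not self.memory:
            return random.choice(ACTIONS), False
        nearest_state, action = min(self.memory,
                                    key=lambda m: math.dist(m[0], state))
        confident = math.dist(nearest_state, state) < self.threshold
        return action, confident

    def execute(self, state, supervisor_override=None):
        proposal, _confident = self.propose(state)
        # The supervisor can always correct the proposal before execution,
        # so no random exploration ever reaches the child.
        action = supervisor_override or proposal
        self.memory.append((state, action))
        return action

selector = SupervisedActionSelector()
selector.execute([0.2, 0.9], supervisor_override="wait")  # early: guided
print(selector.execute([0.21, 0.88]))                     # later: autonomous -> 'wait'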
When is it better to give up? Towards Autonomous Action Selection for Robot Assisted ASD Therapy. Senft, E.; Baxter, P.; Kennedy, J.; and Belpaeme, T. In HRI 2015, Portland, OR, USA, 2015. ACM Press.
@inproceedings{Senft2015a,\n  address = {Portland, OR, USA},\n  author = {Senft, Emmanuel and Baxter, Paul and Kennedy, James and Belpaeme, Tony},\n  booktitle = {HRI 2015},\n  doi = {10.1145/2701973.2702715},\n  isbn = {9781450333184},\n  publisher = {ACM Press},\n  title = {{When is it better to give up? Towards Autonomous Action Selection for Robot Assisted ASD Therapy}},\n  year = {2015}\n}\n\n
2014 (7)
What a robotic companion could do for a diabetic child. Baroni, I.; Nalin, M.; Baxter, P.; Pozzi, C.; Oleari, E.; Sanna, A.; and Belpaeme, T. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication (RoMAN'14), pages 936–941, Edinburgh, U.K., August 2014. IEEE Press.
@inproceedings{Baroni2014,\n  abstract = {Being a child with diabetes is challenging: apart from the emotional difficulties of dealing with the disease, there are multiple physical aspects that need to be dealt with on a daily basis. Furthermore, as the children grow older, it becomes necessary to self-manage their condition without the explicit supervision of parents or carers. This process requires that the children overcome a steep learning curve. Previous work hypothesized that a robot could provide a supporting role in this process. In this paper, we characterise this potential support in greater detail through a structured collection of perspectives from all stakeholders, namely the diabetic children, their siblings and parents, and the healthcare professionals involved in their diabetes education and care. A series of brain-storming sessions were conducted with 22 families with a diabetic child (32 children and 38 adults in total) to explore areas in which they expected that a robot could provide support and/or assistance. These perspectives were then reviewed, validated and extended by healthcare professionals to provide a medical grounding. The results of these analyses suggested a number of specific functions that a companion robot could fulfil to support diabetic children in their daily lives.},\n  address = {Edinburgh, U.K.},\n  author = {Baroni, Ilaria and Nalin, Marco and Baxter, Paul and Pozzi, Clara and Oleari, Elettra and Sanna, Alberto and Belpaeme, Tony},\n  booktitle = {The 23rd IEEE International Symposium on Robot and Human Interactive Communication (RoMAN'14)},\n  doi = {10.1109/ROMAN.2014.6926373},\n  isbn = {978-1-4799-6765-0},\n  month = {aug},\n  pages = {936--941},\n  publisher = {IEEE Press},\n  title = {{What a robotic companion could do for a diabetic child}},\n  url = {http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=6926373},\n  year = {2014}\n}\n\n
Being a child with diabetes is challenging: apart from the emotional difficulties of dealing with the disease, there are multiple physical aspects that need to be dealt with on a daily basis. Furthermore, as the children grow older, it becomes necessary to self-manage their condition without the explicit supervision of parents or carers. This process requires that the children overcome a steep learning curve. Previous work hypothesized that a robot could provide a supporting role in this process. In this paper, we characterise this potential support in greater detail through a structured collection of perspectives from all stakeholders, namely the diabetic children, their siblings and parents, and the healthcare professionals involved in their diabetes education and care. A series of brain-storming sessions were conducted with 22 families with a diabetic child (32 children and 38 adults in total) to explore areas in which they expected that a robot could provide support and/or assistance. These perspectives were then reviewed, validated and extended by healthcare professionals to provide a medical grounding. The results of these analyses suggested a number of specific functions that a companion robot could fulfil to support diabetic children in their daily lives.
Pervasive Memory: the Future of Long-Term Social HRI Lies in the Past. Baxter, P.; and Belpaeme, T. In Salem, M.; and Dautenhahn, K., editors, Third International Symposium on New Frontiers in Human-Robot Interaction at AISB 2014, London, 2014.
@inproceedings{Baxter2014,\n  address = {London},\n  author = {Baxter, Paul and Belpaeme, Tony},\n  booktitle = {Third International Symposium on New Frontiers in Human-Robot Interaction at AISB 2014},\n  editor = {Salem, Maha and Dautenhahn, Kerstin},\n  title = {{Pervasive Memory: the Future of Long-Term Social HRI Lies in the Past}},\n  year = {2014}\n}\n\n
Tracking gaze over time in HRI as a proxy for engagement and attribution of social agency. Baxter, P.; Kennedy, J.; Vollmer, A.; de Greeff, J.; and Belpaeme, T. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14, pages 126–127, Bielefeld, Germany, 2014. ACM Press.
@inproceedings{Baxter2014a,\n  address = {Bielefeld, Germany},\n  author = {Baxter, Paul and Kennedy, James and Vollmer, Anna-Lisa and de Greeff, Joachim and Belpaeme, Tony},\n  booktitle = {Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14},\n  doi = {10.1145/2559636.2559829},\n  isbn = {9781450326582},\n  pages = {126--127},\n  publisher = {ACM Press},\n  title = {{Tracking gaze over time in HRI as a proxy for engagement and attribution of social agency}},\n  url = {http://dl.acm.org/citation.cfm?doid=2559636.2559829},\n  year = {2014}\n}\n\n
Cognitive Architectures for Human-Robot Interaction. Baxter, P.; and Trafton, J. G. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14, pages 504–505, Bielefeld, Germany, 2014. ACM Press.
@inproceedings{Baxter2014c,\n  address = {Bielefeld, Germany},\n  author = {Baxter, Paul and Trafton, J. Gregory},\n  booktitle = {Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14},\n  doi = {10.1145/2559636.2560026},\n  isbn = {9781450326582},\n  pages = {504--505},\n  publisher = {ACM Press},\n  title = {{Cognitive Architectures for Human-Robot Interaction}},\n  url = {http://dl.acm.org/citation.cfm?doid=2559636.2560026},\n  year = {2014}\n}\n\n
Child-Robot Interaction in the Wild: Field Testing Activities of the ALIZ-E Project. Greeff, J. D.; Blanson-Henkemans, O.; Fraaije, A.; Solms, L.; Wigdor, N.; Bierman, B.; Janssen, J. B.; Looije, R.; Baxter, P.; Neerincx, M. A.; and Belpaeme, T. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14, pages 148–149, Bielefeld, Germany, 2014. ACM Press.
@inproceedings{Greeff2014,\n  address = {Bielefeld, Germany},\n  author = {Greeff, Joachim De and Blanson-Henkemans, Olivier and Fraaije, Aafke and Solms, Lara and Wigdor, Noel and Bierman, Bert and Janssen, Joris B. and Looije, Rosemarijn and Baxter, Paul and Neerincx, Mark A and Belpaeme, Tony},\n  booktitle = {Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14},\n  isbn = {9781450326582},\n  pages = {148--149},\n  publisher = {ACM Press},\n  title = {{Child-Robot Interaction in the Wild: Field Testing Activities of the ALIZ-E Project}},\n  year = {2014}\n}\n\n
Children comply with a robot's indirect requests. Kennedy, J.; Baxter, P.; and Belpaeme, T. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14, pages 198–199, Bielefeld, Germany, 2014. ACM Press.
@inproceedings{Kennedy2014,\n  address = {Bielefeld, Germany},\n  author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14},\n  doi = {10.1145/2559636.2559820},\n  isbn = {9781450326582},\n  pages = {198--199},\n  publisher = {ACM Press},\n  title = {{Children comply with a robot's indirect requests}},\n  url = {http://dl.acm.org/citation.cfm?doid=2559636.2559820},\n  year = {2014}\n}\n\n
The chatbot strikes back. Kennedy, J.; de Greeff, J.; Read, R.; Baxter, P.; and Belpaeme, T. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14, volume 1, page 103, Bielefeld, Germany, 2014. ACM Press.
@inproceedings{Kennedy2014a,\n  address = {Bielefeld, Germany},\n  author = {Kennedy, James and de Greeff, Joachim and Read, Robin and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14},\n  doi = {10.1145/2559636.2559650},\n  isbn = {9781450326582},\n  number = {2},\n  pages = {103--103},\n  publisher = {ACM Press},\n  title = {{The chatbot strikes back}},\n  url = {http://dl.acm.org/citation.cfm?doid=2559636.2559650},\n  volume = {1},\n  year = {2014}\n}\n\n
2013 (6)
Cognitive architecture for human–robot interaction: Towards behavioural alignment. Baxter, P. E.; de Greeff, J.; and Belpaeme, T. Biologically Inspired Cognitive Architectures, 6: 30–39. 2013.
@article{Baxter2013,\n  author = {Baxter, Paul E. and de Greeff, Joachim and Belpaeme, Tony},\n  doi = {10.1016/j.bica.2013.07.002},\n  issn = {2212683X},\n  journal = {Biologically Inspired Cognitive Architectures},\n  pages = {30--39},\n  publisher = {Elsevier B.V.},\n  title = {{Cognitive architecture for human–robot interaction: Towards behavioural alignment}},\n  url = {http://linkinghub.elsevier.com/retrieve/pii/S2212683X1300056X},\n  volume = {6},\n  year = {2013}\n}\n\n
Touchscreens as Mediators for Social Human-Robot Interactions: A Focus Group Evaluation involving Diabetic Children. Baxter, P.; Baroni, I.; Nalin, M.; Sanna, A.; and Belpaeme, T. In CmIS workshop at ITS'13, St Andrews, U.K., 2013.
@inproceedings{Baxter2013a,\n  address = {St Andrews, U.K.},\n  author = {Baxter, Paul and Baroni, Ilaria and Nalin, Marco and Sanna, Alberto and Belpaeme, Tony},\n  booktitle = {CmIS workshop at ITS'13},\n  title = {{Touchscreens as Mediators for Social Human-Robot Interactions: A Focus Group Evaluation involving Diabetic Children}},\n  year = {2013}\n}\n\n
Do children behave differently with a social robot if with peers? Baxter, P.; Greeff, J. D.; and Belpaeme, T. In ICSR 2013, volume 3, pages 567–568, Bristol, U.K., 2013. LNCS.
@inproceedings{Baxter2013b,\n  address = {Bristol, U.K.},\n  author = {Baxter, Paul and Greeff, Joachim De and Belpaeme, Tony},\n  booktitle = {ICSR 2013},\n  pages = {567--568},\n  publisher = {LNCS},\n  title = {{Do children behave differently with a social robot if with peers?}},\n  volume = {3},\n  year = {2013}\n}\n\n
Emergence of Turn-taking in Unstructured Child-Robot Social Interactions. Baxter, P.; Wood, R.; Baroni, I.; Kennedy, J.; Nalin, M.; and Belpaeme, T. In HRI'13, pages 77–78, Tokyo, Japan, 2013. ACM Press.
@inproceedings{Baxter2013c,\n  address = {Tokyo, Japan},\n  author = {Baxter, Paul and Wood, Rachel and Baroni, Ilaria and Kennedy, James and Nalin, Marco and Belpaeme, Tony},\n  booktitle = {HRI'13},\n  number = {1},\n  pages = {77--78},\n  publisher = {ACM Press},\n  title = {{Emergence of Turn-taking in Unstructured Child-Robot Social Interactions}},\n  year = {2013}\n}\n\n
Child-Robot Interaction: Perspectives and Challenges. Belpaeme, T.; Baxter, P.; Greeff, J. D.; Kennedy, J.; Looije, R.; Neerincx, M.; Baroni, I.; and Coti, M. In International Conference on Social Robotics, pages 452–459, Bristol, U.K., 2013. Springer.
@inproceedings{Belpaeme2013,\n  address = {Bristol, U.K.},\n  author = {Belpaeme, Tony and Baxter, Paul and Greeff, Joachim De and Kennedy, James and Looije, Rosemarijn and Neerincx, Mark and Baroni, Ilaria and Coti, Mattia},\n  booktitle = {International Conference on Social Robotics},\n  pages = {452--459},\n  publisher = {Springer},\n  title = {{Child-Robot Interaction: Perspectives and Challenges}},\n  year = {2013}\n}\n\n
Constraining Content in Mediated Unstructured Social Interactions: Studies in the Wild. Kennedy, J.; Baxter, P.; and Belpaeme, T. In 5th International Workshop on Affective Interaction in Natural Environments (AFFINE) at ACII 2013, pages 728–733, Geneva, Switzerland, 2013. IEEE Press.
@inproceedings{Kennedy2013,\n  address = {Geneva, Switzerland},\n  author = {Kennedy, James and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {5th International Workshop on Affective Interaction in Natural Environments (AFFINE) at ACII 2013},\n  pages = {728--733},\n  publisher = {IEEE Press},\n  title = {{Constraining Content in Mediated Unstructured Social Interactions: Studies in the Wild}},\n  year = {2013}\n}\n\n
2012 (8)
Towards Augmenting Dialogue Strategy Management with Multimodal Sub-Symbolic Context. Baxter, P.; Cuayahuitl, H.; Wood, R.; Kruijff-Korbayova, I.; and Belpaeme, T. In KI 2012, pages 49–53, Saarbruecken, Germany, 2012.
@inproceedings{Baxter2012,\n  abstract = {A synthetic agent requires the coordinated use of multiple sensory and effector modalities in order to achieve a social human-robot interaction (HRI). While systems in which such a concatenation of multiple modalities exist, the issue of information coordination across modalities to identify relevant context information remains problematic. A system-wide information formalism is typically used to address the issue, which requires a re-encoding of all information into the system ontology. We propose a general approach to this information coordination issue, focussing particularly on a potential application to a dialogue strategy learning and selection system embedded within a wider architecture for social HRI. Rather than making use of a common system ontology, we rather emphasise a sub-symbolic association-driven architecture which has the capacity to influence the ‘internal' processing of all individual system modalities, without requiring the explicit processing or interpretation of modality-specific information.},\n  address = {Saarbruecken, Germany},\n  author = {Baxter, Paul and Cuayahuitl, Heriberto and Wood, Rachel and Kruijff-Korbayova, Ivana and Belpaeme, Tony},\n  booktitle = {KI 2012},\n  pages = {49--53},\n  title = {{Towards Augmenting Dialogue Strategy Management with Multimodal Sub-Symbolic Context}},\n  year = {2012}\n}\n\n
A synthetic agent requires the coordinated use of multiple sensory and effector modalities in order to achieve a social human-robot interaction (HRI). While systems in which such a concatenation of multiple modalities exist, the issue of information coordination across modalities to identify relevant context information remains problematic. A system-wide information formalism is typically used to address the issue, which requires a re-encoding of all information into the system ontology. We propose a general approach to this information coordination issue, focussing particularly on a potential application to a dialogue strategy learning and selection system embedded within a wider architecture for social HRI. Rather than making use of a common system ontology, we rather emphasise a sub-symbolic association-driven architecture which has the capacity to influence the ‘internal' processing of all individual system modalities, without requiring the explicit processing or interpretation of modality-specific information.
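The coordination idea in this abstract, where a sub-symbolic context biases behaviour selection without passing symbolic content between modalities, can be illustrated roughly as follows. The strategy names, context size, and linear weighting are invented for the illustration and are not the architecture from the paper.

import numpy as np

# Each modality contributes only activation to a shared sub-symbolic context
# vector; the dialogue manager's symbolic strategy scores are then biased by
# learned associations with that context, with no shared ontology involved.

N_CONTEXT = 8                                    # assumed context size
STRATEGIES = ["open_question", "closed_question", "feedback"]

rng = np.random.default_rng(0)
weights = {s: rng.normal(size=N_CONTEXT) for s in STRATEGIES}  # learned assoc.

def select_strategy(base_scores, context):
    """Bias symbolic strategy scores with sub-symbolic context activation."""
    biased = {s: base_scores[s] + weights[s] @ context for s in STRATEGIES}
    return max(biased, key=biased.get)

context = np.tanh(rng.normal(size=N_CONTEXT))    # stand-in for fused activation
print(select_strategy({s: 0.0 for s in STRATEGIES}, context))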
Modelling Concept Prototype Competencies using a Developmental Memory Model. Baxter, P.; De Greeff, J.; Wood, R.; and Belpaeme, T. Paladyn Journal of Behavioral Robotics, 3(4): 200–208. 2012.
@article{Baxter2012a,\n  abstract = {The use of concepts is fundamental to human-level cognition, but there remain a number of open questions as to the structures supporting this competence. Specifically, it has been shown that humans use concept prototypes, a flexible means of representing concepts such that it can be used both for categorisation and for similarity judgements. In the context of autonomous robotic agents, the processes by which such concept functionality could be acquired would be particularly useful, enabling flexible knowledge representation and application. This paper seeks to explore this issue of autonomous concept acquisition. By applying a set of structural and operational principles, that support a wide range of cognitive competencies, within a developmental framework, the intention is to explicitly embed the development of concepts into a wider framework of cognitive processing. Comparison with a benchmark concept modelling system shows that the proposed approach can account for a number of features, namely concept-based classification, and its extension to prototype-like functionality.},\n  author = {Baxter, Paul and {De Greeff}, Joachim and Wood, Rachel and Belpaeme, Tony},\n  doi = {10.2478/s13230-013-0105-9},\n  journal = {Paladyn Journal of Behavioral Robotics},\n  number = {4},\n  pages = {200--208},\n  title = {{Modelling Concept Prototype Competencies using a Developmental Memory Model}},\n  url = {http://link.springer.com/article/10.2478/s13230-013-0105-9},\n  volume = {3},\n  year = {2012}\n}\n\n
The use of concepts is fundamental to human-level cognition, but there remain a number of open questions as to the structures supporting this competence. Specifically, it has been shown that humans use concept prototypes, a flexible means of representing concepts such that it can be used both for categorisation and for similarity judgements. In the context of autonomous robotic agents, the processes by which such concept functionality could be acquired would be particularly useful, enabling flexible knowledge representation and application. This paper seeks to explore this issue of autonomous concept acquisition. By applying a set of structural and operational principles, that support a wide range of cognitive competencies, within a developmental framework, the intention is to explicitly embed the development of concepts into a wider framework of cognitive processing. Comparison with a benchmark concept modelling system shows that the proposed approach can account for a number of features, namely concept-based classification, and its extension to prototype-like functionality.
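As a rough illustration of the prototype functionality the paper evaluates, a concept can be reduced to a running-mean prototype that supports both classification and graded typicality. This toy stand-in (invented feature vectors, Euclidean distance) is not the developmental memory model itself.

import math

# Toy stand-in for prototype-based concept use: each concept keeps a
# running-mean prototype of the exemplars seen so far, supporting both
# categorisation and graded typicality judgements.

class PrototypeMemory:
    def __init__(self):
        self.prototypes = {}  # label -> (mean_vector, exemplar_count)

    def observe(self, label, features):
        mean, n = self.prototypes.get(label, ([0.0] * len(features), 0))
        mean = [m + (f - m) / (n + 1) for m, f in zip(mean, features)]
        self.prototypes[label] = (mean, n + 1)

    def classify(self, features):
        return min(self.prototypes,
                   key=lambda label: math.dist(self.prototypes[label][0], features))

    def typicality(self, label, features):
        # Graded similarity judgement: nearer the prototype = more typical.
        return 1.0 / (1.0 + math.dist(self.prototypes[label][0], features))

mem = PrototypeMemory()
mem.observe("bird", [0.9, 0.8])
mem.observe("bird", [0.8, 0.9])
mem.observe("fish", [0.1, 0.2])
print(mem.classify([0.85, 0.8]), round(mem.typicality("bird", [0.85, 0.85]), 2))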
“And what is a Seasnake?”: Modelling the Acquisition of Concept Prototypes in a Developmental Framework. Baxter, P.; Greeff, J. D.; Wood, R.; and Belpaeme, T. In International Conference on Development and Learning and Epigenetic Robotics, pages 1–6, San Diego, USA, 2012. IEEE Press.
@inproceedings{Baxter2012b,\n  address = {San Diego, USA},\n  author = {Baxter, Paul and Greeff, Joachim De and Wood, Rachel and Belpaeme, Tony},\n  booktitle = {International Conference on Development and Learning and Epigenetic Robotics},\n  doi = {10.1109/DevLrn.2012.6400814},\n  pages = {1--6},\n  publisher = {IEEE Press},\n  title = {{“And what is a Seasnake?”: Modelling the Acquisition of Concept Prototypes in a Developmental Framework}},\n  year = {2012}\n}\n\n
Social Behaviour from Prior Experience: a memory-centred cognitive system for social human-robot interaction. Baxter, P.; Wood, R.; and Belpaeme, T. In Vincze, M.; and Krenn, B., editors, 5th Int. Conf. on Cognitive Systems, Poster #9, Vienna, Austria, 2012.
@inproceedings{Baxter2012c,\n  abstract = {The EU FP7 ALIZ-E project seeks to develop the theory and practice of long-term (i.e. involving multiple interaction episodes) embodied Human-Robot Interaction (HRI), with a specific focus on application to child companion robots. One key aspect of this effort is the use of long-term memory to facilitate the adaptive behaviour of the robot in social interaction over time, in response to prior experience. Existing work on memory systems has typically emphasised the semantic storage functions of memory. Here, we rather re-cast memory as an active process in its own right, and indeed one that forms the substrate of the cognitive system. This emphasises memory as a fundamentally associative, sub-symbolic structure constructed and adapted through experience that acts as a coordinator of multi-modal information. The application of this approach enables not only a flexible constructivist/enactive perspective on cognitive architecture, but allows a coherent approach to multi-modal social HRI inherently based on the prior experience of the robotic interactant.},\n  address = {Vienna, Austria},\n  author = {Baxter, Paul and Wood, Rachel and Belpaeme, Tony},\n  booktitle = {5th Int. Conf. on Cognitive Systems},\n  editor = {Vincze, Markus and Krenn, Brigitte},\n  pages = {Poster {\\#}9},\n  title = {{Social Behaviour from Prior Experience: a memory-centred cognitive system for social human-robot interaction}},\n  year = {2012}\n}\n\n
The EU FP7 ALIZ-E project seeks to develop the theory and practice of long-term (i.e. involving multiple interaction episodes) embodied Human-Robot Interaction (HRI), with a specific focus on application to child companion robots. One key aspect of this effort is the use of long-term memory to facilitate the adaptive behaviour of the robot in social interaction over time, in response to prior experience. Existing work on memory systems has typically emphasised the semantic storage functions of memory. Here, we rather re-cast memory as an active process in its own right, and indeed one that forms the substrate of the cognitive system. This emphasises memory as a fundamentally associative, sub-symbolic structure constructed and adapted through experience that acts as a coordinator of multi-modal information. The application of this approach enables not only a flexible constructivist/enactive perspective on cognitive architecture, but allows a coherent approach to multi-modal social HRI inherently based on the prior experience of the robotic interactant.
A Touchscreen-Based ‘Sandtray' to Facilitate, Mediate and Contextualise Human-Robot Social Interaction. Baxter, P.; Wood, R.; and Belpaeme, T. In 7th ACM/IEEE International Conference on Human-Robot Interaction, pages 105–106, Boston, MA, U.S.A., 2012. IEEE Press.
@inproceedings{Baxter2012d,\n  abstract = {In the development of companion robots capable of in-depth, long-term interaction, social scenarios enable exploration of the robot's capacity to engage a human interactant. These scenarios are typically constrained to structured task-based interactions, to enable the quantification of results for the comparison of differing experimental conditions. This paper introduces a hardware setup to facilitate and mediate human-robot social interaction, simplifying the robot control task while enabling an equalised degree of environmental manipulation for the human and robot, but without implicitly imposing an a priori interaction structure.},\n  address = {Boston, MA, U.S.A.},\n  author = {Baxter, Paul and Wood, Rachel and Belpaeme, Tony},\n  booktitle = {7th ACM/IEEE International Conference on Human-Robot Interaction},\n  pages = {105--106},\n  publisher = {IEEE Press},\n  title = {{A Touchscreen-Based ‘Sandtray' to Facilitate, Mediate and Contextualise Human-Robot Social Interaction}},\n  year = {2012}\n}\n\n
In the development of companion robots capable of in-depth, long-term interaction, social scenarios enable exploration of the robot's capacity to engage a human interactant. These scenarios are typically constrained to structured task-based interactions, to enable the quantification of results for the comparison of differing experimental conditions. This paper introduces a hardware setup to facilitate and mediate human-robot social interaction, simplifying the robot control task while enabling an equalised degree of environmental manipulation for the human and robot, but without implicitly imposing an a priori interaction structure.
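The mediating role of the setup can be sketched as a shared scene that both interactants manipulate through equivalent events, so the robot participates in the joint task without vision or physical manipulation. The event structure and callback below are assumptions for illustration, not the actual Sandtray software interface.

# Shared-scene mediation: child and robot manipulate the same on-screen scene
# through equivalent events, so the robot can act in the joint task without
# vision or physical manipulation. Names here are illustrative assumptions.

class SharedScene:
    def __init__(self):
        self.items = {"duck": (100, 200), "tree": (400, 120)}
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def move(self, who, item, position):
        self.items[item] = position
        for notify in self.listeners:
            notify(who, item, position)   # every change is visible to both

def robot_controller(who, item, position):
    if who == "child":
        print(f"robot observes child moving {item} to {position}")

scene = SharedScene()
scene.subscribe(robot_controller)
scene.move("child", "duck", (250, 180))   # a child's touch event
scene.move("robot", "tree", (300, 100))   # the robot's equivalent action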
Multimodal Child-Robot Interaction: Building Social Bonds. Belpaeme, T.; Baxter, P.; Read, R.; Wood, R.; Cuayahuitl, H.; Kiefer, B.; Racioppa, S.; Kruijff-Korbayova, I.; Athanasopoulos, G.; Enescu, V.; Looije, R.; Neerincx, M.; Demiris, Y.; Ros-Espinoza, R.; Beck, A.; Canamero, L.; Hiolle, A.; Lewis, M.; Baroni, I.; Nalin, M.; Cosi, P.; Paci, G.; Tesser, F.; Sommavilla, G.; and Humbert, R. Journal of Human-Robot Interaction, 1(2): 33–53. 2012.
@article{Belpaeme2012,\n  abstract = {For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation “in the wild.” The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.},\n  author = {Belpaeme, Tony and Baxter, Paul and Read, Robin and Wood, Rachel and Cuayahuitl, Heriberto and Kiefer, Bernd and Racioppa, Stefania and Kruijff-Korbayova, Ivana and Athanasopoulos, Georgios and Enescu, Valentin and Looije, Rosemarijn and Neerincx, Mark and Demiris, Yiannis and Ros-Espinoza, Raquel and Beck, Aryel and Canamero, Lola and Hiolle, Antoine and Lewis, Matthew and Baroni, Ilaria and Nalin, Marco and Cosi, Piero and Paci, Giulio and Tesser, Fabio and Sommavilla, Giacomo and Humbert, Remi},\n  doi = {10.5898/JHRI.1.2.Belpaeme},\n  journal = {Journal of Human-Robot Interaction},\n  number = {2},\n  pages = {33--53},\n  title = {{Multimodal Child-Robot Interaction: Building Social Bonds}},\n  volume = {1},\n  year = {2012}\n}\n\n
For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation “in the wild.” The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.
From Penguins to Parakeets: a developmental approach to modelling conceptual prototypes. De Greeff, J.; Baxter, P.; Wood, R.; and Belpaeme, T. In Szufnarowska, J., editor, PG Conference on Robotics and Development of Cognition at ICANN 2012, pages 8–11, Lausanne, Switzerland, 2012.
@inproceedings{DeGreeff2012,\n  abstract = {The use of concepts is a fundamental capacity underlying complex, human-level cognition. A number of theories have explored the means of concept representation and their links to lower-level features, with one notable example being the Conceptual Spaces theory. While these provide an account for such essential functional processes as prototypes and typicality, it is not entirely clear how these aspects of human cognition can arise in a system undergoing continuous development - postulated to be a necessity from the developmental systems perspective. This paper seeks to establish the foundation of an approach to this question by showing that a distributed, associative and continuous development mechanism, founded on principles of biological memory, can achieve classification performance comparable to the Conceptual Spaces model. We show how qualitatively similar prototypes are formed by both systems when exposed to the same dataset, which illustrates how both models can account for the development of conceptual primitives.},\n  address = {Lausanne, Switzerland},\n  author = {{De Greeff}, Joachim and Baxter, Paul and Wood, Rachel and Belpaeme, Tony},\n  booktitle = {PG Conference on Robotics and Development of Cognition at ICANN 2012},\n  editor = {Szufnarowska, Joanna},\n  pages = {8--11},\n  title = {{From Penguins to Parakeets: a developmental approach to modelling conceptual prototypes}},\n  year = {2012}\n}\n\n
The use of concepts is a fundamental capacity underlying complex, human-level cognition. A number of theories have explored the means of concept representation and their links to lower-level features, with one notable example being the Conceptual Spaces theory. While these provide an account for such essential functional processes as prototypes and typicality, it is not entirely clear how these aspects of human cognition can arise in a system undergoing continuous development - postulated to be a necessity from the developmental systems perspective. This paper seeks to establish the foundation of an approach to this question by showing that a distributed, associative and continuous development mechanism, founded on principles of biological memory, can achieve classification performance comparable to the Conceptual Spaces model. We show how qualitatively similar prototypes are formed by both systems when exposed to the same dataset, which illustrates how both models can account for the development of conceptual primitives.
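The comparison made here can be pictured as: expose two models to the same exemplars, extract one prototype per category from each, and check agreement. In this toy reduction both "models" are stand-ins (a centroid for the Conceptual Spaces side, a leaky running average for the developmental side); the data and rates are invented for the example.

import numpy as np

# Two toy "models" exposed to the same exemplars: a centroid prototype (a
# Conceptual-Spaces-style benchmark) and a leaky running average (a crude
# stand-in for an incrementally developing system).

rng = np.random.default_rng(42)
exemplars = rng.normal(loc=[0.7, 0.3, 0.9], scale=0.05, size=(50, 3))

proto_centroid = exemplars.mean(axis=0)          # benchmark prototype

proto_incremental = np.zeros(3)                  # developmental stand-in
for x in rng.permutation(exemplars):             # random presentation order
    proto_incremental += 0.1 * (x - proto_incremental)

# Qualitative agreement between the two prototypes, as in the comparison.
print(proto_centroid.round(2), proto_incremental.round(2))
print("correlation:", round(float(np.corrcoef(proto_centroid, proto_incremental)[0, 1]), 3))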
A Review of long-term memory in natural and synthetic systems. Wood, R.; Baxter, P.; and Belpaeme, T. Adaptive Behavior, 20(2): 81–103. 2012.
@article{Wood2012,\n  abstract = {Memory may be broadly regarded as information gained from past experience which is available in the service of ongoing and future adaptive behavior. The biological implementation of memory shares little with memory in synthetic cognitive systems where it is typically regarded as a passive storage structure. Neurophysiological evidence indicates that memory is neither passive nor centralised. A review of the relevant literature in the biological and computer sciences is conducted and a novel methodology is applied that incorporates neuroethological approaches with general biological inspiration in the design of synthetic cognitive systems: a case study regarding episodic memory provides an illustration of the utility of this methodology. As a consequence of applying this approach to the reinterpretation of the implementation of memory in synthetic systems, four fundamental functional principles are derived that are in accordance with neuroscientific theory, and which may be applied to the design of more adaptive and robust synthetic cognitive systems: priming, cross-modal associations, cross-modal coordination without semantic information transfer, and global system behavior resulting from activation dynamics within the memory system.},\n  author = {Wood, Rachel and Baxter, Paul and Belpaeme, Tony},\n  journal = {Adaptive Behavior},\n  number = {2},\n  pages = {81--103},\n  title = {{A Review of long-term memory in natural and synthetic systems}},\n  volume = {20},\n  year = {2012}\n}\n\n
Memory may be broadly regarded as information gained from past experience which is available in the service of ongoing and future adaptive behavior. The biological implementation of memory shares little with memory in synthetic cognitive systems where it is typically regarded as a passive storage structure. Neurophysiological evidence indicates that memory is neither passive nor centralised. A review of the relevant literature in the biological and computer sciences is conducted and a novel methodology is applied that incorporates neuroethological approaches with general biological inspiration in the design of synthetic cognitive systems: a case study regarding episodic memory provides an illustration of the utility of this methodology. As a consequence of applying this approach to the reinterpretation of the implementation of memory in synthetic systems, four fundamental functional principles are derived that are in accordance with neuroscientific theory, and which may be applied to the design of more adaptive and robust synthetic cognitive systems: priming, cross-modal associations, cross-modal coordination without semantic information transfer, and global system behavior resulting from activation dynamics within the memory system.
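One of the four principles derived in the review, priming, can be illustrated in a few lines: activating an element leaves residual activation that decays, so a recently activated item crosses the recognition threshold sooner. The decay rate, threshold, and input strength are arbitrary values chosen for the sketch.

# Priming as residual activation: activating a memory element leaves decaying
# activation behind, so a recently activated item crosses the recognition
# threshold sooner on its next occurrence.

activation = {"ball": 0.0, "cup": 0.0}
DECAY, THRESHOLD, INPUT = 0.8, 0.9, 0.6

def step(percept=None):
    for item in activation:
        activation[item] *= DECAY            # passive decay every step
    if percept is not None:
        activation[percept] += INPUT         # bottom-up sensory input
    return [item for item, a in activation.items() if a >= THRESHOLD]

step("ball")          # first exposure: 0.6 < 0.9, not yet recognised
print(step("ball"))   # primed: 0.6 * 0.8 + 0.6 = 1.08 -> ['ball']
print(step("cup"))    # unprimed item: 0.6 < 0.9 -> []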
2011 (6)
Long-term human-robot interaction with young users. Baxter, P.; Belpaeme, T.; Canamero, L.; Cosi, P.; Demiris, Y.; Enescu, V.; Hiolle, A.; Kruijff-Korbayova, I.; Looije, R.; Nalin, M.; Neerincx, M.; Sommavilla, G.; Tesser, F.; and Wood, R. In Miyake, N.; Ishiguro, H.; Dautenhahn, K.; and Nomura, T., editors, Proceedings of the Workshop Robots with Children: Practices for Human-Robot Symbiosis, at the 6th ACM/IEEE International Conference on Human-Robot Interaction, Lausanne, Switzerland, 2011.
@inproceedings{Baxter2011,\n  abstract = {Artificial companion agents have the potential to combine novel means for effective health communication with young patients support and entertainment. However, the theory and practice of long-term child-robot interaction is currently an under-developed area of research. This paper introduces an approach that integrates multiple functional aspects necessary to implement temporally extended human-robot interaction in the setting of a paediatric ward. We present our methodology for the implementation of a companion robot which will be used to support young patients in hospital as they learn to manage a lifelong metabolic disorder (diabetes). The robot will interact with patients over an extended period of time. The necessary functional aspects are identified and introduced, and a review of the technical challenges involved is presented.},\n  address = {Lausanne, Switzerland},\n  annote = {Alphabetical order of authors},\n  author = {Baxter, Paul and Belpaeme, Tony and Canamero, Lola and Cosi, Piero and Demiris, Yiannis and Enescu, Valentin and Hiolle, Antoine and Kruijff-Korbayova, Ivana and Looije, Rosemarijn and Nalin, Marco and Neerincx, M. and Sommavilla, Giacomo and Tesser, Fabio and Wood, Rachel},\n  booktitle = {Proceedings of the Workshop Robots with Children: Practices for Human-Robot Symbiosis, at the 6th ACM/IEEE International Conference on Human-Robot Interaction},\n  editor = {Miyake, N. and Ishiguro, H. and Dautenhahn, K. and Nomura, T.},\n  title = {{Long-term human-robot interaction with young users}},\n  year = {2011}\n}\n\n
Artificial companion agents have the potential to combine novel means for effective health communication with young patients support and entertainment. However, the theory and practice of long-term child-robot interaction is currently an under-developed area of research. This paper introduces an approach that integrates multiple functional aspects necessary to implement temporally extended human-robot interaction in the setting of a paediatric ward. We present our methodology for the implementation of a companion robot which will be used to support young patients in hospital as they learn to manage a lifelong metabolic disorder (diabetes). The robot will interact with patients over an extended period of time. The necessary functional aspects are identified and introduced, and a review of the technical challenges involved is presented.
Memory-Based Cognitive Framework: a Low-Level Association Approach to Cognitive Architectures. Baxter, P.; and Browne, W. In Kampis, G.; Karsai, I.; and Szathmáry, E., editors, European Conference on Artificial Life (ECAL 09), pages 402–409, Budapest, Hungary, 2011. LNCS 5777, Springer-Verlag.
@inproceedings{Baxter2011a,\n  address = {Budapest, Hungary},\n  author = {Baxter, Paul and Browne, Will},\n  booktitle = {European Conference on Artificial Life (ECAL 09)},\n  editor = {Kampis, G. and Karsai, I. and Szathm{\\'{a}}ry, E.},\n  pages = {402--409},\n  publisher = {LNCS 5777, Springer-Verlag},\n  title = {{Memory-Based Cognitive Framework: a Low-Level Association Approach to Cognitive Architectures}},\n  year = {2011}\n}\n\n
On Memory Systems for Companion Robots: Implementation Methodologies and Legal Implications. Baxter, P.; Wood, R.; Belpaeme, T.; and Nalin, M. In Ho, W. C.; Lim, M. Y.; and Brom, C., editors, Proceedings of the 2nd AISB Symposium on Human Memory for Artificial Agents, pages 30–34, York, U.K., 2011.
@inproceedings{Baxter2011b,\n  abstract = {Companion robots are becoming increasingly prevalent in a wide variety of domains. The development of realistic long-term human-robot interaction is desirable and this entails the extension of interactions over multiple episodes. Memory systems are thus required in support of this goal. While current memory systems for artificial agents (and companion robots in particular) are currently restricted to symbolic database structures, this is not guaranteed to remain the case, with an increasing number of approaches using sub-symbolic representation schemes. This position paper explores the legal and ethical consequences of this shift of perspective by examining a range of solutions to the problem of data removal from artificial memory systems, specifically in the context of healthcare applications, and concludes that the current legislative provisions for data processing and protection may be inadequate for the next generations of companion robots.},\n  address = {York, U.K.},\n  author = {Baxter, Paul and Wood, Rachel and Belpaeme, Tony and Nalin, Marco},\n  booktitle = {Proceedings of the 2nd AISB Symposium on Human Memory for Artificial Agents},\n  editor = {Ho, Wan Ching and Lim, Mei Yii and Brom, Cyril},\n  pages = {30--34},\n  title = {{On Memory Systems for Companion Robots: Implementation Methodologies and Legal Implications}},\n  year = {2011}\n}\n\n
Companion robots are becoming increasingly prevalent in a wide variety of domains. The development of realistic long-term human-robot interaction is desirable and this entails the extension of interactions over multiple episodes. Memory systems are thus required in support of this goal. While current memory systems for artificial agents (and companion robots in particular) are currently restricted to symbolic database structures, this is not guaranteed to remain the case, with an increasing number of approaches using sub-symbolic representation schemes. This position paper explores the legal and ethical consequences of this shift of perspective by examining a range of solutions to the problem of data removal from artificial memory systems, specifically in the context of healthcare applications, and concludes that the current legislative provisions for data processing and protection may be inadequate for the next generations of companion robots.
Memory-Centred Architectures: Perspectives on Human-level Cognitive Competencies. Baxter, P.; Wood, R.; Morse, A.; and Belpaeme, T. In Langley, P., editor, Proceedings of the AAAI Fall 2011 symposium on Advances in Cognitive Systems, pages 26–33, Arlington, Virginia, U.S.A., 2011. AAAI Press.
@inproceedings{Baxter2011c,\n  abstract = {In the context of cognitive architectures, memory is typically considered as a passive storage device with the sole purpose of maintaining and retrieving information relevant to ongoing cognitive processing. If memory is instead considered to be a fundamentally active aspect of cognition, as increasingly suggested by empirically-derived neurophysiological theory, this passive role must be reinterpreted. In this perspective, memory is the distributed substrate of cognition, forming the foundation for cross-modal priming, and hence soft cross-modal coordination. This paper seeks to describe what a cognitive architecture based on this perspective must involve, and initiates an exploration into how human-level cognitive competencies (namely episodic memory, word label conjunction learning, and social behaviour) can be accounted for in such a low-level framework. This proposal of a memory-centred cognitive architecture presents new insights into the nature of cognition, with benefits for computational implementations such as generality and robustness that have only begun to be exploited.},\n  address = {Arlington, Virginia, U.S.A.},\n  author = {Baxter, Paul and Wood, Rachel and Morse, Anthony and Belpaeme, Tony},\n  booktitle = {Proceedings of the AAAI Fall 2011 symposium on Advances in Cognitive Systems},\n  editor = {Langley, Pat},\n  pages = {26--33},\n  publisher = {AAAI Press},\n  title = {{Memory-Centred Architectures: Perspectives on Human-level Cognitive Competencies}},\n  year = {2011}\n}\n\n
In the context of cognitive architectures, memory is typically considered as a passive storage device with the sole purpose of maintaining and retrieving information relevant to ongoing cognitive processing. If memory is instead considered to be a fundamentally active aspect of cognition, as increasingly suggested by empirically-derived neurophysiological theory, this passive role must be reinterpreted. In this perspective, memory is the distributed substrate of cognition, forming the foundation for cross-modal priming, and hence soft cross-modal coordination. This paper seeks to describe what a cognitive architecture based on this perspective must involve, and initiates an exploration into how human-level cognitive competencies (namely episodic memory, word label conjunction learning, and social behaviour) can be accounted for in such a low-level framework. This proposal of a memory-centred cognitive architecture presents new insights into the nature of cognition, with benefits for computational implementations such as generality and robustness that have only begun to be exploited.
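The claim that cross-modal coordination can arise without semantic information transfer can be made concrete with a toy Hebbian sketch: two modality layers are linked only by co-occurrence weights, so activity in one biases the other with no shared symbolic code. Layer sizes, learning rate, and stimuli below are assumptions for the example, not the proposed architecture.

import numpy as np

# Cross-modal coordination without semantic transfer: two modality layers are
# linked only by Hebbian co-occurrence weights, so activity in one biases the
# other without any shared symbols being exchanged.

N_VISION, N_AUDIO, RATE = 6, 4, 0.5
W = np.zeros((N_VISION, N_AUDIO))        # associative weights between layers

def co_occur(vision, audio):
    global W
    W += RATE * np.outer(vision, audio)  # strengthen co-active unit pairs

def prime_audio(vision):
    return W.T @ vision                  # visual activity biases audio layer

seen_ball = np.eye(N_VISION)[0]          # toy visual pattern
heard_ball = np.eye(N_AUDIO)[2]          # toy auditory pattern
for _ in range(3):                       # repeated co-occurrence
    co_occur(seen_ball, heard_ball)

print(prime_audio(seen_ball))            # audio unit 2 pre-activated: [0. 0. 1.5 0.]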
The Power of Words. Morse, A. F.; Baxter, P.; Belpaeme, T.; Smith, L. B.; and Cangelosi, A. In Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, pages 1–6, Frankfurt am Main, Germany, 2011. IEEE Press.
@inproceedings{Morse2011,\n  abstract = {Language is special, yet its power to facilitate communication may have distracted researchers from the power of another, potential precursor ability: the ability to label things, and the effect this can have in transforming or extending cognitive abilities. In this paper we present a simple robotic model, using the iCub robot, demonstrating the effects of spatial grouping, binding, and linguistic tagging in extending our cognitive abilities.},\n  address = {Frankfurt am Main, Germany},\n  author = {Morse, Anthony F. and Baxter, Paul and Belpaeme, Tony and Smith, Linda B. and Cangelosi, Angelo},\n  booktitle = {Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics},\n  doi = {10.1109/DEVLRN.2011.6037349},\n  issn = {1098-1861},\n  pages = {1--6},\n  publisher = {IEEE Press},\n  title = {{The Power of Words}},\n  url = {http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp={\\&}arnumber=6037349{\\&}searchWithin{\\%}3Dp{\\_}Authors{\\%}3A.QT.Baxter{\\%}2C+Paul.QT.},\n  year = {2011}\n}\n\n
Language is special, yet its power to facilitate communication may have distracted researchers from the power of another, potential precursor ability: the ability to label things, and the effect this can have in transforming or extending cognitive abilities. In this paper we present a simple robotic model, using the iCub robot, demonstrating the effects of spatial grouping, binding, and linguistic tagging in extending our cognitive abilities.
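The labelling effect described here can be caricatured in a few lines: features are grouped by the body-centred location where they were experienced, and a heard word binds to the attended location rather than to an object representation. The scenario details are invented; this is not the iCub model's code.

# Label binding via spatial grouping: features are indexed by the body-centred
# location where they were experienced, and a heard word binds to the attended
# location rather than to an object representation.

location_features = {}   # location -> set of features experienced there
word_bindings = {}       # word -> location it was bound to

def perceive(location, features):
    location_features.setdefault(location, set()).update(features)

def hear_word(word, attended_location):
    word_bindings[word] = attended_location   # tag the place, not the object

perceive("left", {"red", "round"})
perceive("right", {"blue", "square"})
hear_word("modi", "left")

# The word later retrieves whatever features were grouped at that location.
print(location_features[word_bindings["modi"]])   # e.g. {'red', 'round'}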
Child-Robot Interaction in The Wild: Advice to the Aspiring Experimenter. Ros, R.; Nalin, M.; Wood, R.; Baxter, P.; Looije, R.; Demiris, Y.; Belpaeme, T.; Giusti, A.; and Pozzi, C. In ICMI, pages 335–342, Alicante, Spain, 2011.
@inproceedings{Ros2011,\n  abstract = {We present insights gleaned from a series of child-robot interaction experiments carried out in a hospital paediatric department. Our aim here is to share good practice in experimental design and lessons learned about the implementation of systems for social HRI with child users towards application in "the wild", rather than in tightly controlled and constrained laboratory environments: a trade-off between the structures imposed by experimental design and the desire for removal of such constraints that inhibit interaction depth, and hence engagement, requires a careful balance.},\n  address = {Alicante, Spain},\n  author = {Ros, Raquel and Nalin, Marco and Wood, Rachel and Baxter, Paul and Looije, Rosemarijn and Demiris, Yiannis and Belpaeme, Tony and Giusti, Alessio and Pozzi, Clara},\n  booktitle = {ICMI},\n  doi = {10.1145/2070481.2070545},\n  isbn = {9781450306416},\n  pages = {335--342},\n  title = {{Child-Robot Interaction in The Wild: Advice to the Aspiring Experimenter}},\n  year = {2011}\n}\n\n
We present insights gleaned from a series of child-robot interaction experiments carried out in a hospital paediatric department. Our aim here is to share good practice in experimental design and lessons learned about the implementation of systems for social HRI with child users towards application in "the wild", rather than in tightly controlled and constrained laboratory environments: a trade-off between the structures imposed by experimental design and the desire for removal of such constraints that inhibit interaction depth, and hence engagement, requires a careful balance.
2010 (4)
Foundations of a Constructivist Memory-Based Approach to Cognitive Robotics. Baxter, P. Ph.D. Thesis, University of Reading, U.K., 2010.
@phdthesis{Baxter2010,\n  abstract = {Cognitive robotics are applicable to many aspects of modern society. These artificial agents may also be used as platforms to investigate the nature and function of cognition itself through the creation and manipulation of biologically-inspired cognitive architectures. However, the flexibility and robustness of current systems are limited by the restricted use of previous experience. Memory thus has a clear role in cognitive architectures, as a means of linking past experience to present and future behaviour. Current cognitive robotics architectures typically implement a version of Working Memory - a functionally separable system that forms the link between long-term memory (information storage) and cognition (information processing). However, this division of function gives rise to practical and theoretical problems, particularly regarding the nature, origin and use of the information held in memory and used in the service of ongoing behaviour. The aim of this work is to address these problems by synthesising a new approach to cognitive robotics, based on the perspective that cognition is fundamentally concerned with the manipulation and utilisation of memory. A novel theoretical framework is proposed that unifies memory and control into a single structure: the Memory-Based Cognitive Framework (MBCF). It is shown that this account of cognitive functionality requires the mechanism of constructivist knowledge formation through ongoing environmental interaction, the explicit integration of agent embodiment, and a value system to drive the development of coherent behaviours. A novel robotic implementation - the Embodied MBCF Agent (EMA) - is introduced to illustrate and explore the central features of the MBCF. By encompassing aspects of both network structures and explicit representation schemes, neural and non-neural inspired processes are integrated to an extent not possible in current approaches. This research validates the memory-based approach to cognitive robotics, providing the foundation for the application of these principles to higher-level cognitive competencies.},\n  author = {Baxter, Paul},\n  number = {December},\n  school = {University of Reading, U.K.},\n  title = {{Foundations of a Constructivist Memory-Based Approach to Cognitive Robotics}},\n  year = {2010}\n}\n\n
\n
\n\n\n
\n Cognitive robots are applicable to many aspects of modern society. These artificial agents may also be used as platforms to investigate the nature and function of cognition itself through the creation and manipulation of biologically-inspired cognitive architectures. However, the flexibility and robustness of current systems are limited by the restricted use of previous experience. Memory thus has a clear role in cognitive architectures, as a means of linking past experience to present and future behaviour. Current cognitive robotics architectures typically implement a version of Working Memory - a functionally separable system that forms the link between long-term memory (information storage) and cognition (information processing). However, this division of function gives rise to practical and theoretical problems, particularly regarding the nature, origin and use of the information held in memory and used in the service of ongoing behaviour. The aim of this work is to address these problems by synthesising a new approach to cognitive robotics, based on the perspective that cognition is fundamentally concerned with the manipulation and utilisation of memory. A novel theoretical framework is proposed that unifies memory and control into a single structure: the Memory-Based Cognitive Framework (MBCF). It is shown that this account of cognitive functionality requires the mechanism of constructivist knowledge formation through ongoing environmental interaction, the explicit integration of agent embodiment, and a value system to drive the development of coherent behaviours. A novel robotic implementation - the Embodied MBCF Agent (EMA) - is introduced to illustrate and explore the central features of the MBCF. By encompassing aspects of both network structures and explicit representation schemes, neural and non-neural inspired processes are integrated to an extent not possible in current approaches. This research validates the memory-based approach to cognitive robotics, providing the foundation for the application of these principles to higher-level cognitive competencies.\n
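The thesis does not publish an implementation, so the following minimal Python sketch is only an illustration of the central idea in this abstract; every name in it (AssociativeMemory, select_action, the toy percepts and value signal) is hypothetical. It shows a single associative structure serving simultaneously as memory and controller, with a value signal gating which experiences strengthen it.

import random
from collections import defaultdict

class AssociativeMemory:
    """Hypothetical sketch: one structure acts as both memory and controller."""
    def __init__(self, learning_rate=0.1):
        self.weights = defaultdict(float)  # (percept, action) -> association strength
        self.learning_rate = learning_rate

    def select_action(self, percept, actions):
        # Control is read directly out of memory: take the action most
        # strongly associated with the current percept, or explore if
        # nothing useful has yet been learned for this percept.
        strength, action = max((self.weights[(percept, a)], a) for a in actions)
        return action if strength > 0 else random.choice(actions)

    def update(self, percept, action, value_signal):
        # Constructivist formation: associations are built only through
        # ongoing interaction, gated by the value system's feedback.
        self.weights[(percept, action)] += self.learning_rate * value_signal

# Toy obstacle-avoidance value system, purely for illustration.
GOOD = {("obstacle_left", "turn_right"), ("obstacle_right", "turn_left"),
        ("clear", "forward")}
memory = AssociativeMemory()
for _ in range(200):
    percept = random.choice(["obstacle_left", "obstacle_right", "clear"])
    action = memory.select_action(percept, ["turn_left", "turn_right", "forward"])
    memory.update(percept, action, 1.0 if (percept, action) in GOOD else -0.5)
print(memory.select_action("obstacle_left", ["turn_left", "turn_right", "forward"]))

The point of the sketch is the unification the abstract argues for: there is no separable working-memory module, since select_action reads control directly out of the same weights that update writes.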
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Memory as the substrate of cognition: a developmental cognitive robotics perspective.\n \n \n \n\n\n \n Baxter, P.; and Browne, W.\n\n\n \n\n\n\n In Johansson, B.; Sahin, E.; and Balkenius, C., editor(s), Proceedings of the Tenth International Conference on Epigenetic Robotics, pages 19–26, Örenäs Slott, Sweden, 2010. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2010a,\n  abstract = {Recent developments in neuroscientific theory have suggested that cognition is inherently memory-based, where memory is fundamentally associative. The application of this perspective to cognitive robotics is not well developed, especially in the context of the constraints and structure afforded by the embodiment of the agent. This paper seeks to describe the foundation of an approach that incorporates this memory-based perspective, by presenting a theoretical framework that captures the necessary aspects. A computational implementation of this framework is described, and a low-level application case study is discussed, which validates this memory-based approach. This implementation emphasises the necessity of environmental interaction in the ongoing development of behavioural competencies, and the central role that a value system plays in this process.},\n  address = {{\"{O}}ren{\"{a}}s Slott, Sweden},\n  author = {Baxter, Paul and Browne, Will},\n  booktitle = {Proceedings of the Tenth International Conference on Epigenetic Robotics},\n  editor = {Johansson, B. and Sahin, E. and Balkenius, C.},\n  pages = {19--26},\n  title = {{Memory as the substrate of cognition: a developmental cognitive robotics perspective}},\n  year = {2010}\n}\n\n
\n
\n\n\n
\n Recent developments in neuroscientific theory have suggested that cognition is inherently memory-based, where memory is fundamentally associative. The application of this perspective to cognitive robotics is not well developed, especially in the context of the constraints and structure afforded by the embodiment of the agent. This paper seeks to describe the foundation of an approach that incorporates this memory-based perspective, by presenting a theoretical framework that captures the necessary aspects. A computational implementation of this framework is described, and a low-level application case study is discussed, which validates this memory-based approach. This implementation emphasises the necessity of environmental interaction in the ongoing development of behavioural competencies, and the central role that a value system plays in this process.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Episodic memory for social interaction.\n \n \n \n\n\n \n Baxter, P.; Wood, R.; and Belpaeme, T.\n\n\n \n\n\n\n In Workshop on Synthetic Neuroethology, Brighton, U.K., 2010. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2010b,\n  abstract = {ALIZ-E (Adaptive Strategies for Sustainable Long-term Social Interaction) is a project aiming to explore the implementation of meaningful, temporally extended social interactions between humans and robots. A core component of the systems devised to supply this functionality will be episodic memory, allowing the robot to recall prior encounters with a user and to adapt its behaviour on the basis of previous events. Episodic memory is generally implemented as a discrete component in current artificial cognitive systems. In biological systems the functions of memory are evidently more complex, and episodic memory can provide a useful example of the benefits of a neuroethological perspective, where a behavioural definition has been advanced (dispensing with the conscious recall requirement from human-based research). A commitment to a developmental, memory-centred approach will allow episodic memory to be considered as part of an integrated system. Episodic memory can thus be fundamentally linked to the cognitive system as a whole, enabling it to play a central part in the on-going development of interactive behaviour.},\n  address = {Brighton, U.K.},\n  author = {Baxter, Paul and Wood, Rachel and Belpaeme, Tony},\n  booktitle = {Workshop on Synthetic Neuroethology},\n  title = {{Episodic memory for social interaction}},\n  year = {2010}\n}\n\n
\n
\n\n\n
\n ALIZ-E (Adaptive Strategies for Sustainable Long-term Social Interaction) is a project aiming to explore the implementation of meaningful, temporally extended social interactions between humans and robots. A core component of the systems devised to supply this functionality will be episodic memory, allowing the robot to recall prior encounters with a user and to adapt its behaviour on the basis of previous events. Episodic memory is generally implemented as a discrete component in current artificial cognitive systems. In biological systems the functions of memory are evidently more complex, and episodic memory can provide a useful example of the benefits of a neuroethological perspective, where a behavioural definition has been advanced (dispensing with the conscious recall requirement from human-based research). A commitment to a developmental, memory-centred approach will allow episodic memory to be considered as part of an integrated system. Episodic memory can thus be fundamentally linked to the cognitive system as a whole, enabling it to play a central part in the on-going development of interactive behaviour.\n
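The ALIZ-E system itself is not reproduced in this entry, so the Python sketch below is only a minimal illustration of the behavioural reading of episodic memory described above; all names (Episode, EpisodicMemory, suggest_activity) are hypothetical. Encounters are stored as discrete episodes, recalled per user, and used to bias the robot's next interaction.

from dataclasses import dataclass, field

@dataclass
class Episode:
    user: str
    activity: str
    outcome: float          # e.g. an engagement score from the encounter

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def record(self, user, activity, outcome):
        self.episodes.append(Episode(user, activity, outcome))

    def recall_user(self, user):
        # Recall prior encounters with this particular user.
        return [e for e in self.episodes if e.user == user]

    def suggest_activity(self, user, default="quiz"):
        # Adapt behaviour on the basis of previous events: re-offer the
        # activity that previously went best with this child.
        past = self.recall_user(user)
        return max(past, key=lambda e: e.outcome).activity if past else default

memory = EpisodicMemory()
memory.record("anna", "quiz", 0.4)
memory.record("anna", "dance", 0.9)
print(memory.suggest_activity("anna"))   # -> "dance"
print(memory.suggest_activity("ben"))    # -> "quiz" (no prior encounters)

Note that a plain record store like this is exactly the "discrete component" design the abstract argues against; in the integrated, memory-centred view the same information would be woven into the wider cognitive system, which a short sketch cannot capture.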
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A developmental perspective on memory-centred cognition for social interaction.\n \n \n \n\n\n \n Wood, R.; Baxter, P.; and Belpaeme, T.\n\n\n \n\n\n\n In Johansson, B.; Sahin, E.; and Balkenius, C., editor(s), Proceedings of the Tenth International Conference on Epigenetic Robotics, pages 183–184, Örenäs Slott, Sweden, 2010. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Wood2010,\n  abstract = {Summary: ALIZ-E (Adaptive Strategies for Sustainable Long-term Social Interaction) is a project aiming to explore the implementation of meaningful, temporally extended social interactions between humans and robots. A core component of the systems devised to supply this functionality will be episodic memory allowing the robot to recall prior encounters with a user and to adapt its behaviour on the basis of previous events. The commitment to a developmental memory-centred approach allows episodic memory to be considered as part of an integrated system, rather than a discrete component. Episodic memory is thus fundamentally linked to the cognitive system as a whole, enabling it to play a central part in the on-going development of interactive behaviour.},\n  address = {{\"{O}}ren{\"{a}}s Slott, Sweden},\n  author = {Wood, Rachel and Baxter, Paul and Belpaeme, Tony},\n  booktitle = {Proceedings of the Tenth International Conference on Epigenetic Robotics},\n  editor = {Johansson, B. and Sahin, E. and Balkenius, C.},\n  pages = {183--184},\n  title = {{A developmental perspective on memory-centred cognition for social interaction}},\n  year = {2010}\n}\n\n
\n
\n\n\n
\n Summary: ALIZ-E (Adaptive Strategies for Sustainable Long-term Social Interaction) is a project aiming to explore the implementation of meaningful, temporally extended social interactions between humans and robots. A core component of the systems devised to supply this functionality will be episodic memory allowing the robot to recall prior encounters with a user and to adapt its behaviour on the basis of previous events. The commitment to a developmental memory-centred approach allows episodic memory to be considered as part of an integrated system, rather than a discrete component. Episodic memory is thus fundamentally linked to the cognitive system as a whole, enabling it to play a central part in the on-going development of interactive behaviour.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2009\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n The role of internal value systems for a memory-based robotic architecture.\n \n \n \n\n\n \n Baxter, P.; and Browne, W.\n\n\n \n\n\n\n In Cañamero, L.; Oudeyer, P.; and Balkenius, C., editor(s), Proceedings of the Ninth International Conference on Epigenetic Robotics, pages 195–196, Venice, Italy, 2009. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2009,\n  address = {Venice, Italy},\n  author = {Baxter, Paul and Browne, Will},\n  booktitle = {Proceedings of the Ninth International Conference on Epigenetic Robotics},\n  editor = {Ca{\\~{n}}amero, L. and Oudeyer, P.-Y. and Balkenius, C.},\n  pages = {195--196},\n  title = {{The role of internal value systems for a memory-based robotic architecture}},\n  year = {2009}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Perspectives on Robotic Embodiment from a Developmental Cognitive Architecture.\n \n \n \n\n\n \n Baxter, P.; and Browne, W.\n\n\n \n\n\n\n In International Conference on Adaptive and Intelligent Systems (ICAIS'09), pages 3–8, Klagenfurt, Austria, 2009. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2009a,\n  address = {Klagenfurt, Austria},\n  author = {Baxter, Paul and Browne, Will},\n  booktitle = {International Conference on Adaptive and Intelligent Systems (ICAIS'09)},\n  doi = {10.1109/ICAIS.2009.11},\n  pages = {3--8},\n  title = {{Perspectives on Robotic Embodiment from a Developmental Cognitive Architecture}},\n  year = {2009}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n P-Controller as an Expert System for Manoeuvring Rudderless Sail Boats.\n \n \n \n\n\n \n Benatar, N.; Qadir, O.; Owen, J.; Baxter, P.; and Neal, M.\n\n\n \n\n\n\n In UK Workshop on Computational Intelligence (UKCI'09), Nottingham, U.K., 2009. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Benatar2009,\n  address = {Nottingham, U.K.},\n  author = {Benatar, Naisan and Qadir, Omer and Owen, Jenny and Baxter, Paul and Neal, Mark},\n  booktitle = {UK Workshop on Computational Intelligence (UKCI'09)},\n  title = {{P-Controller as an Expert System for Manoeuvring Rudderless Sail Boats}},\n  year = {2009}\n}\n\n
\n
\n\n\n\n
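No abstract is carried for this entry, but the control law named in the title is standard and worth making concrete: a proportional (P) controller outputs a command proportional to the error between setpoint and measurement, u = Kp * e. The Python sketch below applies that law to heading error; how the paper actually maps the command onto sail actuation for a rudderless boat is not described here, so that part is purely hypothetical.

def p_control(error, kp):
    # The textbook proportional law: command is proportional to error.
    return kp * error

def wrap_angle(deg):
    # Keep heading errors in [-180, 180) so the boat turns the short way.
    return (deg + 180.0) % 360.0 - 180.0

heading, target, kp = 350.0, 10.0, 0.5
error = wrap_angle(target - heading)   # +20 degrees, not -340
sail_command = p_control(error, kp)    # hypothetical mapping to sail trim
print(error, sail_command)             # 20.0 10.0

The angle wrap is the one non-obvious step: without it, a heading of 350 degrees and a target of 10 degrees would produce a -340 degree error and the controller would command a long turn the wrong way round.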
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2008\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Towards a developmental memory-based and embodied cognitive architecture.\n \n \n \n\n\n \n Baxter, P.; and Browne, W.\n\n\n \n\n\n\n In Schlesinger, M.; Berthouze, L.; and Balkenius, C., editor(s), Proceedings of the Eighth International Conference on Epigenetic Robotics, pages 137–138, Brighton, U.K., 2008. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Baxter2008,\n  address = {Brighton, U.K.},\n  author = {Baxter, Paul and Browne, Will},\n  booktitle = {Proceedings of the Eighth International Conference on Epigenetic Robotics},\n  editor = {Schlesinger, M. and Berthouze, L. and Balkenius, C.},\n  pages = {137--138},\n  title = {{Towards a developmental memory-based and embodied cognitive architecture}},\n  year = {2008}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Memory-based Embodied Cognition: a computational architecture.\n \n \n \n \n\n\n \n Baxter, P.; and Browne, W.\n\n\n \n\n\n\n Technical Report University of Reading, Reading, U.K., 2008.\n \n\n\n\n
\n\n\n\n \n \n \"Memory-basedPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@techreport{Baxter2008a,\n  abstract = {At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture, the Memory-Based Cognitive (MBC) architecture, based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.},\n  address = {Reading, U.K.},\n  annote = {Prepared for, and presented at, the 1st School of Systems Engineering Research Conference (2008)},\n  author = {Baxter, Paul and Browne, Will},\n  institution = {University of Reading},\n  title = {{Memory-based Embodied Cognition: a computational architecture}},\n  url = {http://centaur.reading.ac.uk/1069/1/SSE_paper_v2.doc},\n  year = {2008}\n}\n\n
\n
\n\n\n
\n At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture, the Memory-Based Cognitive (MBC) architecture, based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2006\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n An Autonomous Explore/Exploit Strategy.\n \n \n \n\n\n \n McMahon, A.; Scott, D.; Baxter, P.; and Browne, W.\n\n\n \n\n\n\n In Proceedings of AISB'06, pages 192–201, Bristol, U.K., 2006. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{McMahon2006,\n  abstract = {In reinforcement learning problems it has been considered that neither exploitation nor exploration can be pursued exclusively without failing at the task. The optimal balance between exploring and exploiting changes as the training progresses due to the increasing amount of learnt knowledge. This shift in balance is not known a priori so an autonomous online adjustment is sought. Human beings manage this balance through logic and emotions based on feedback from the environment. The XCS learning classifier system uses a fixed explore/exploit balance, but does keep multiple statistics about its performance and interaction in an environment. Utilizing these statistics in a manner analogous to logic/emotion, autonomous adjustment of the explore/exploit balance was achieved. This resulted in reduced exploration in simple environments, which increased with the complexity of the problem domain. It also prevented unsuccessful 'loop' exploit trials and suggests a method of dynamic choice in goal setting.},\n  address = {Bristol, U.K.},\n  author = {McMahon, Alex and Scott, Dan and Baxter, Paul and Browne, Will},\n  booktitle = {Proceedings of AISB'06},\n  pages = {192--201},\n  title = {{An Autonomous Explore/Exploit Strategy}},\n  year = {2006}\n}\n
\n
\n\n\n
\n In reinforcement learning problems it has been considered that neither exploitation nor exploration can be pursued exclusively without failing at the task. The optimal balance between exploring and exploiting changes as the training progresses due to the increasing amount of learnt knowledge. This shift in balance is not known a priori so an autonomous online adjustment is sought. Human beings manage this balance through logic and emotions based on feedback from the environment. The XCS learning classifier system uses a fixed explore/exploit balance, but does keep multiple statistics about its performance and interaction in an environment. Utilizing these statistics in a manner analogous to logic/emotion, autonomous adjustment of the explore/exploit balance was achieved. This resulted in reduced exploration in simple environments, which increased with the complexity of the problem domain. It also prevented unsuccessful 'loop' exploit trials and suggests a method of dynamic choice in goal setting.\n
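XCS itself is too large to reproduce here, and the paper's exact adjustment rule is not given in this entry, so the Python sketch below is a hypothetical stand-in (all names are invented) for the mechanism the abstract describes: the explore/exploit balance is adjusted online from the system's own running performance statistics, exploring more while recent reward stagnates and exploiting more as it improves.

import random
from collections import deque

class AdaptiveExploreExploit:
    """Hypothetical sketch of a statistics-driven explore/exploit balance."""
    def __init__(self, p_explore=0.5, step=0.02, window=20):
        self.p_explore = p_explore           # probability of an explore trial
        self.step = step
        self.rewards = deque(maxlen=window)  # running performance statistic

    def update(self, reward):
        self.rewards.append(reward)
        if len(self.rewards) < self.rewards.maxlen:
            return                           # not enough history yet
        half = self.rewards.maxlen // 2
        older = sum(list(self.rewards)[:half]) / half
        newer = sum(list(self.rewards)[half:]) / half
        # Stagnating or falling reward -> explore more; improvement -> exploit more.
        delta = self.step if newer <= older else -self.step
        self.p_explore = min(0.9, max(0.05, self.p_explore + delta))

ctrl = AdaptiveExploreExploit()
for _ in range(500):
    exploring = random.random() < ctrl.p_explore
    reward = random.random() + (0.0 if exploring else 0.3)   # toy reward signal
    ctrl.update(reward)
print(round(ctrl.p_explore, 2))

On a simple task where reward climbs quickly, a rule of this shape drives p_explore down, which is consistent with the reduced exploration in simple environments that the abstract reports.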
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);