2023 (3)
Reinforcement Learning with Temporal Logic Specifications for Regression Testing NPCs in Video Games.
Gutierrez-Sanchez, P.; Gomez-Martin, M. A.; Gonzalez-Calero, P. A.; and Gomez-Martin, P. P.
In IEEE Conference on Computational Intelligence and Games, CIG, 2023.
@inproceedings{gutierrez2023reinforcement,
  title     = {Reinforcement Learning with Temporal Logic Specifications for Regression Testing NPCs in Video Games},
  author    = {Gutierrez-Sanchez, Pablo and Gomez-Martin, Marco A. and Gonzalez-Calero, Pedro A. and Gomez-Martin, Pedro P.},
  booktitle = {IEEE Conference on Computational Intelligence and Games, CIG},
  year      = {2023},
  doi       = {10.1109/CoG57401.2023.10333208}
}
Reinforcement learning (RL) is a promising strategy for the development of autonomous agents in various control and optimization contexts, including the generation of autonomous players in video games. However, designing these agents, and in particular their reward functions for sequential decision-making, can be challenging for most users and often requires a tedious trial-and-error process until a satisfactory result is obtained. Consequently, these strategies are generally beyond reach for designers and quality control teams, who could potentially make use of them to generate automatic testing agents. This paper presents the application of reinforcement learning and behavioral descriptions given through a formal temporal logic task specification language (TLTL) for the design of NPCs that can be employed as surrogates for the player in such contexts. We argue that these techniques enable designers to naturally specify the way in which they would expect the final player to interact with a level and then generate a test that automatically verifies whether this strategy remains feasible throughout the development of the game. We include a series of experiments conducted on a custom 3D test environment developed in Unity3D that show that the proposed methodology provides a simple mechanism for training NPCs in settings that are commonly encountered in modern video games.
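The reward-from-specification idea in this abstract can be sketched concretely. TLTL formulas have quantitative semantics: each formula maps a trajectory to a real-valued robustness that is positive exactly when the trajectory satisfies it, so the robustness of a designer-written spec can serve as an episode reward. The following is a minimal sketch, not the paper's implementation; the spec, positions, and radii are illustrative assumptions.

# Minimal sketch (not the paper's code): quantitative semantics for the
# spec "eventually reach the goal AND always avoid the hazard".
# Goal/hazard positions and radii are illustrative assumptions.
import numpy as np

def dist(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def spec_robustness(trajectory, goal, hazard, goal_r=1.0, hazard_r=0.5):
    # "eventually near(goal)": best goal proximity achieved at any step
    reach = max(goal_r - dist(s, goal) for s in trajectory)
    # "always not near(hazard)": worst hazard clearance over all steps
    avoid = min(dist(s, hazard) - hazard_r for s in trajectory)
    # conjunction takes the minimum; > 0 iff the trajectory satisfies the spec
    return min(reach, avoid)

# The robustness of a rollout can then be used as its episode reward:
rollout = [(0.0, 0.0), (0.5, 0.2), (0.9, 0.1)]
reward = spec_robustness(rollout, goal=(1.0, 0.0), hazard=(0.5, 1.5))

An RL algorithm maximizing this reward is thereby pushed toward trajectories that satisfy the designer's specification, without hand-tuning a per-step reward.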
Diseño iterativo y colaborativo de Educational Escape Rooms en el Museo Nacional de Ciencias Naturales.
Gutiérrez-Sánchez, P.; Camps-Ortueta, I.; Gómez-Martín, P. P.; and González-Calero, P. A.
In CEUR Workshop Proceedings, volume 3599, 2023.
@inproceedings{gutierrez2023diseno,
  title     = {Diseño iterativo y colaborativo de Educational Escape Rooms en el Museo Nacional de Ciencias Naturales},
  author    = {Gutiérrez-Sánchez, Pablo and Camps-Ortueta, Irene and Gómez-Martín, Pedro P. and González-Calero, Pedro A.},
  booktitle = {CEUR Workshop Proceedings},
  volume    = {3599},
  year      = {2023}
}
This article describes the three iterations carried out during the development of an Educational Escape Room at the National Museum of Natural Sciences, aimed at drawing students' attention to the impact of our consumption habits on the loss of biodiversity. The game designers collaborated with primary and secondary school teachers and museum educators to find good design solutions that bring students closer to concepts such as biodiversity and connect those concepts to their daily lives, helping them understand that their actions have an impact.
AI Behavior Graphs: A Visual Toolkit for Defining NPC Specifications for Regression Testing.
Gutiérrez-Sánchez, P.; Gómez-Martín, M. A.; González-Calero, P. A.; and Gómez-Martín, P. P.
In CEUR Workshop Proceedings, volume 3599, 2023.
@inproceedings{gutierrez2023behavior,
  title     = {AI Behavior Graphs: A Visual Toolkit for Defining NPC Specifications for Regression Testing},
  author    = {Gutiérrez-Sánchez, Pablo and Gómez-Martín, Marco A. and González-Calero, Pedro A. and Gómez-Martín, Pedro P.},
  booktitle = {CEUR Workshop Proceedings},
  volume    = {3599},
  year      = {2023}
}
Reinforcement learning (RL) offers a promising approach for developing autonomous agents in various domains, including the creation of in-game characters. However, crafting these agents, and particularly designing reward functions for sequential decision-making, remains a significant challenge, often involving iterative trial-and-error processes until achieving satisfactory results. Consequently, these strategies often elude game designers and quality control teams, who could otherwise use them to automate testing procedures. This paper extends our prior work by introducing “AI Behavior Graphs,” a visual toolkit designed to simplify the creation of behavior specifications for NPCs (Non-Player Characters). Our approach provides an intuitive graphical interface that enables designers to express their expectations for player-NPC interactions within a game level. These specifications are automatically translated into both Linear Temporal Logic (LTL) and Rabin automata, which can in turn be leveraged to dynamically generate reward functions during agent training. This not only expedites NPC development but also makes RL-based methodologies more accessible to a broader audience of game designers and quality assurance teams. Furthermore, it underscores a critical aspect of our approach: the ability to utilize these agents for playtesting game levels. This application ensures continuous validation of designer expectations throughout the development cycle, enhancing the overall game design process.
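To make the final stage of the pipeline in this abstract concrete, here is a hedged sketch of an automaton tracking progress through a sequential LTL-style task and emitting shaped rewards on its transitions. A plain deterministic automaton stands in for the Rabin automaton, and the task, event names, and reward values are illustrative assumptions rather than the toolkit's actual output.

# Sketch of automaton-based reward generation for the sequential task
# "first pick up the key, then reach the door". Names and values are
# hypothetical stand-ins, not the toolkit's generated automaton.

TRANSITIONS = {            # (current state, observed event) -> next state
    ("start", "key"): "has_key",
    ("has_key", "door"): "done",
}
ACCEPTING = {"done"}

class SpecReward:
    """Tracks spec progress alongside the environment during training."""
    def __init__(self):
        self.state = "start"

    def step(self, event: str) -> float:
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            return 0.0                               # no automaton progress
        self.state = nxt
        return 10.0 if nxt in ACCEPTING else 1.0     # shaped progress reward

# During a rollout, environment events drive the automaton:
spec = SpecReward()
rewards = [spec.step(e) for e in ["wander", "key", "wander", "door"]]
# rewards == [0.0, 1.0, 0.0, 10.0]

Because the reward is defined over automaton transitions rather than raw game state, the same trained agent can be rerun after each build to check that the specified interaction is still achievable, which is the regression-testing use the abstract describes.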
2022 (1)
Liquid Snake: a test environment for video game testing agents.
Gutiérrez-Sánchez, P.; Gómez-Martín, M. A.; González-Calero, P. A.; and Gómez-Martín, P. P.
In CEUR Workshop Proceedings, volume 3305, 2022.
@inproceedings{gutierrez2022liquid,
  title     = {Liquid Snake: a test environment for video game testing agents},
  author    = {Gutiérrez-Sánchez, Pablo and Gómez-Martín, Marco A. and González-Calero, Pedro A. and Gómez-Martín, Pedro P.},
  booktitle = {CEUR Workshop Proceedings},
  volume    = {3305},
  year      = {2022}
}
In recent years, a number of benchmarks and test environments have been proposed for research on AI algorithms, making it possible to evaluate and accelerate development in this field. However, there is a lack of environments for evaluating the feasibility of such algorithms in games under continuous development, in particular for regression testing and automatic error detection in commercial video games. In this paper we propose a new test bed, Liquid Snake: a 3D third-person stealth game prototype designed to conveniently integrate autonomous agent-driven quality control mechanisms into the development life cycle of a video game, built on the open-source ML-Agents library in Unity3D. Focusing on the problem of regression testing against the unexpected changes that altering enemy AI can induce in a game, we argue that this environment lends itself to use as a sample test environment for automated QA methodologies, thanks to the complexity and variety of NPC behaviors naturally present in stealth titles.
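Since the prototype is built on ML-Agents, an external testing agent can drive a compiled Unity build through the library's Python interface. The sketch below uses the real mlagents-envs low-level API, but the build name and the random stand-in policy are assumptions for illustration, not part of the Liquid Snake distribution.

# Hedged sketch: stepping a Unity build (e.g. a Liquid Snake binary) from
# Python with mlagents-envs. "LiquidSnake" is a hypothetical build path.
from mlagents_envs.environment import UnityEnvironment

env = UnityEnvironment(file_name="LiquidSnake")
env.reset()
behavior_name = list(env.behavior_specs)[0]   # first registered agent behavior
spec = env.behavior_specs[behavior_name]

for _ in range(100):
    decision_steps, terminal_steps = env.get_steps(behavior_name)
    # Random actions stand in for a trained testing agent's policy.
    actions = spec.action_spec.random_action(len(decision_steps))
    env.set_actions(behavior_name, actions)
    env.step()
env.close()

A QA pipeline would replace the random actions with a trained policy and assert on the observations returned in decision_steps and terminal_steps.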