2024 (5)

Look: AI at Work! - Analysing Key Aspects of AI-support at the Work Place.
Schiffer, S.; Rothermel, A. M.; Ferrein, A.; and Rosenthal-von der Pütten, A.
In Yamshchikov, I.; Meißner, P.; and Rezagholi, S., editor(s), Workshop on Human-Machine Interaction (HuMaIn) held at KI 2024, 2024. To appear.

@InProceedings{ Schiffer-etAl_KI2024HuMaIn_Look-AI-at-Work,
  author       = {Stefan Schiffer and Anna Milena Rothermel and Alexander Ferrein and Astrid {Rosenthal-von der P{\"u}tten}},
  title        = {Look: AI at Work! - Analysing Key Aspects of AI-support at the Work Place},
  booktitle    = {Workshop on Human-Machine Interaction (HuMaIn) held at KI 2024},
  location     = {W{\"u}rzburg, Germany},
  OPTpages     = {--},
  year         = {2024},
  editor       = {Ivan Yamshchikov and Pascal Mei{\ss}ner and Sharwin Rezagholi},
  keywords     = {WIRKsam, artificial intelligence, AI, Work, Social Psychology},
  abstract     = {In this paper we present an analysis of technological and psychological
                  factors of applying artificial intelligence (AI) at the work place. We do
                  so for twelve application cases in the context of a project where AI is
                  integrated at work places and in work systems of the future. From a
                  technological point of view we mainly look at the areas of AI that the
                  applications are concerned with. This allows us to formulate
                  recommendations in terms of what to look at in developing an AI
                  application and what to pay attention to with regard to building AI
                  literacy with different stakeholders using the system. This includes the
                  importance of high-quality data for training learning-based systems as
                  well as the integration of human expertise, especially with
                  knowledge-based systems. In terms of the psychological factors we derive
                  research questions to investigate in the development of AI-supported
                  work systems and to consider in future work, mainly concerned with
                  topics such as acceptance, openness, and trust in an AI system.},
  note         = {to appear},
}

In this paper we present an analysis of technological and psychological factors of applying artificial intelligence (AI) at the work place. We do so for twelve application cases in the context of a project where AI is integrated at work places and in work systems of the future. From a technological point of view we mainly look at the areas of AI that the applications are concerned with. This allows us to formulate recommendations in terms of what to look at in developing an AI application and what to pay attention to with regard to building AI literacy with different stakeholders using the system. This includes the importance of high-quality data for training learning-based systems as well as the integration of human expertise, especially with knowledge-based systems. In terms of the psychological factors we derive research questions to investigate in the development of AI-supported work systems and to consider in future work, mainly concerned with topics such as acceptance, openness, and trust in an AI system.

Towards Conceptually Elevating Modern Concepts of Operational Design Domains and Implications for Operating in Unstructured Environments.
Eichenbaum, J.; Bracht, L.; Schulte-Tigges, J.; Reke, M.; Ferrein, A.; and Scholl, I.
In Yilmaz, M.; Clarke, P.; Riel, A.; Messnarz, R.; Greiner, C.; and Peisl, T., editor(s), Systems, Software and Services Process Improvement (EuroSPI), pages 172–185, Cham, 2024. Springer Nature Switzerland.

@InProceedings{ Eichenbaum-etAl_EuroSPI2024_Towards-Conceptually-Elevating-ODDs,
  author       = {Eichenbaum, Julian and Bracht, Leonard and Schulte-Tigges, Joschua and
                  Reke, Michael and Ferrein, Alexander and Scholl, Ingrid},
  editor       = {Yilmaz, Murat and Clarke, Paul and Riel, Andreas and
                  Messnarz, Richard and Greiner, Christian and Peisl, Thomas},
  title        = {{Towards Conceptually Elevating Modern Concepts of {O}perational {D}esign {D}omains
                  and Implications for Operating in Unstructured Environments}},
  booktitle    = {Systems, Software and Services Process Improvement (EuroSPI)},
  year         = {2024},
  publisher    = {Springer Nature Switzerland},
  address      = {Cham},
  pages        = {172--185},
  doi          = {10.1007/978-3-031-71142-8_13},
  url_springer = {https://link.springer.com/chapter/10.1007/978-3-031-71142-8_13},
  abstract     = {This paper explores the basic concepts of Operational Design Domains
                  (ODDs) in the field of autonomous driving. We address the intricacies of
                  different scenario descriptions and promote the communication of system
                  requirements and operational constraints in the context of Automated
                  Driving Systems (ADSs). Ongoing standardization efforts highlight the
                  recognition of the importance of ODDs as a tool to manage the complexity
                  of an ADS in accurately defining operational boundaries and conditions,
                  particularly in structured environments. In line with this, our work
                  explores the conceptual integration of multiple ODDs within an ADS to
                  enable operation across different domains. Drawing on the existing
                  literature on ODD extension concepts and leveraging insights from our
                  research efforts, we strive for exemplary adaptation of operations in
                  unstructured environments such as hybrid mines. A key focus is the
                  translation of a solution that has been successfully tested in the
                  context of hybrid mines and structured terrain into modern ODD
                  frameworks. In particular, we focus on taxonomy as a fundamental element
                  of an ODD framework. Through comparative analysis and evaluation of
                  existing taxonomies, we aim to provide insights into the configuration
                  of ODDs for both structured and unstructured environments, thereby
                  contributing to their broader implementation in the dynamic landscape of
                  autonomous driving technologies.},
  isbn         = {978-3-031-71142-8},
}

This paper explores the basic concepts of Operational Design Domains (ODDs) in the field of autonomous driving. We address the intricacies of different scenario descriptions and promote the communication of system requirements and operational constraints in the context of Automated Driving Systems (ADSs). Ongoing standardization efforts highlight the recognition of the importance of ODDs as a tool to manage the complexity of an ADS in accurately defining operational boundaries and conditions, particularly in structured environments. In line with this, our work explores the conceptual integration of multiple ODDs within an ADS to enable operation across different domains. Drawing on the existing literature on ODD extension concepts and leveraging insights from our research efforts, we strive for exemplary adaptation of operations in unstructured environments such as hybrid mines. A key focus is the translation of a solution that has been successfully tested in the context of hybrid mines and structured terrain into modern ODD frameworks. In particular, we focus on taxonomy as a fundamental element of an ODD framework. Through comparative analysis and evaluation of existing taxonomies, we aim to provide insights into the configuration of ODDs for both structured and unstructured environments, thereby contributing to their broader implementation in the dynamic landscape of autonomous driving technologies.

Conceptualization of Demonstrators for Human-Technology Interaction with a Three-Layer Model.
Altepost, A.; Elaroussi, F.; Hirsch, L.; Merx, W.; Oppermann, L.; Rosenthal-von der Pütten, A.; Rothermel, A. M.; and Schiffer, S.
In Proceedings of the 22nd Triennial Congress of the International Ergonomics Association (IEA 2024), Springer Series in Design and Innovation, 2024. Springer Cham. To appear.

@inproceedings{Altepost-etAl_IEA2024_Conceptualization-of-Demonstrators,
  author       = {Andrea Altepost and Farah Elaroussi and Linda Hirsch and Wolfgang Merx and
                  Leif Oppermann and Astrid {Rosenthal-von der P{\"u}tten} and
                  Anna Milena Rothermel and Stefan Schiffer},
  title        = {Conceptualization of Demonstrators for Human-Technology Interaction with a Three-Layer Model},
  booktitle    = {Proceedings of the 22nd Triennial Congress of the International Ergonomics Association (IEA 2024)},
  OPTpages     = {},
  series       = {Springer Series in Design and Innovation},
  publisher    = {Springer Cham},
  OPTaddress   = {},
  year         = {2024},
  keywords     = {WIRKsam, Stakeholder Involvement, Demonstrators, Human-Technology Interaction, Artificial Intelligence, AI},
  abstract     = {We present a three-layer model of stakeholder involvement, developed as
                  part of the ongoing WIRKsam project. WIRKsam creates or modifies
                  socio-technical work systems by integrating artificial intelligence (AI)
                  in a participatory fashion and in such a way that all stakeholders
                  benefit from better conditions of labor. While the technological
                  elements of these changes are easy to highlight to others, it is
                  difficult to convey the human-related and organizational changes and
                  their benefits. Therefore, we aim to develop demonstrators in the field
                  of human factors which should showcase the transformation of work and
                  not just technology, using extended reality (XR) in a transdisciplinary
                  setting.},
  note         = {to appear},
}

We present a three-layer model of stakeholder involvement, developed as part of the ongoing WIRKsam project. WIRKsam creates or modifies socio-technical work systems by integrating artificial intelligence (AI) in a participatory fashion and in such a way that all stakeholders benefit from better conditions of labor. While the technological elements of these changes are easy to highlight to others, it is difficult to convey the human-related and organizational changes and their benefits. Therefore, we aim to develop demonstrators in the field of human factors which should showcase the transformation of work and not just technology, using extended reality (XR) in a transdisciplinary setting.

VR-Enhanced Teleoperation System for a Semi-autonomous Mulitplatform All-Terrain Exploration Vehicle.
Scholl, D.; Meier, J.; and Reke, M.
In Chen, J. Y. C.; and Fragomeni, G., editor(s), Virtual, Augmented and Mixed Reality, pages 268–285, Cham, 2024. Springer Nature Switzerland.

@InProceedings{ Scholl:etAl_HCII2024_VR-Enhanced-TeleOp,
  author       = {Scholl, Daniel and Meier, Jannis and Reke, Michael},
  editor       = {Chen, Jessie Y. C. and Fragomeni, Gino},
  title        = {{VR-Enhanced Teleoperation System for a Semi-autonomous Mulitplatform All-Terrain Exploration Vehicle}},
  booktitle    = {Virtual, Augmented and Mixed Reality},
  year         = {2024},
  publisher    = {Springer Nature Switzerland},
  address      = {Cham},
  pages        = {268--285},
  doi          = {10.1007/978-3-031-61047-9_18},
  url_springer = {https://link.springer.com/chapter/10.1007/978-3-031-61047-9_18},
  isbn         = {978-3-031-61047-9},
  abstract     = {This paper presents research on the integration of a virtual reality
                  (VR) enhanced teleoperation system within the semi-autonomous
                  multi-platform all-terrain exploration vehicle (MAEV). The main
                  objective is to improve the operator's control and situational awareness
                  when exploring difficult terrain. The methodology includes an in-depth
                  analysis of the MAEV project, describing the configuration of the
                  teleoperation system and careful VR integration processes.},
}

This paper presents research on the integration of a virtual reality (VR) enhanced teleoperation system within the semi-autonomous multi-platform all-terrain exploration vehicle (MAEV). The main objective is to improve the operator's control and situational awareness when exploring difficult terrain. The methodology includes an in-depth analysis of the MAEV project, describing the configuration of the teleoperation system and careful VR integration processes.

Approach for the Identification of Requirements on the Design of AI-supported Work Systems (in Problem-based Projects).
Harlacher, M.; Altepost, A.; Ferrein, A.; Hansen-Ampah, A.; Merx, W.; Niehues, S.; Schiffer, S.; and Shahinfar, F. N.
In Lausberg, I.; and Vogelsang, M., editor(s), AI in Business and Economics, chapter 7, pages 87–100. De Gruyter, Berlin, Boston, 2024.

@incollection{ Harlacher-etAl_EPAI2023_Identification-of-Requirements,
  chapter      = {7},
  title        = {Approach for the Identification of Requirements on the Design of AI-supported Work Systems (in Problem-based Projects)},
  author       = {Markus Harlacher and Andrea Altepost and Alexander Ferrein and Adjan Hansen-Ampah and
                  Wolfgang Merx and Sina Niehues and Stefan Schiffer and Fatemeh Nasim Shahinfar},
  booktitle    = {AI in Business and Economics},
  editor       = {Isabel Lausberg and Michael Vogelsang},
  publisher    = {De Gruyter},
  address      = {Berlin, Boston},
  pages        = {87--100},
  doi          = {10.1515/9783110790320-007},
  url_doi      = {https://doi.org/10.1515/9783110790320-007},
  url_pdf      = {https://www.degruyter.com/document/doi/10.1515/9783110790320-007/pdf?licenseType=open-access},
  isbn         = {9783110790320},
  year         = {2024},
  keywords     = {WIRKsam, business understanding, requirements, process model, participation, implementation of AI-systems, Artificial Intelligence, AI},
  abstract     = {To successfully develop and introduce concrete artificial intelligence
                  (AI) solutions in operational practice, a comprehensive process model is
                  being tested in the WIRKsam joint project. It is based on a methodical
                  approach that integrates human, technical and organisational aspects and
                  involves employees in the process. The chapter focuses on the procedure
                  for identifying requirements for a work system that is implementing AI
                  in problem-driven projects and for selecting appropriate AI methods.
                  This means that the use case has already been narrowed down at the
                  beginning of the project and must be completely defined in the
                  following. Initially, the existing preliminary work is presented. Based
                  on this, an overview of all procedural steps and methods is given. All
                  methods are presented in detail and good practice approaches are shown.
                  Finally, a reflection of the developed procedure based on the
                  application in nine companies is given.},
}

To successfully develop and introduce concrete artificial intelligence (AI) solutions in operational practice, a comprehensive process model is being tested in the WIRKsam joint project. It is based on a methodical approach that integrates human, technical and organisational aspects and involves employees in the process. The chapter focuses on the procedure for identifying requirements for a work system that is implementing AI in problem-driven projects and for selecting appropriate AI methods. This means that the use case has already been narrowed down at the beginning of the project and must be completely defined in the following. Initially, the existing preliminary work is presented. Based on this, an overview of all procedural steps and methods is given. All methods are presented in detail and good practice approaches are shown. Finally, a reflection of the developed procedure based on the application in nine companies is given.

2023 (21)

Towards a Lifelong Mapping Approach Using Lanelet 2 for Autonomous Open-Pit Mine Operations.
Eichenbaum, J.; Nikolovski, G.; Mülhens, L.; Reke, M.; Ferrein, A.; and Scholl, I.
In 19th IEEE International Conference on Automation Science and Engineering (CASE), pages 1–8, Aug 2023.

@InProceedings{Eichenbaum-etAl_CASE2023_Towards-Lifelong-Mapping,
  author       = {Eichenbaum, Julian and Nikolovski, Gjorgji and M{\"u}lhens, Leon and
                  Reke, Michael and Ferrein, Alexander and Scholl, Ingrid},
  title        = {Towards a Lifelong Mapping Approach Using Lanelet 2 for Autonomous Open-Pit Mine Operations},
  booktitle    = {19th IEEE International Conference on Automation Science and Engineering (CASE)},
  year         = {2023},
  month        = {Aug},
  day          = {26-30},
  location     = {Auckland, New Zealand},
  pages        = {1--8},
  doi          = {10.1109/CASE56687.2023.10260526},
  url_ieeexpl  = {https://ieeexplore.ieee.org/abstract/document/10260526},
  ISSN         = {2161-8089},
  keywords     = {Geometry;Shape;Navigation;Roads;Operating systems;Semantics;Object detection},
  abstract     = {Autonomous agents require rich environment models for fulfilling their
                  missions. High-definition maps are a well-established map format which
                  allows for representing semantic information besides the usual geometric
                  information of the environment. These are, for instance, road shapes,
                  road markings, traffic signs or barriers. The geometric resolution of HD
                  maps can be as precise as centimetre level. In this paper, we report on
                  our approach of using HD maps as a map representation for autonomous
                  load-haul-dump vehicles in open-pit mining operations. As the mine
                  undergoes constant change, we also need to constantly update the map.
                  Therefore, we follow a lifelong mapping approach for updating the HD
                  maps based on camera-based object detection and GPS data. We show our
                  mapping algorithm based on the Lanelet 2 map format and show our
                  integration with the navigation stack of the Robot Operating System. We
                  present experimental results on our lifelong mapping approach from a
                  real open-pit mine.},
}

Autonomous agents require rich environment models for fulfilling their missions. High-definition maps are a well-established map format which allows for representing semantic information besides the usual geometric information of the environment. These are, for instance, road shapes, road markings, traffic signs or barriers. The geometric resolution of HD maps can be as precise as centimetre level. In this paper, we report on our approach of using HD maps as a map representation for autonomous load-haul-dump vehicles in open-pit mining operations. As the mine undergoes constant change, we also need to constantly update the map. Therefore, we follow a lifelong mapping approach for updating the HD maps based on camera-based object detection and GPS data. We show our mapping algorithm based on the Lanelet 2 map format and show our integration with the navigation stack of the Robot Operating System. We present experimental results on our lifelong mapping approach from a real open-pit mine.

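The update loop sketched in this abstract (detect an object, localise it, attach it to the map, persist the map) can be illustrated with the Lanelet2 Python bindings. This is a minimal sketch, not the authors' implementation; the file names, map origin and the beacon attribute are assumptions made for illustration.

# Minimal lifelong-mapping sketch with the Lanelet2 Python bindings.
import lanelet2
from lanelet2.core import BasicPoint2d
from lanelet2.projection import UtmProjector

projector = UtmProjector(lanelet2.io.Origin(50.77, 6.08))   # assumed map origin
lanelet_map = lanelet2.io.load("mine_map.osm", projector)   # hypothetical map file

# Suppose perception reported a navigation beacon at this local (x, y) position.
beacon_xy = BasicPoint2d(123.4, 56.7)

# Assign the detection to the closest lanelet and record it as an attribute.
dist, nearest = lanelet2.geometry.findNearest(lanelet_map.laneletLayer, beacon_xy, 1)[0]
nearest.attributes["beacon"] = "yes"

# Persist the updated map so the next mission starts from current knowledge.
lanelet2.io.write("mine_map_updated.osm", lanelet_map, projector)
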
Extraction of Semantically Rich High-Definition Maps from Spatial Representations of an Open Pit Mine.
Braining, A.; Nikolovski, G.; Reke, M.; and Ferrein, A.
In 26th IEEE International Conference on Intelligent Transportation Systems (ITSC), pages 4032–4039, Sep. 2023.

@InProceedings{Braining-etAl_ITSC2023_Extraction-HD-Maps,
  author       = {Braining, Andreas and Nikolovski, Gjorgji and Reke, Michael and Ferrein, Alexander},
  title        = {Extraction of Semantically Rich High-Definition Maps from Spatial Representations of an Open Pit Mine},
  booktitle    = {26th IEEE International Conference on Intelligent Transportation Systems (ITSC)},
  pages        = {4032--4039},
  year         = {2023},
  month        = {Sep.},
  day          = {24-28},
  location     = {Bilbao, Spain},
  doi          = {10.1109/ITSC57777.2023.10422269},
  url_ieeexpl  = {https://ieeexplore.ieee.org/abstract/document/10422269},
  ISSN         = {2153-0017},
  keywords     = {Point cloud compression;Measurement;Navigation;Urban areas;Spatial databases;Optimization;Testing},
  abstract     = {Hauling of material by automated vehicles can be one of the sustainable
                  solutions for the increasing economic and environmental challenges in
                  the mining industry. For this, vehicles of various sizes must drive
                  through an ever-changing environment, due to the destructive nature of
                  resource extraction. Therefore, we present our solution for
                  automatically creating semantically rich high-definition maps, which can
                  be used for efficient and safe navigation within the mine. In contrast
                  to navigation on purely spatial geometry-based data like point clouds,
                  high-definition maps have the advantage that semantic information is
                  recognised by the navigation system. But manually created
                  high-definition maps need frequent updates due to terrain changes, which
                  makes them inflexible for most applications. In this paper, we show how
                  Lanelet2 maps can be generated automatically from point clouds with a
                  multistep algorithm and how these maps are adjusted to the long-term
                  changes of the environment. For evaluation, we present our findings in
                  some real-world examples and synthetically generated point clouds of the
                  main edge-case situations.},
}

Hauling of material by automated vehicles can be one of the sustainable solutions for the increasing economic and environmental challenges in the mining industry. For this, vehicles of various sizes must drive through an ever-changing environment, due to the destructive nature of resource extraction. Therefore, we present our solution for automatically creating semantically rich high-definition maps, which can be used for efficient and safe navigation within the mine. In contrast to navigation on purely spatial geometry-based data like point clouds, high-definition maps have the advantage that semantic information is recognised by the navigation system. But manually created high-definition maps need frequent updates due to terrain changes, which makes them inflexible for most applications. In this paper, we show how Lanelet2 maps can be generated automatically from point clouds with a multistep algorithm and how these maps are adjusted to the long-term changes of the environment. For evaluation, we present our findings in some real-world examples and synthetically generated point clouds of the main edge-case situations.

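As a toy illustration of one step such a point-cloud-to-map pipeline needs (this is not the paper's multistep algorithm): rasterise road-surface points into a 2D drivable-area mask and thin it to centreline pixels that could seed lanelet geometry. Assumes NumPy and scikit-image; the cell size is an arbitrary choice.

# Toy drivable-area-to-centreline step; not the authors' algorithm.
import numpy as np
from skimage.morphology import skeletonize

def centerline_mask(points: np.ndarray, cell: float = 0.5) -> np.ndarray:
    """points: (N, 3) array of x, y, z road-surface points."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) / cell).astype(int)          # grid coordinates per point
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True               # occupied = drivable surface
    return skeletonize(grid)                        # 1-pixel-wide centreline

# Example with synthetic points along a straight 50 m road segment:
pts = np.column_stack([np.repeat(np.linspace(0, 50, 200), 5),
                       np.tile(np.linspace(-2, 2, 5), 200),
                       np.zeros(1000)])
print(centerline_mask(pts).sum(), "centreline cells")
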
Using V2X Communications for Smart ODD Management of Highly Automated Vehicles.
Schulte-Tigges, J.; Rondinone, M.; Reke, M.; Wachenfeld, J.; and Kaszner, D.
In 26th IEEE International Conference on Intelligent Transportation Systems (ITSC), pages 3317–3322, Sep. 2023.

@InProceedings{ Schulte-Tigges-etAl_ITSC2023_Using-V2X-Comm,
  author       = {Schulte-Tigges, Joschua and Rondinone, Michele and Reke, Michael and Wachenfeld, Jan and Kaszner, Daniel},
  booktitle    = {26th IEEE International Conference on Intelligent Transportation Systems (ITSC)},
  title        = {Using {V2X} Communications for Smart {ODD} Management of Highly Automated Vehicles},
  year         = {2023},
  month        = {Sep.},
  day          = {24-28},
  location     = {Bilbao, Spain},
  pages        = {3317--3322},
  doi          = {10.1109/ITSC57777.2023.10422043},
  url_ieeexpl  = {https://ieeexplore.ieee.org/abstract/document/10422043},
  ISSN         = {2153-0017},
  keywords     = {Software architecture;Roads;Prototypes;Vehicle-to-everything;
                  Standards;Intelligent transportation systems;Vehicles},
  abstract     = {Hazardous events like stationary vehicles on the carriageway, being in
                  most cases unforeseeable and not always easy to detect, pose serious
                  challenges to automated vehicles (AVs). When such events occur, AVs have
                  to determine within limited time and space if permanence in their
                  Operational Design Domain (ODD) will be guaranteed or not, and how to
                  react to ensure passengers' safety and comfort. To cope with such events
                  more effectively and efficiently, in this paper we present a software
                  architecture and logic for Connected AVs (CAVs) that takes into account
                  hazard notification and road signage information from available standard
                  V2X messages to manage ODD-related decisions and reactions in an
                  anticipated way. Differently from earlier works, focusing more on
                  automated compliance to traffic management suggestions by the connected
                  road infrastructure, the presented solution emphasises the active role
                  of the CAV logic in taking suitable decisions based on individual and
                  local situations. We introduce a manoeuvre planner implementing distinct
                  state machines to react to different types of received V2X information.
                  In the resulting procedures, where the driver can also be involved, step
                  goals for a motion planner and path controller are generated. By means
                  of simulations, we demonstrate the benefits of the presented CAV
                  solution against a baseline AV model only relying on on-board sensors.
                  To prove its real-world feasibility, we also report the results of
                  integrating the proposed logic into a CAV prototype and running
                  real-world test-track experiments.},
}

Hazardous events like stationary vehicles on the carriageway, being in most cases unforeseeable and not always easy to detect, pose serious challenges to automated vehicles (AVs). When such events occur, AVs have to determine within limited time and space if permanence in their Operational Design Domain (ODD) will be guaranteed or not, and how to react to ensure passengers' safety and comfort. To cope with such events more effectively and efficiently, in this paper we present a software architecture and logic for Connected AVs (CAVs) that takes into account hazard notification and road signage information from available standard V2X messages to manage ODD-related decisions and reactions in an anticipated way. Differently from earlier works, focusing more on automated compliance to traffic management suggestions by the connected road infrastructure, the presented solution emphasises the active role of the CAV logic in taking suitable decisions based on individual and local situations. We introduce a manoeuvre planner implementing distinct state machines to react to different types of received V2X information. In the resulting procedures, where the driver can also be involved, step goals for a motion planner and path controller are generated. By means of simulations, we demonstrate the benefits of the presented CAV solution against a baseline AV model only relying on on-board sensors. To prove its real-world feasibility, we also report the results of integrating the proposed logic into a CAV prototype and running real-world test-track experiments.

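The paper's manoeuvre planner implements distinct state machines for different V2X message types. The following is a deliberately simplified sketch of that idea only; the message fields, distance threshold and states are invented for illustration and do not reproduce the paper's planner.

# Simplified sketch of a state machine reacting to a decoded V2X hazard message.
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    NOMINAL = auto()
    PREPARE_TOC = auto()       # transition of control announced to the driver
    MINIMUM_RISK = auto()      # driver unresponsive -> minimum risk manoeuvre

@dataclass
class HazardMsg:               # stand-in for a DENM-like hazard notification
    distance_m: float          # distance to the reported hazard
    blocks_odd: bool           # does the hazard end the current ODD?

def step(mode: Mode, msg: HazardMsg, driver_ack: bool) -> Mode:
    if mode is Mode.NOMINAL and msg.blocks_odd and msg.distance_m < 1000:
        return Mode.PREPARE_TOC           # announce takeover well before the hazard
    if mode is Mode.PREPARE_TOC:
        return Mode.NOMINAL if driver_ack else Mode.MINIMUM_RISK
    return mode

mode = step(Mode.NOMINAL, HazardMsg(distance_m=600, blocks_odd=True), driver_ack=False)
print(mode)   # Mode.PREPARE_TOC
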
Demonstrating a V2X Enabled System for Transition of Control and Minimum Risk Manoeuvre When Leaving the Operational Design Domain.
Schulte-Tigges, J.; Matheis, D.; Reke, M.; Walter, T.; and Kaszner, D.
In Krömker, H., editor(s), HCI in Mobility, Transport, and Automotive Systems (HCII 2023), volume 14048 of Lecture Notes in Computer Science, pages 200–210, Cham, 2023. Springer Nature Switzerland.

@InProceedings{ Schulte-Tigges-etAl_HCII2023_Demonstrating-V2X,
  author       = {Schulte-Tigges, Joschua and Matheis, Dominik and Reke, Michael and Walter, Thomas and Kaszner, Daniel},
  editor       = {Kr{\"o}mker, Heidi},
  title        = {Demonstrating a {V2X} Enabled System for Transition of Control and
                  Minimum Risk Manoeuvre When Leaving the Operational Design Domain},
  booktitle    = {HCI in Mobility, Transport, and Automotive Systems (HCII 2023)},
  year         = {2023},
  publisher    = {Springer Nature Switzerland},
  address      = {Cham},
  pages        = {200--210},
  series       = {Lecture Notes in Computer Science},
  volume       = {14048},
  url_springer = {https://link.springer.com/chapter/10.1007/978-3-031-35678-0_12},
  doi          = {10.1007/978-3-031-35678-0_12},
  abstract     = {Modern implementations of driver assistance systems are evolving from
                  pure driver assistance to independently acting automation systems.
                  Still, these systems do not cover the full vehicle usage range, also
                  called the operational design domain, and require the human driver as a
                  fall-back mechanism. Transition of control and potential minimum risk
                  manoeuvres are current research topics and will bridge the gap until
                  fully autonomous vehicles are available. The authors showed in a
                  demonstration that transition-of-control mechanisms can be further
                  improved by the use of communication technology. Receiving incident type
                  and position information via standardised vehicle-to-everything (V2X)
                  messages can improve driver safety and comfort. The connected and
                  automated vehicle's software framework can take this information to plan
                  areas where the driver should take back control by initiating a
                  transition of control, which can be followed by a minimum risk manoeuvre
                  in case of an unresponsive driver. This transition of control has been
                  implemented in a test vehicle and was presented to the public during
                  IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.},
  isbn         = {978-3-031-35678-0},
}

Modern implementations of driver assistance systems are evolving from pure driver assistance to independently acting automation systems. Still, these systems do not cover the full vehicle usage range, also called the operational design domain, and require the human driver as a fall-back mechanism. Transition of control and potential minimum risk manoeuvres are current research topics and will bridge the gap until fully autonomous vehicles are available. The authors showed in a demonstration that transition-of-control mechanisms can be further improved by the use of communication technology. Receiving incident type and position information via standardised vehicle-to-everything (V2X) messages can improve driver safety and comfort. The connected and automated vehicle's software framework can take this information to plan areas where the driver should take back control by initiating a transition of control, which can be followed by a minimum risk manoeuvre in case of an unresponsive driver. This transition of control has been implemented in a test vehicle and was presented to the public during IEEE IV2022 (IEEE Intelligent Vehicles Symposium) in Aachen, Germany.

Controlling a Fleet of Autonomous LHD Vehicles in Mining Operation.
Ferrein, A.; Nikolovski, G.; Limpert, N.; Reke, M.; Schiffer, S.; and Scholl, I.
In Küçük, S., editor(s), Multi-Robot Systems - New Advances, chapter 4. IntechOpen, Rijeka, 2023.

@incollection{Ferrein:etAl_INTECH2023_Controlling-a-Fleet,
  author       = {Alexander Ferrein and Gjorgji Nikolovski and Nicolas Limpert and
                  Michael Reke and Stefan Schiffer and Ingrid Scholl},
  title        = {{Controlling a Fleet of Autonomous LHD Vehicles in Mining Operation}},
  booktitle    = {Multi-Robot Systems - New Advances},
  publisher    = {IntechOpen},
  address      = {Rijeka},
  year         = {2023},
  editor       = {Serdar K{\"u}{\c{c}}{\"u}k},
  chapter      = {4},
  doi          = {10.5772/intechopen.113044},
  url          = {https://doi.org/10.5772/intechopen.113044},
  url_intech   = {https://www.intechopen.com/chapters/88580},
  abstract     = {In this chapter, we report on our activities to create and maintain a
                  fleet of autonomous load haul dump (LHD) vehicles for mining operations.
                  The ever-increasing demand for sustainable solutions and economic
                  pressure causes innovation in the mining industry just like in any other
                  branch. In this chapter, we present our approach to create a fleet of
                  autonomous special purpose vehicles and to control these vehicles in
                  mining operations. After an initial exploration of the site we deploy
                  the fleet. Every vehicle runs an instance of our ROS 2-based
                  architecture. The fleet is then controlled with a dedicated planning
                  module. We also use continuous environment monitoring to implement a
                  life-long mapping approach. In our experiments, we show that a
                  combination of synthetic, augmented and real training data improves our
                  classifier based on the deep learning network Yolo v5 to detect our
                  vehicles, persons and navigation beacons. The classifier was
                  successfully installed on the NVidia AGX-Drive platform, so that the
                  above-mentioned objects can be recognised during the dumper drive. The
                  3D poses of the detected beacons are assigned to lanelets and
                  transferred to an existing map.},
}

In this chapter, we report on our activities to create and maintain a fleet of autonomous load haul dump (LHD) vehicles for mining operations. The ever-increasing demand for sustainable solutions and economic pressure causes innovation in the mining industry just like in any other branch. In this chapter, we present our approach to create a fleet of autonomous special purpose vehicles and to control these vehicles in mining operations. After an initial exploration of the site we deploy the fleet. Every vehicle runs an instance of our ROS 2-based architecture. The fleet is then controlled with a dedicated planning module. We also use continuous environment monitoring to implement a life-long mapping approach. In our experiments, we show that a combination of synthetic, augmented and real training data improves our classifier based on the deep learning network Yolo v5 to detect our vehicles, persons and navigation beacons. The classifier was successfully installed on the NVidia AGX-Drive platform, so that the above-mentioned objects can be recognised during the dumper drive. The 3D poses of the detected beacons are assigned to lanelets and transferred to an existing map.

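For the detection part described above (vehicles, persons and navigation beacons with Yolo v5), inference with a custom-trained model can look roughly like this. The weight file name, image name and confidence threshold are placeholders; this assumes the public ultralytics/yolov5 hub package and PyTorch, not the authors' deployment code.

# Rough sketch of custom YOLOv5 inference via torch.hub (placeholder names).
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="maskor_lhd.pt")  # hypothetical weights
model.conf = 0.5                       # confidence threshold

results = model("mine_scene.jpg")      # image from the vehicle camera
for *xyxy, conf, cls in results.xyxy[0].tolist():
    print(model.names[int(cls)], f"{conf:.2f}", [round(v) for v in xyxy])
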
Model-predictive Control with Parallelised Optimisation for the Navigation of Autonomous Mining Vehicles.
Nikolovski, G.; Limpert, N.; Nessau, H.; Reke, M.; and Ferrein, A.
In 2023 IEEE Intelligent Vehicles Symposium (IV), pages 1–6, June 2023. IEEE.

@InProceedings{Nikolovski-etAl_IV2023_MPC-Nav-Mining,
  author       = {Nikolovski, Gjorgji and Limpert, Nicolas and Nessau, Hendrik and Reke, Michael and Ferrein, Alexander},
  booktitle    = {2023 IEEE Intelligent Vehicles Symposium (IV)},
  title        = {Model-predictive Control with Parallelised Optimisation for the Navigation of Autonomous Mining Vehicles},
  year         = {2023},
  month        = {June},
  day          = {04-07},
  pages        = {1--6},
  location     = {Anchorage, AK, USA},
  doi          = {10.1109/IV55152.2023.10186806},
  url_ieeexpl  = {https://ieeexplore.ieee.org/abstract/document/10186806},
  publisher    = {IEEE},
  ISSN         = {2642-7214},
  keywords     = {Navigation;Intelligent vehicles;Hydraulic drives;Steering systems;Transportation;
                  Hydraulic systems;Minimization;mpc;control;path-following;navigation;automation},
  abstract     = {The work in modern open-pit and underground mines requires the
                  transportation of large amounts of resources between fixed points. The
                  navigation to these fixed points is a repetitive task that can be
                  automated. The challenge in automating the navigation of vehicles
                  commonly used in mines is the systemic properties of such vehicles. Many
                  mining vehicles, such as the one we have used in the research for this
                  paper, use steering systems with an articulated joint bending the
                  vehicle's drive axis to change its course and a hydraulic drive system
                  to actuate axial drive components or the movements of tippers if
                  available. To address the difficulties of controlling such a vehicle, we
                  present a model-predictive approach for controlling the vehicle. While
                  the control optimisation based on a parallel error minimisation of the
                  predicted state has already been established in the past, we provide
                  insight into the design and implementation of an MPC for an articulated
                  mining vehicle and show the results of real-world experiments in an
                  open-pit mine environment.},
}

The work in modern open-pit and underground mines requires the transportation of large amounts of resources between fixed points. The navigation to these fixed points is a repetitive task that can be automated. The challenge in automating the navigation of vehicles commonly used in mines is the systemic properties of such vehicles. Many mining vehicles, such as the one we have used in the research for this paper, use steering systems with an articulated joint bending the vehicle's drive axis to change its course and a hydraulic drive system to actuate axial drive components or the movements of tippers if available. To address the difficulties of controlling such a vehicle, we present a model-predictive approach for controlling the vehicle. While the control optimisation based on a parallel error minimisation of the predicted state has already been established in the past, we provide insight into the design and implementation of an MPC for an articulated mining vehicle and show the results of real-world experiments in an open-pit mine environment.

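The control scheme described here, predicting the vehicle state over a horizon and minimising the error to a reference path, can be boiled down to a few lines. The sketch below uses a strongly simplified kinematic model (the articulation angle treated like bicycle steering) and SciPy's optimiser; the paper's vehicle model, cost terms and parallelised solver are not reproduced.

# Bare-bones MPC sketch: optimise an articulation-angle sequence over a horizon.
import numpy as np
from scipy.optimize import minimize

V, L, DT, H = 2.0, 4.0, 0.5, 8        # speed [m/s], wheelbase [m], step [s], horizon

def rollout(state, gammas):
    x, y, th = state
    traj = []
    for g in gammas:                   # articulation angle acts like steering here
        th += V * np.tan(g) / L * DT
        x += V * np.cos(th) * DT
        y += V * np.sin(th) * DT
        traj.append((x, y))
    return np.array(traj)

def cost(gammas, state, ref):
    traj = rollout(state, gammas)      # predicted positions vs. reference path
    return np.sum((traj - ref) ** 2) + 0.1 * np.sum(np.diff(gammas) ** 2)

ref = np.column_stack([np.linspace(1, 8, H), np.ones(H)])   # follow the line y = 1
res = minimize(cost, np.zeros(H), args=((0.0, 0.0, 0.0), ref),
               bounds=[(-0.7, 0.7)] * H)                    # joint-angle limits
print("first commanded articulation angle:", res.x[0])
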
Demonstrativ-aktiv-iterativ: Arbeitssysteme mit Künstlicher Intelligenz an Demonstratoren im Reallabor vermitteln, erproben und weiterentwickeln.
Altepost, A.; Berlin, F.; Ferrein, A.; and Harlacher, M.
In GfA (Hrsg) Nachhaltig Arbeiten und Lernen - Analyse und Gestaltung lernförderlicher und nachhaltiger Arbeitssysteme und Arbeits- und Lernprozesse. Bericht zum 69. Arbeitswissenschaftlichen Kongress vom 01.-03. März 2023, pages 1–6, Sankt Augustin, 2023. GfA Press.

@inproceedings{ Altepost-etAl_GfA2023_Demonstrativ-aktiv-iterativ,
  author       = {Altepost, Andrea and Berlin, Florian and Ferrein, Alexander and Harlacher, Markus},
  title        = {{Demonstrativ-aktiv-iterativ: Arbeitssysteme mit K{\"u}nstlicher Intelligenz
                  an Demonstratoren im Reallabor vermitteln, erproben und weiterentwickeln}},
  booktitle    = {GfA (Hrsg) Nachhaltig Arbeiten und Lernen - Analyse und Gestaltung
                  lernf{\"o}rderlicher und nachhaltiger Arbeitssysteme und Arbeits- und Lernprozesse.
                  Bericht zum 69. Arbeitswissenschaftlichen Kongress vom 01.-03. M{\"a}rz 2023},
  pages        = {1--6},
  OPTdoi       = {},
  number       = {C.6.2},
  year         = {2023},
  location     = {Gottfried Wilhelm Leibniz Universit{\"a}t Hannover},
  publisher    = {GfA Press},
  address      = {Sankt Augustin},
  url_RWTH     = {https://publications.rwth-aachen.de/record/972487},
  keywords     = {WIRKsam, Arbeitsgestaltung, K{\"u}nstliche Intelligenz, Demonstratoren, Reallabor},
  abstract     = {Das Kompetenzzentrum WIRKsam gestaltet innovative Arbeits- und
                  Prozessabl{\"a}ufe mit K{\"u}nstlicher Intelligenz f{\"u}r und mit
                  Unternehmen im Rheinischen Braunkohlerevier. Neun unterschiedliche
                  Problemstellungen regionaler Unternehmen werden mit ma{\ss}geschneiderten
                  KI-L{\"o}sungen und Arbeitsgestaltung basierend auf dem MTO-Ansatz
                  (Strohm \& Ulich 1997; Ulich 2013) adressiert. F{\"u}r das derzeit im
                  Aufbau befindliche WIRKsam-Reallabor sollen Demonstratoren, die
                  Erfahrungen aus den Anwendungsf{\"a}llen aufgreifen, erleb- und erprobbar
                  machen. Dar{\"u}ber hinaus sollen sie interessierte Unternehmen dazu
                  anregen, sich an der Weiterentwicklung gezeigter L{\"o}sungen und der
                  Findung neuer Ans{\"a}tze aktiv zu beteiligen, mit dem Ziel, den Transfer
                  in das eigene Unternehmen vorzubereiten. In Workshops wurden von
                  verschiedenen Teilnehmendengruppen Anforderungen und Gestaltungshinweise
                  f{\"u}r die Entwicklung der Demonstratoren erarbeitet.},
}

The WIRKsam competence centre designs innovative work and process flows with artificial intelligence for and with companies in the Rhenish lignite region. Nine different problems of regional companies are addressed with tailor-made AI solutions and work design based on the MTO approach (Strohm & Ulich 1997; Ulich 2013). For the WIRKsam real-world laboratory, which is currently being set up, demonstrators that pick up the experience from the application cases are intended to make these solutions tangible and testable. Beyond that, they are meant to encourage interested companies to take an active part in further developing the solutions shown and in finding new approaches, with the aim of preparing the transfer to their own company. In workshops, different groups of participants worked out requirements and design guidance for the development of the demonstrators.

Anomaly Detection in the Metal-Textile Industry for the Reduction of the Cognitive Load of Quality Control Workers.
Arndt, T.; Conzen, M.; Elsen, I.; Ferrein, A.; Galla, O.; Köse, H.; Schiffer, S.; and Tschesche, M.
In Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments (PETRA '23), pages 535–542, New York, NY, USA, 2023. Association for Computing Machinery. Best Workshop Paper - Runner Up.

@inproceedings{Arndt:etAl_PETRA2023_AnomalyDetection,
  author       = {Arndt, Tobias and Conzen, Max and Elsen, Ingo and Ferrein, Alexander and
                  Galla, Oskar and K{\"o}se, Hakan and Schiffer, Stefan and Tschesche, Matteo},
  title        = {{Anomaly Detection in the Metal-Textile Industry for the
                  Reduction of the Cognitive Load of Quality Control Workers}},
  booktitle    = {Proceedings of the 16th International Conference on PErvasive Technologies Related to Assistive Environments},
  pages        = {535--542},
  numpages     = {8},
  year         = {2023},
  isbn         = {9798400700699},
  publisher    = {Association for Computing Machinery},
  address      = {New York, NY, USA},
  location     = {Corfu, Greece},
  series       = {PETRA '23},
  url          = {https://doi.org/10.1145/3594806.3596558},
  url_ACM_DL   = {https://dl.acm.org/doi/abs/10.1145/3594806.3596558},
  doi          = {10.1145/3594806.3596558},
  keywords     = {WIRKsam, Artificial Intelligence, anomaly detection, datasets, neural networks, process optimization, quality control},
  abstract     = {This paper presents an approach for reducing the cognitive load for
                  humans working in quality control (QC) for production processes that
                  adhere to the 6σ-methodology. While 100\% QC requires every part to be
                  inspected, this task can be reduced when a human-in-the-loop QC process
                  is supported by an anomaly detection system that only presents those
                  parts for manual inspection that have a significant likelihood of being
                  defective. This approach shows good results when applied to image-based
                  QC for metal textile products.},
  note         = {Best Workshop Paper - Runner Up},
}

This paper presents an approach for reducing the cognitive load for humans working in quality control (QC) for production processes that adhere to the 6σ-methodology. While 100% QC requires every part to be inspected, this task can be reduced when a human-in-the-loop QC process is supported by an anomaly detection system that only presents those parts for manual inspection that have a significant likelihood of being defective. This approach shows good results when applied to image-based QC for metal textile products.

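The human-in-the-loop idea above, scoring every part and routing only suspicious ones to manual inspection, is easy to prototype. The sketch below uses a PCA reconstruction error on synthetic feature vectors as a stand-in anomaly score; the paper's neural-network detector and image features are not reproduced.

# Stand-in anomaly scorer: route only high-error parts to manual inspection.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 64))            # features of known-good parts
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:8]                                # principal subspace of "good" parts

def score(x: np.ndarray) -> float:
    r = (x - mean) - (x - mean) @ basis.T @ basis
    return float(np.linalg.norm(r))           # reconstruction error

threshold = np.percentile([score(x) for x in train], 99)
batch = rng.normal(size=(100, 64))            # incoming parts to screen
to_inspect = [i for i, x in enumerate(batch) if score(x) > threshold]
print(f"{len(to_inspect)} of {len(batch)} parts routed to manual inspection")
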
\n \n\n \n \n \n \n \n \n Kompetenzzentrum WIRKsam - Wirtschaftlicher Wandel in der rheinischen Textil- und Kohleregion mit Künstlicher Intelligenz gemeinsam gestalten.\n \n \n \n \n\n\n \n Jeske, T.; Harlacher, M.; Altepost, A. A.; Schmenk, B.; Ferrein, A.; and Schiffer, S.,\n editors.\n \n\n\n \n\n\n\n Volume 2023 Joh. Heider Verlag GmbH, Bergisch Gladbach, 6 2023.\n \n\n\n\n
\n\n\n\n \n \n \"Kompetenzzentrum rwth\n  \n \n \n \"Kompetenzzentrum ifaa\n  \n \n \n \"Kompetenzzentrum pdf ifaa\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@BOOK{ Leistung-und-Entgelt_2024_WIRKsam,\n  editor       = {Jeske, Tim and Harlacher, Markus and Altepost, Andrea Anna\n                  and Schmenk, Bernhard and Ferrein, Alexander and Schiffer, Stefan},\n  title        = {{K}ompetenzzentrum {WIRK}sam - {W}irtschaftlicher {W}andel in der rheinischen\n                      {T}extil- und {K}ohleregion mit {K}{\\"u}nstlicher {I}ntelligenz gemeinsam gestalten},\n  journal      = {Leistung \\& Entgelt},\n  volume       = {2023},\n  number       = {2},\n  issn         = {2510-0424},\n  address      = {Bergisch Gladbach},\n  publisher    = {Joh. Heider Verlag GmbH},\n  pages        = {46 Seiten : Illustrationen},\n  year         = {2023},\n  month        = {6},\n  subtyp       = {Brochure},\n  key_RWTH     = {962619},\n  url_RWTH     = {https://publications.rwth-aachen.de/record/962619},\n  url_ifaa     = {https://www.arbeitswissenschaft.net/angebote-produkte/broschueren/leistung-und-entgelt-kompetenzzentrum-wirksam},\n  url_PDF_ifaa = {https://www.arbeitswissenschaft.net/fileadmin/user_upload/Klein_Ende_24211_LundE_2_2023_finale_Version_fuer_Druckerei.pdf},\n  keywords     = {WIRKsam},\n  abstract     = {Das Kompetenzzentrum WIRKsam ist eines von acht\n                  regionalen Kompetenzzentren der Arbeitsforschung mit\n                  Fokus auf der Gestaltung neuer Arbeitsformen durch\n                  Künstliche Intelligenz. Es hat seine regionale\n                  Verankerung im Rheinischen Revier, das aufgrund des\n                  Kohleausstiegs von einem starken Strukturwandel\n                  betroffen ist. Gleichzeitig ist es Teil der\n                  Rheinischen Textilregion, die sich in den letzten 50\n                  Jahren stark verändert hat.  Künstliche Intelligenz\n                  bietet umfassende Möglichkeiten, die Arbeitswelt mit\n                  innovativen Arbeits- und Prozessabläufen zu\n                  gestalten und Produkte zu verbessern. Sie hilft\n                  Unter- nehmen dabei, im globalen Wettbewerb zu\n                  bestehen und Wohlstand und Arbeitsplätze zu\n                  sichern. Die Arbeiten im Kompetenzzentrum WIRKsam\n                  zielen darauf ab, die Potenziale von KI für die\n                  Unternehmen im Rheinischen Revier zu\n                  erschließen. Der Kern der For- schungsaktivitäten\n                  liegt in der prototypischen Entwicklung und\n                  Einführung von KI-gestützten Systemen zur\n                  Unterstützung von Arbeit in bislang neun\n                  Anwendungsunternehmen. So entstehen Beispiele\n                  guter Praxis, die anderen Unternehmen Orientierung\n                  bieten sollen.  In dieser Ausgabe der "Leistung \\&\n                  Entgelt" werden das vom Bundesministerium für Bil-\n                  dung und Forschung geförderte Projekt vorgestellt\n                  und seine bisher neun Anwendungs- fälle\n                  beschrieben.},\n}\n\n\n%%\n%article{ Harlacher:Niehus_LuE2023WIRKsam_SystematisierungAWFs\n%      pages        = {1--6},\n%% AP 3\n% 3-1_FEG\n% 3-2_Essedea\n% 3-3_Heusch\n%% AP 4\n% 4-1_AUNDE\n% 4-2_R+F\n% 4-3_neusser-fb\n%% AP 5\n% 5-1_GKD\n% 5-2_Heimbach\n% 5-3_Viethen\n%%\n\n\n
\n
\n\n\n
\n The WIRKsam competence centre is one of eight regional competence centres for research on work, focusing on shaping new forms of work with artificial intelligence. It is regionally anchored in the Rheinisches Revier, which is undergoing profound structural change due to the coal phase-out. At the same time, it is part of the Rhenish textile region, which has changed considerably over the last 50 years. Artificial intelligence offers far-reaching possibilities to shape the world of work with innovative work and process flows and to improve products. It helps companies to hold their own in global competition and to secure prosperity and jobs. The work in the WIRKsam competence centre aims to unlock the potential of AI for companies in the Rheinisches Revier. The core of the research activities lies in the prototypical development and introduction of AI-supported systems for supporting work in, so far, nine application companies. This creates examples of good practice that are meant to provide orientation for other companies. This issue of \"Leistung & Entgelt\" presents the project, which is funded by the German Federal Ministry of Education and Research, and describes its nine application cases to date.\n
\n\n\n
\n\n\n
WIRKsam : Projektvorstellung. Jeske, T.; Harlacher, M.; Altepost, A. A.; Schmenk, B.; Ferrein, A.; and Schiffer, S. Leistung & Entgelt, 2023(2): 7–12. 2023.
\n\n\n\n \n \n \"WIRKsam rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{ Jeske:etAl_LuE2023WIRKsam_Projektvorstellung,\n      author       = {Jeske, Tim and Harlacher, Markus and Altepost, Andrea Anna\n                      and Schmenk, Bernhard and Ferrein, Alexander and Schiffer,\n                      Stefan},\n      title        = {{WIRK}sam : {P}rojektvorstellung},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07490},\n      pages        = {7--12},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962599},\n  keywords     = {WIRKsam},\n}\n\n
\n
\n\n\n\n
\n\n\n
Multikriterielle KI-basierte Prozesssteuerung und Qualifizierung für Medizinprodukte. Harlacher, M.; Niehues, S.; Hansen-Ampah, A. T.; Köse, H.; Schiffer, S.; Ferrein, A.; Rezaey, A.; and Dievernich, A. Leistung & Entgelt, 2023(2): 16–18. 2023.
\n\n\n\n \n \n \"Multikriterielle rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{ Harlacher:etAl_LuE2023WIRKsam_3-1_FEG,\n      author       = {Harlacher, Markus and Niehues, Sina and Hansen-Ampah, Adjan\n                      Troy and K{\\"o}se, Hakan and Schiffer, Stefan and Ferrein,\n                      Alexander and Rezaey, Arash and Dievernich, Axel},\n      title        = {{M}ultikriterielle {KI}-basierte {P}rozesssteuerung und\n                      {Q}ualifizierung f{\\"u}r {M}edizinprodukte},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07491},\n      pages        = {16--18},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962600},\n  keywords     = {WIRKsam},\n}\n
\n
\n\n\n\n
\n\n\n
KI-Expertensystem für lernförderliche Empfehlungen zur maßgetreuen Produktion von 3D-Textilien mit digital unterstützter Eingangswerterfassung. Harlacher, M.; Niehues, S.; Merx, W.; Roder, S.; Schiffer, S.; Ferrein, A.; Zohren, M.; and Rezaey, A. Leistung & Entgelt, 2023(2): 19–21. 2023.
\n\n\n\n \n \n \"KI-Expertensystem rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{Harlacher:etAl_LuE2023WIRKsam_3-2_Essedea,\n      author       = {Harlacher, Markus and Niehues, Sina and Merx, Wolfgang and\n                      Roder, Simon and Schiffer, Stefan and Ferrein, Alexander and\n                      Zohren, Marc and Rezaey, Arash},\n      title        = {{KI}-{E}xpertensystem f{\\"u}r lernf{\\"o}rderliche {E}mpfehlungen\n                      zur ma{\\ss}getreuen {P}roduktion von 3{D}-{T}extilien mit\n                      digital unterst{\\"u}tzter {E}ingangswerterfassung},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07492},\n      pages        = {19-21},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962601},\n  keywords     = {WIRKsam},\n}\n\n
\n
\n\n\n\n
\n\n\n
KI-basierte Unterstützung der Kompetenz- und Fertigkeitsentwicklung für die Metallprofilbearbeitung. Harlacher, A.; Niehues, S.; Hansen-Ampah, A. T.; Roder, S.; Schiffer, S.; Ferrein, A.; and Zenker, D. Leistung & Entgelt, 2023(2): 22–24. 2023.
\n\n\n\n \n \n \"KI-basierte rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{Harlacher:etAl_LuE2023WIRKsam_3-3_Heusch,\n      author       = {Harlacher, Alexander and Niehues, Sina and Hansen-Ampah,\n                      Adjan Troy and Roder, Simon and Schiffer, Stefan and\n                      Ferrein, Alexander and Zenker, Dieter},\n      title        = {{KI}-basierte {U}nterst{\\"u}tzung der {K}ompetenz- und\n                      {F}ertigkeitsentwicklung f{\\"u}r die {M}etallprofilbearbeitung},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07494},\n      pages        = {22-24},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962604},\n  keywords     = {WIRKsam},\n}\n\n\n
\n
\n\n\n\n
\n\n\n
Lernförderliches KI-Varianzmanagement für die Produktion von Geweben mit kundenspezifisch veränderlich ausgeprägten Prüfmerkmalen. Köse, H.; Schiffer, S.; Ferrein, A.; Ramm, G. M.; Harlacher, M.; Merx, W.; Zohren, M.; Rezaey, A.; Ernst, L.; and Ntzemos, E. Leistung & Entgelt, 2023(2): 25–27. 2023.
\n\n\n\n \n \n \"Lernförderliches rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{Koese:etAl_LuE2023WIRKsam_4-1_AUNDE,\n      author       = {K{\\"o}se, Hakan and Schiffer, Stefan and Ferrein, Alexander\n                      and Ramm, Gerda Maria and Harlacher, Markus and Merx,\n                      Wolfgang and Zohren, Marc and Rezaey, Arash and Ernst, Leon\n                      and Ntzemos, Emmanuil},\n      title        = {{L}ernf{\\"o}rderliches {KI}-{V}arianzmanagement f{\\"u}r die\n                      {P}roduktion von {G}eweben mit kundenspezifisch\n                      ver{\\"a}nderlich ausgepr{\\"a}gten {P}r{\\"u}fmerkmalen},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07495},\n      pages        = {25-27},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962605},\n  keywords     = {WIRKsam},\n}\n\n\n
\n
\n\n\n\n
\n\n\n
KI-Nachfrageprognose zur Verringerung von Lagerbeständen, Produktionsschwankungen und damit verbundener Beschäftigungsbelastung. Tschesche, M.; Henning, M.; Schiffer, S.; Ferrein, A.; Ramm, G. M.; Harlacher, M.; Merx, W.; Zohren, M.; Rezaey, A.; Kot, A.; and Smekal, J. Leistung & Entgelt, 2023(2): 28–30. 2023.
\n\n\n\n \n \n \"KI-Nachfrageprognose rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{Tschesche:etAl_LuE2023WIRKsam_4-2_R+F,\n      author       = {Tschesche, Matteo and Henning, Mike and Schiffer, Stefan and\n                      Ferrein, Alexander and Ramm, Gerda Maria and Harlacher,\n                      Markus and Merx, Wolfgang and Zohren, Marc and Rezaey, Arash\n                      and Kot, Aylin and Smekal, J{\\"u}rgen},\n      title        = {{KI}-{N}achfrageprognose zur {V}erringerung von\n                      {L}agerbest{\\"a}nden, {P}roduktionsschwankungen und damit\n                      verbundener {B}esch{\\"a}ftigungsbelastung},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07500},\n      pages        = {28-30},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962613},\n  keywords     = {WIRKsam},\n}\n
\n
\n\n\n\n
\n\n\n
Situative KI-Entscheidungsunterstützung zur Abschätzung arbeitsorganisatorischer Folgen im Rahmen des Shopfloor Managements. Tschesche, M.; Henning, M.; Schiffer, S.; Ferrein, A.; Ramm, G. M.; Harlacher, M.; Merx, W.; and Sahm, J. Leistung & Entgelt, 2023(2): 31–33. 2023.
\n\n\n\n \n \n \"Situative rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{Tschesche:etAl_LuE2023WIRKsam_4-3_neusser-fb,\n      author       = {Tschesche, Matteo and Henning, Mike and Schiffer, Stefan\n                      and Ferrein, Alexander and Ramm, Gerda Maria and Harlacher,\n                      Markus and Merx, Wolfgang and Sahm, Joachim},\n      title        = {{S}ituative {KI}-{E}ntscheidungsunterst{\\"u}tzung zur\n                      {A}bsch{\\"a}tzung arbeitsorganisatorischer {F}olgen im {R}ahmen\n                      des {S}hopfloor {M}anagements},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07501},\n      pages        = {31-33},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962614},\n  keywords     = {WIRKsam},\n}\n\n\n
\n
\n\n\n\n
\n\n\n
KI-basierte Bildauswertung zur Qualitätsverbesserung und Entlastung von Beschäftigten bei der Herstellung von metallischen Filterprodukten. Hansen-Ampah, A. T.; Boltersdorf, C. D.; Köse, H.; Schiffer, S.; Ferrein, A.; Shahinfar, F. N.; Ramm, G. M.; Zohren, M.; and Herper, D. Leistung & Entgelt, 2023(2): 34–36. 2023.
\n\n\n\n \n \n \"KI-basierte rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{HansenAmpah:etAl_LuE2023WIRKsam_5-1_GKD,\n      author       = {Hansen-Ampah, Adjan Troy and Boltersdorf, Christian Daniel\n                      and K{\\"o}se, Hakan and Schiffer, Stefan and Ferrein, Alexander\n                      and Shahinfar, Fatemeh N. and Ramm, Gerda Maria and Zohren,\n                      Marc and Herper, Dominik},\n      title        = {{KI}-basierte {B}ildauswertung zur {Q}ualit{\\"a}tsverbesserung\n                      und {E}ntlastung von {B}esch{\\"a}ftigten bei der {H}erstellung\n                      von metallischen {F}ilterprodukten},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07503},\n      pages        = {34-36},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962616},\n  keywords     = {WIRKsam},\n}\n\n\n
\n
\n\n\n\n
\n\n\n
Unterstützung von Produktentwicklung und Qualitätssicherung durch KI-basierte Vergleiche von Produktkennwerten vor und nach dem Einsatz an Papiermaschinen. Hansen-Ampah, A. T.; Arndt, T.; Schiffer, S.; Ferrein, A.; Shahinfar, F. N.; Ramm, G. M.; and Klopp, K. Leistung & Entgelt, 2023(2): 37–39. 2023.
\n\n\n\n \n \n \"Unterstützung rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{HansenAmpah:etAl_LuE2023WIRKsam_5-1_Heimbach,\n      author       = {Hansen-Ampah, Adjan Troy and Arndt, Tobias and Schiffer,\n                      Stefan and Ferrein, Alexander and Shahinfar, Fatemeh N. and\n                      Ramm, Gerda Maria and Klopp, Kai},\n      title        = {{U}nterst{\\"u}tzung von {P}roduktentwicklung und\n                      {Q}ualit{\\"a}tssicherung durch {KI}-basierte {V}ergleiche von\n                      {P}roduktkennwerten vor und nach dem {E}insatz an\n                      {P}apiermaschinen},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07504},\n      pages        = {37-39},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962617},\n  keywords     = {WIRKsam},\n}\n\n\n
\n
\n\n\n\n
\n\n\n
Nutzung einer Mensch-Roboter-Kollaboration zum Erlernen komplexer motorischer Fertigkeiten für Tätigkeiten in der Faserverbundherstellung. Hansen-Ampah, A. T.; Backes, S. C.; Arndt, T.; Schiffer, S.; Ferrein, A.; Shahinfar, F. N.; Ramm, G. M.; and Viethen, H. Leistung & Entgelt, 2023(2): 40–42. 2023.
\n\n\n\n \n \n \"Nutzung rwth\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{HansenAmpah:etAl_LuE2023WIRKsam_5-1_Viethen,\n      author       = {Hansen-Ampah, Adjan Troy and Backes, Sebastian Christoph\n                      and Arndt, Tobias and Schiffer, Stefan and Ferrein,\n                      Alexander and Shahinfar, Fatemeh N. and Ramm, Gerda Maria\n                      and Viethen, Heinrich},\n      title        = {{N}utzung einer {M}ensch-{R}oboter-{K}ollaboration zum\n                      {E}rlernen komplexer motorischer {F}ertigkeiten f{\\"u}r\n                      {T}{\\"a}tigkeiten in der {F}aserverbundherstellung},\n      journal      = {Leistung \\& Entgelt},\n      volume       = {2023},\n      number       = {2},\n      issn         = {2510-0424},\n      address      = {Bergisch-Gladbach},\n      publisher    = {Joh. Heider Verlag GmbH},\n      reportid     = {RWTH-2023-07505},\n      pages        = {40-42},\n      year         = {2023},\n      url_RWTH     = {https://publications.rwth-aachen.de/record/962618},\n  keywords     = {WIRKsam},\n}\n\n\n\n\n\n
\n
\n\n\n\n
\n\n\n
Towards a Fleet of Autonomous Haul-Dump Vehicles in Hybrid Mines. Ferrein, A.; Reke, M.; Scholl, I.; Decker, B.; Limpert, N.; Nikolovski, G.; and Schiffer, S. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART, pages 278–288, 2023. INSTICC, SciTePress.
\n\n\n\n \n \n \"Towards sciteprs\n  \n \n \n \"Towards pdf\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{ Ferrein-etAl_ICAART2023_Towards-a-Fleet,\n  author       = {Alexander Ferrein and Michael Reke and Ingrid Scholl and Benjamin Decker\n                  and Nicolas Limpert and Gjorgji Nikolovski and Stefan Schiffer},\n  title        = {Towards a Fleet of Autonomous Haul-Dump Vehicles in Hybrid Mines},\n  booktitle    = {Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 1: ICAART},\n  year         = {2023},\n  pages        = {278--288},\n  publisher    = {SciTePress},\n  organization = {INSTICC},\n  doi          = {10.5220/0011693600003393},\n  url_sciteprs = {https://www.scitepress.org/Papers/2023/116936/},\n  url_PDF      = {https://www.scitepress.org/Papers/2023/116936/116936.pdf},\n  isbn         = {978-989-758-623-1},\n  abstract     = {Like many industries, the mining industry is facing\n                  major transformations towards more sustainable and\n                  decarbonised operations with smaller environmental\n                  footprints. Even though the mining industry, in\n                  general, is quite conservative, key drivers for\n                  future developments are digitalisation and\n                  automation. Another direction forward is to mine\n                  deeper and reduce the mine footprint at the\n                  surface. This leads to so-called hybrid mines, where\n                  part of the operation is open pit, and part of the\n                  mining takes place underground. In this paper, we\n                  present our approach to running a fleet of\n                  autonomous hauling vehicles suitable for hybrid\n                  mining operations. We present a ROS 2-based\n                  architecture for running the vehicles. The fleet of\n                  currently three vehicles is controlled by a\n                  SHOP3-based planner which dispatches missions to the\n                  vehicles. The basic actions of the vehicles are\n                  realised as behaviour trees in ROS 2. We used a deep\n                  learning network for detection and classification of\n                  mining objects trained with a mix of synthetic\n                  and real-world training images. In a life-long\n                  mapping approach, we define lanelets and show their\n                  integration into HD maps. We demonstrate a\n                  proof-of-concept of the vehicles in operation in\n                  simulation and in real-world experiments in a gravel\n                  pit.},\n}\n
\n
\n\n\n
\n Like many industries, the mining industry is facing major transformations towards more sustainable and decarbonised operations with smaller environmental footprints. Even though the mining industry, in general, is quite conservative, key drivers for future developments are digitalisation and automation. Another direction forward is to mine deeper and reduce the mine footprint at the surface. This leads to so-called hybrid mines, where part of the operation is open pit, and part of the mining takes place underground. In this paper, we present our approach to running a fleet of autonomous hauling vehicles suitable for hybrid mining operations. We present a ROS 2-based architecture for running the vehicles. The fleet of currently three vehicles is controlled by a SHOP3-based planner which dispatches missions to the vehicles. The basic actions of the vehicles are realised as behaviour trees in ROS 2. We used a deep learning network for detection and classification of mining objects trained with a mix of synthetic and real-world training images. In a life-long mapping approach, we define lanelets and show their integration into HD maps. We demonstrate a proof-of-concept of the vehicles in operation in simulation and in real-world experiments in a gravel pit.\n
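The abstract describes basic vehicle actions realised as behaviour trees, with missions dispatched by a planner. Below is a minimal sketch of that pattern in plain Python; the node types and the three haul-dump actions are illustrative assumptions, not the paper's actual ROS 2 or SHOP3 code.

# Minimal behaviour-tree sketch; names are hypothetical.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node wrapping a primitive vehicle action."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Composite node: succeeds only if all children succeed in order."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

def drive_to_loading_point():
    print("driving to loading point"); return Status.SUCCESS

def load_material():
    print("loading material"); return Status.SUCCESS

def haul_to_dump():
    print("hauling to dump site"); return Status.SUCCESS

mission = Sequence([
    Action("drive", drive_to_loading_point),
    Action("load", load_material),
    Action("dump", haul_to_dump),
])
assert mission.tick() == Status.SUCCESS

A real system would tick the tree repeatedly from a control loop and map the leaves to ROS 2 actions; the mission itself would come from the planner rather than being hard-coded.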
\n\n\n
\n\n\n
Winning the RoboCup Logistics League with Visual Servoing and Centralized Goal Reasoning. Viehmann, T.; Limpert, N.; Hofmann, T.; Henning, M.; Ferrein, A.; and Lakemeyer, G. In Eguchi, A.; Lau, N.; Paetzel-Prüsmann, M.; and Wanichanon, T., editors, RoboCup 2022: Robot World Cup XXV, volume 13561 of Lecture Notes in Computer Science, pages 300–312, Cham, 2023. Springer International Publishing.
\n\n\n\n \n \n \"WinningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{Viehmann-etAl_RoboCup2022_Winning-the-RoboCup-Logistics-League,\n  author       = "Viehmann, Tarik and Limpert, Nicolas and Hofmann, Till and Henning, Mike and Ferrein, Alexander and Lakemeyer, Gerhard",\n  editor       = "Eguchi, Amy and Lau, Nuno and Paetzel-Pr{\\"u}smann, Maike and Wanichanon, Thanapat",\n  title        = "{Winning the RoboCup Logistics League with Visual Servoing and Centralized Goal Reasoning}",\n  booktitle    = "RoboCup 2022: Robot World Cup XXV",\n  year         = "2023",\n  pages        = "300--312",\n  series       = {Lecture Notes in Computer Science},\n  volume       = {13561},\n  publisher    = "Springer International Publishing",\n  address      = "Cham",\n  isbn         = "978-3-031-28469-4",\n  doi          = {10.1007/978-3-031-28469-4_25},\n  url          = {https://link.springer.com/chapter/10.1007/978-3-031-28469-4_25},\n  keywords     = {RoboCup, Logistics League, RCLL},\n  abstract     = "The RoboCup Logistics League (RCLL) is a robotics\n                  competition in a production logistics scenario in\n                  the context of a Smart Factory. In the competition,\n                  a team of three robots needs to assemble products to\n                  fulfill various orders that are requested online\n                  during the game. This year, the Carologistics team\n                  was able to win the competition with a new approach\n                  to multi-agent coordination as well as significant\n                  changes to the robot's perception unit and a\n                  pragmatic network setup using the cellular network\n                  instead of WiFi. In this paper, we describe the\n                  major components of our approach with a focus on the\n                  changes compared to the last physical competition in\n                  2019.",\n}\n
\n
\n\n\n
\n The RoboCup Logistics League (RCLL) is a robotics competition in a production logistics scenario in the context of a Smart Factory. In the competition, a team of three robots needs to assemble products to fulfill various orders that are requested online during the game. This year, the Carologistics team was able to win the competition with a new approach to multi-agent coordination as well as significant changes to the robot's perception unit and a pragmatic network setup using the cellular network instead of WiFi. In this paper, we describe the major components of our approach with a focus on the changes compared to the last physical competition in 2019.\n
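Visual servoing, one of the ingredients named in the abstract, closes a control loop directly over image features. The toy sketch below drives the pixel error between a detected feature and the image centre to zero with a proportional controller; the gain and the stubbed detection are assumptions for illustration, not the Carologistics implementation.

# Toy image-based visual servoing step.
def detect_feature():
    # Stand-in for the perception unit: returns (u, v) pixel coordinates.
    return (400.0, 260.0)

IMAGE_CENTRE = (320.0, 240.0)
GAIN = 0.002  # proportional gain mapping pixel error to velocity commands

def servo_step():
    u, v = detect_feature()
    err_u, err_v = u - IMAGE_CENTRE[0], v - IMAGE_CENTRE[1]
    # Lateral and forward velocity commands proportional to the error;
    # the sign convention here is arbitrary for the toy example.
    vy = -GAIN * err_u
    vx = -GAIN * err_v
    return vx, vy

print(servo_step())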
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
2022 (2)
\n
\n \n \n
Benchmarking of Various LiDAR Sensors for Use in Self-Driving Vehicles in Real-World Environments. Schulte-Tigges, J.; Förster, M.; Nikolovski, G.; Reke, M.; Ferrein, A.; Kaszner, D.; Matheis, D.; and Walter, T. Sensors, 22(19): 7146. 2022.
\n\n\n\n \n \n \"BenchmarkingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@Article{Schulte-Tigges-etAl_Sensors2022_Benchmarking-LIDAR-Sensors,\n  AUTHOR       = {Schulte-Tigges, Joschua and F{\\"o}rster, Marco and\n                  Nikolovski, Gjorgji and Reke, Michael and\n\t\t  Ferrein, Alexander and Kaszner, Daniel and\n\t\t  Matheis, Dominik and Walter, Thomas},\n  TITLE        = {Benchmarking of Various LiDAR Sensors for Use in Self-Driving Vehicles in Real-World Environments},\n  JOURNAL      = {Sensors},\n  VOLUME       = {22},\n  YEAR         = {2022},\n  NUMBER       = {19},\n  ARTICLE-NUMBER = {7146},\n  URL          = {https://www.mdpi.com/1424-8220/22/19/7146},\n  PubMedID     = {36236247},\n  ISSN         = {1424-8220},\n  DOI          = {10.3390/s22197146},\n  keywords     = {ADP; LiDAR; benchmark; self-driving},\n  ABSTRACT     = {In this paper, we report on our benchmark results of\n                  the LiDAR sensors Livox Horizon, Robosense M1,\n                  Blickfeld Cube, Blickfeld Cube Range, Velodyne\n                  Velarray H800, and Innoviz Pro. The idea was to test\n                  the sensors in different typical scenarios that were\n                  defined with real-world use cases in mind, in order\n                  to find a sensor that meets the requirements of\n                  self-driving vehicles. For this, we defined static\n                  and dynamic benchmark scenarios. In the static\n                  scenarios, both LiDAR and the detection target do\n                  not move during the measurement. In dynamic\n                  scenarios, the LiDAR sensor was mounted on the\n                  vehicle which was driving toward the detection\n                  target. We tested all mentioned LiDAR sensors in\n                  both scenarios, show the results regarding the\n                  detection accuracy of the targets, and discuss their\n                  usefulness for deployment in self-driving cars.},\n}\n
\n
\n\n\n
\n In this paper, we report on our benchmark results of the LiDAR sensors Livox Horizon, Robosense M1, Blickfeld Cube, Blickfeld Cube Range, Velodyne Velarray H800, and Innoviz Pro. The idea was to test the sensors in different typical scenarios that were defined with real-world use cases in mind, in order to find a sensor that meets the requirements of self-driving vehicles. For this, we defined static and dynamic benchmark scenarios. In the static scenarios, both LiDAR and the detection target do not move during the measurement. In dynamic scenarios, the LiDAR sensor was mounted on the vehicle which was driving toward the detection target. We tested all mentioned LiDAR sensors in both scenarios, show the results regarding the detection accuracy of the targets, and discuss their usefulness for deployment in self-driving cars.\n
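As a rough illustration of the static scenarios described above, the sketch below scores a single scan against a surveyed target: it counts returns within a radius of the target and reports the mean range error. The radius, the toy scan, and the sensor-at-origin assumption are illustrative; the paper's actual evaluation protocol is not reproduced here.

# Toy static-scenario evaluation of one LiDAR scan.
import math

def evaluate_scan(points, target, radius=0.5):
    # Returns within `radius` of the surveyed target count as hits.
    hits = [p for p in points if math.dist(p, target) <= radius]
    if not hits:
        return 0, None
    # Range error relative to the true sensor-to-target distance
    # (sensor assumed at the origin for this sketch).
    true_range = math.dist((0, 0, 0), target)
    mean_range = sum(math.dist((0, 0, 0), p) for p in hits) / len(hits)
    return len(hits), mean_range - true_range

scan = [(10.02, 0.1, 0.0), (9.97, -0.05, 0.1), (25.0, 3.0, 0.2)]
print(evaluate_scan(scan, target=(10.0, 0.0, 0.0)))  # 2 hits, small error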
\n\n\n
\n\n\n
Prototyping and Evaluation of Infrastructure-Assisted Transition of Control for Cooperative Automated Vehicles. Coll-Perales, B.; Schulte-Tigges, J.; Rondinone, M.; Gozalvez, J.; Reke, M.; Matheis, D.; and Walter, T. IEEE Transactions on Intelligent Transportation Systems, 23(7): 6720–6736. July 2022.
\n\n\n\n \n \n \"Prototyping ieeexpl\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{ Coll-Perales-etAl_T-ITS2021_Prototyping-Evalution-ToC,\n  author       = {Coll-Perales, Baldomero and Schulte-Tigges, Joschua\n                  and Rondinone, Michele and Gozalvez, Javier and\n                  Reke, Michael and Matheis, Dominik and Walter,\n                  Thomas},\n  title        = {Prototyping and Evaluation of Infrastructure-Assisted Transition of Control for Cooperative Automated Vehicles}, \n  journal      = {IEEE Transactions on Intelligent Transportation Systems}, \n  year         = {2022},\n  volume       = {23},\n  number       = {7},\n  pages        = {6720--6736},\n  url_ieeexpl  = {https://ieeexplore.ieee.org/abstract/document/9369993},\n  doi          = {10.1109/TITS.2021.3061085},\n  ISSN         = {1558-0016},\n  month        = {July},\n  keywords     = {ADP; Vehicles; Roads; Prototypes; Proposals;\n                  Automation; Area measurement; Vehicle-to-everything;\n                  Automated driving; automated vehicles; connected\n                  automated vehicles; CAV; experimental evaluation;\n                  field tests; minimum risk maneuver; MRM; prototype;\n                  transition of control; ToC; take over request;\n                  traffic management; V2X},\n  abstract     = {Automated driving is now possible in diverse road and\n                  traffic conditions. However, there are still\n                  situations that automated vehicles cannot handle\n                  safely and efficiently. In this case, a Transition\n                  of Control (ToC) is necessary so that the driver\n                  takes control of the driving. Executing a ToC\n                  requires the driver to get full situation awareness\n                  of the driving environment. If the driver fails to\n                  get back the control in a limited time, a Minimum\n                  Risk Maneuver (MRM) is executed to bring the vehicle\n                  into a safe state (e.g., decelerating to full\n                  stop). The execution of ToCs requires some time and\n                  can cause traffic disruption and safety risks that\n                  increase if several vehicles execute ToCs/MRMs at\n                  similar times and in the same area. This study\n                  proposes to use novel C-ITS traffic management\n                  measures where the infrastructure exploits V2X\n                  communications to assist Connected and Automated\n                  Vehicles (CAVs) in the execution of ToCs. The\n                  infrastructure can suggest a spatial distribution of\n                  ToCs, and inform vehicles of the locations where\n                  they could execute a safe stop in case of MRM. This\n                  paper reports the first field operational tests that\n                  validate the feasibility and quantify the benefits\n                  of the proposed infrastructure-assisted ToC and MRM\n                  management. The paper also presents the CAV and\n                  roadside infrastructure prototypes implemented and\n                  used in the trials. The conducted field trials\n                  demonstrate that infrastructure-assisted traffic\n                  management solutions can reduce safety risks and\n                  traffic disruptions.},\n}\n
\n
\n\n\n
\n Automated driving is now possible in diverse road and traffic conditions. However, there are still situations that automated vehicles cannot handle safely and efficiently. In this case, a Transition of Control (ToC) is necessary so that the driver takes control of the driving. Executing a ToC requires the driver to get full situation awareness of the driving environment. If the driver fails to get back the control in a limited time, a Minimum Risk Maneuver (MRM) is executed to bring the vehicle into a safe state (e.g., decelerating to full stop). The execution of ToCs requires some time and can cause traffic disruption and safety risks that increase if several vehicles execute ToCs/MRMs at similar times and in the same area. This study proposes to use novel C-ITS traffic management measures where the infrastructure exploits V2X communications to assist Connected and Automated Vehicles (CAVs) in the execution of ToCs. The infrastructure can suggest a spatial distribution of ToCs, and inform vehicles of the locations where they could execute a safe stop in case of MRM. This paper reports the first field operational tests that validate the feasibility and quantify the benefits of the proposed infrastructure-assisted ToC and MRM management. The paper also presents the CAV and roadside infrastructure prototypes implemented and used in the trials. The conducted field trials demonstrate that infrastructure-assisted traffic management solutions can reduce safety risks and traffic disruptions.\n
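The ToC/MRM logic described in the abstract can be pictured as a small state machine: a take-over request opens a bounded time window for the driver, and expiry of that window triggers the Minimum Risk Maneuver. The sketch below is a simplified illustration under that assumption; the states and the timeout value are not taken from the paper.

# Minimal ToC/MRM state machine sketch.
AUTOMATED, TOC_PENDING, MANUAL, MRM = "AUTOMATED", "TOC_PENDING", "MANUAL", "MRM"
TOC_TIMEOUT_S = 10.0  # illustrative take-over time budget

def step(state, elapsed_s, driver_ready):
    if state == AUTOMATED:
        return TOC_PENDING          # infrastructure requested a ToC
    if state == TOC_PENDING:
        if driver_ready:
            return MANUAL           # driver took over in time
        if elapsed_s >= TOC_TIMEOUT_S:
            return MRM              # e.g. decelerate to a safe stop
        return TOC_PENDING
    return state                    # MANUAL and MRM are terminal here

state = AUTOMATED
for t, ready in [(0.0, False), (4.0, False), (11.0, False)]:
    state = step(state, t, ready)
print(state)  # MRM: the driver never responded within the window

The infrastructure assistance in the paper then amounts to shaping where and when vehicles enter TOC_PENDING, and where a safe stop for MRM is available.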
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
2021 (5)
\n
\n \n \n
GPU based model-predictive path control for self-driving vehicles. Chajan, E.; Schulte-Tigges, J.; Reke, M.; Ferrein, A.; Matheis, D.; and Walter, T. In 2021 IEEE Intelligent Vehicles Symposium (IV), pages 1243–1248, July 2021.
\n\n\n\n \n \n \"GPU ieeexpl\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{ Chajan-etAl_IV2021_GPU-based-MPC,\n  author       = {Chajan, Eduard and Schulte-Tigges, Joschua and Reke, Michael and Ferrein, Alexander and Matheis, Dominik and Walter, Thomas},\n  booktitle    = {2021 IEEE Intelligent Vehicles Symposium (IV)}, \n  title        = {GPU based model-predictive path control for self-driving vehicles}, \n  year         = {2021},\n  month        = {July},\n  pages        = {1243--1248},\n  doi          = {10.1109/IV48863.2021.9575619},\n  url_ieeexpl  = {https://ieeexplore.ieee.org/abstract/document/9575619},\n  keywords     = {ADP;Heuristic algorithms;Computational modeling;Graphics\n                  processing units;Stochastic processes;Prediction\n                  algorithms;Trajectory;Vehicle dynamics;autonomous\n                  driving;GPU;model-predictive control;grid\n                  search;path control;ROS2},\n  abstract     = {One central challenge for self-driving cars is\n                  proper path planning. Once a trajectory has been\n                  found, the next challenge is to accurately and\n                  safely follow the precalculated path. The\n                  model-predictive controller (MPC) is a common\n                  approach for the lateral control of autonomous\n                  vehicles. The MPC uses a vehicle dynamics model to\n                  predict the future states of the vehicle for a given\n                  prediction horizon. However, in order to achieve\n                  real-time path control, the computational load is\n                  usually large, which leads to short prediction\n                  horizons. To deal with the computational load, the\n                  control algorithm can be parallelized on the\n                  graphics processing unit (GPU). In contrast to the\n                  widely used stochastic methods, in this paper we\n                  propose a deterministic approach based on grid\n                  search. Our approach focuses on systematically\n                  discovering the search area with different levels of\n                  granularity. To achieve this, we split the\n                  optimization algorithm into multiple iterations. The\n                  best sequence of each iteration is then used as an\n                  initial solution to the next iteration. The\n                  granularity increases, resulting in smooth and\n                  predictable steering angle sequences. We present a\n                  novel GPU-based algorithm and show its accuracy and\n                  real-time abilities with a number of real-world\n                  experiments.},\n}\n
\n
\n\n\n
\n One central challenge for self-driving cars is proper path planning. Once a trajectory has been found, the next challenge is to accurately and safely follow the precalculated path. The model-predictive controller (MPC) is a common approach for the lateral control of autonomous vehicles. The MPC uses a vehicle dynamics model to predict the future states of the vehicle for a given prediction horizon. However, in order to achieve real-time path control, the computational load is usually large, which leads to short prediction horizons. To deal with the computational load, the control algorithm can be parallelized on the graphics processing unit (GPU). In contrast to the widely used stochastic methods, in this paper we propose a deterministic approach based on grid search. Our approach focuses on systematically discovering the search area with different levels of granularity. To achieve this, we split the optimization algorithm into multiple iterations. The best sequence of each iteration is then used as an initial solution to the next iteration. The granularity increases, resulting in smooth and predictable steering angle sequences. We present a novel GPU-based algorithm and show its accuracy and real-time abilities with a number of real-world experiments.\n
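The iterative grid-search idea from the abstract, reduced to a single steering dimension and run on the CPU, looks roughly like the sketch below: each iteration evaluates a grid of candidates around the previous best and then shrinks the window, so granularity increases per iteration. The cost function is a toy stand-in for the vehicle-model rollout, and on the GPU each grid would be evaluated in parallel rather than in a Python loop.

# Iterative grid-search refinement, 1D toy version.
def cost(angle, target=0.17):
    return (angle - target) ** 2   # placeholder for path-tracking error

def grid_search(lo=-0.5, hi=0.5, points=11, iterations=4):
    best = 0.0
    for _ in range(iterations):
        step = (hi - lo) / (points - 1)
        grid = [lo + i * step for i in range(points)]
        best = min(grid, key=cost)
        # Narrow the window around the best candidate: finer granularity.
        lo, hi = best - step, best + step
    return best

print(round(grid_search(), 4))  # converges close to 0.17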
\n\n\n
\n\n\n
Machine learning based 3D object detection for navigation in unstructured environments. Nikolovski, G.; Reke, M.; Elsen, I.; and Schiffer, S. In 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), pages 236–242, July 2021.
\n\n\n\n \n \n \"Machine ieeexplore\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{ Nikolovski:etAl_IV2021WS_ML3D-ObjDet,\n  author       = {Nikolovski, Gjorgji and Reke, Michael and Elsen, Ingo and Schiffer, Stefan},\n  booktitle    = {2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops)},\n  title        = {{Machine learning based 3D object detection for navigation in unstructured environments}},\n  year         = {2021},\n  pages        = {236--242},\n  abstract     = {In this paper we investigate the use of deep neural\n                  networks for 3D object detection in uncommon,\n                  unstructured environments such as in an open-pit\n                  mine. While neural nets are frequently used for\n                  object detection in regular autonomous driving\n                  applications, more unusual driving scenarios aside\n                  from street traffic pose additional challenges. For\n                  one, the collection of appropriate data sets to train\n                  the networks is an issue. For another, testing the\n                  performance of trained networks often requires\n                  tailored integration with the particular domain as\n                  well. While there exist different solutions for\n                  these problems in regular autonomous driving, there\n                  are only very few approaches that work for special\n                  domains just as well. We address both the challenges\n                  above in this work. First, we discuss two possible\n                  ways of acquiring data for training and\n                  evaluation. That is, we evaluate a semi-automated\n                  annotation of recorded LIDAR data and we examine\n                  synthetic data generation. Using these datasets we\n                  train and test different deep neural networks for the\n                  task of object detection. Second, we propose a\n                  possible integration of a ROS2 detector module for\n                  an autonomous driving platform. Finally, we present\n                  the performance of three state-of-the-art deep\n                  neural networks in the domain of 3D object detection\n                  on a synthetic dataset and a smaller one containing\n                  a characteristic object from an open-pit mine.},\n  keywords     = {ADP; ARTUS; UPNS4D; Deep learning; Training; Solid\n                  modeling; Three-dimensional displays; Annotations;\n                  Conferences; Neural networks; 3D object detection;\n                  LiDAR; autonomous driving},\n  doi          = {10.1109/IVWorkshops54471.2021.9669218},\n  url_IEEExplore = {https://ieeexplore.ieee.org/abstract/document/9669218},\n  ID_IEEE      = {9669218},\n  month        = {July},\n}\n
\n
\n\n\n
\n In this paper we investigate the use of deep neural networks for 3D object detection in uncommon, unstructured environments such as in an open-pit mine. While neural nets are frequently used for object detection in regular autonomous driving applications, more unusual driving scenarios aside from street traffic pose additional challenges. For one, the collection of appropriate data sets to train the networks is an issue. For another, testing the performance of trained networks often requires tailored integration with the particular domain as well. While there exist different solutions for these problems in regular autonomous driving, there are only very few approaches that work for special domains just as well. We address both the challenges above in this work. First, we discuss two possible ways of acquiring data for training and evaluation. That is, we evaluate a semi-automated annotation of recorded LIDAR data and we examine synthetic data generation. Using these datasets we train and test different deep neural networks for the task of object detection. Second, we propose a possible integration of a ROS2 detector module for an autonomous driving platform. Finally, we present the performance of three state-of-the-art deep neural networks in the domain of 3D object detection on a synthetic dataset and a smaller one containing a characteristic object from an open-pit mine.\n
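A ROS2 detector module of the kind the abstract proposes could be integrated as a node that subscribes to LiDAR point clouds and republishes detections. The rclpy skeleton below is a sketch under assumed topic names and with the network inference stubbed out; it is not the module from the paper.

# Skeleton of a ROS 2 detector node; topics and the dummy inference
# step are assumptions for illustration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from std_msgs.msg import String

class DetectorNode(Node):
    def __init__(self):
        super().__init__('object_detector')
        self.sub = self.create_subscription(
            PointCloud2, '/lidar/points', self.on_cloud, 10)
        self.pub = self.create_publisher(String, '/detections', 10)

    def on_cloud(self, msg):
        # Placeholder for running the deep network on the cloud.
        result = String()
        result.data = f'0 objects in cloud with {msg.width} points'
        self.pub.publish(result)

def main():
    rclpy.init()
    rclpy.spin(DetectorNode())

if __name__ == '__main__':
    main()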
\n\n\n
\n\n\n
CO2 Meter: A do-it-yourself carbon dioxide measuring device for the classroom. Dey, T.; Elsen, I.; Ferrein, A.; Frauenrath, T.; Reke, M.; and Schiffer, S. In Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference, PETRA '21, pages 292–299, New York, NY, USA, 2021. Association for Computing Machinery.
\n\n\n\n \n \n \"CO2Paper\n  \n \n \n \"CO2 acm dl\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{Dey:etAl_PETRA2021_CO2Meter,\n  author       = {Dey, Thomas and Elsen, Ingo and Ferrein, Alexander and Frauenrath, Tobias and Reke, Michael and Schiffer, Stefan},\n  title        = {{CO2 Meter}: A do-it-yourself carbon dioxide measuring device for the classroom},\n  booktitle    = {Proceedings of the 14th PErvasive Technologies Related to Assistive Environments Conference},\n  pages        = {292--299},\n  numpages     = {8},\n  keywords     = {sensor networks, information systems, embedded hardware, education, do-it-yourself},\n  location     = {Corfu, Greece},\n  series       = {PETRA '21},\n  year         = {2021},\n  isbn         = {9781450387927},\n  publisher    = {Association for Computing Machinery},\n  address      = {New York, NY, USA},\n  url          = {https://doi.org/10.1145/3453892.3462697},\n  url_ACM_DL   = {https://dl.acm.org/doi/abs/10.1145/3453892.3462697},\n  doi          = {10.1145/3453892.3462697},\n  abstract     = {In this paper we report on CO2 Meter, a\n                  do-it-yourself carbon dioxide measuring device for\n                  the classroom. Part of the current measures for\n                  dealing with the SARS-CoV-2 pandemic is proper\n                  ventilation in indoor settings. This is especially\n                  important in schools with students coming back to\n                  the classroom even with high incidence rates. Static\n                  ventilation patterns do not consider the individual\n                  situation for a particular class. Influencing\n                  factors like the type of activity, the physical\n                  structure or the room occupancy are not\n                  incorporated. Also, existing devices are rather\n                  expensive and often provide only limited information\n                  and only locally without any networking. This leaves\n                  the potential of analysing the situation across\n                  different settings untapped. Carbon dioxide level\n                  can be used as an indicator of air quality, in\n                  general, and of aerosol load in particular. Since,\n                  according to the latest findings, SARS-CoV-2 can be\n                  transmitted primarily in the form of aerosols,\n                  carbon dioxide may be used as a proxy for the risk\n                  of a virus infection. Hence, schools could improve\n                  the indoor air quality and potentially reduce the\n                  infection risk if they actually had measuring\n                  devices available in the classroom. Our device\n                  supports schools in ventilation and it allows for\n                  collecting data over the Internet to enable a\n                  detailed data analysis and model generation. First\n                  deployments in schools at different levels were\n                  received very positively. A pilot installation with\n                  a larger data collection and analysis is underway.},\n}\n
\n
\n\n\n
\n In this paper we report on CO2 Meter, a do-it-yourself carbon dioxide measuring device for the classroom. Part of the current measures for dealing with the SARS-CoV-2 pandemic is proper ventilation in indoor settings. This is especially important in schools with students coming back to the classroom even with high incidence rates. Static ventilation patterns do not consider the individual situation for a particular class. Influencing factors like the type of activity, the physical structure or the room occupancy are not incorporated. Also, existing devices are rather expensive and often provide only limited information and only locally without any networking. This leaves the potential of analysing the situation across different settings untapped. Carbon dioxide level can be used as an indicator of air quality, in general, and of aerosol load in particular. Since, according to the latest findings, SARS-CoV-2 can be transmitted primarily in the form of aerosols, carbon dioxide may be used as a proxy for the risk of a virus infection. Hence, schools could improve the indoor air quality and potentially reduce the infection risk if they actually had measuring devices available in the classroom. Our device supports schools in ventilation and it allows for collecting data over the Internet to enable a detailed data analysis and model generation. First deployments in schools at different levels were received very positively. A pilot installation with a larger data collection and analysis is underway.\n
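The data path sketched in the abstract (measure locally, collect over the Internet) boils down to reading the sensor and posting a record to a server. The sketch below illustrates this with Python's standard library only; the endpoint URL and the stubbed sensor reading are placeholders, not the project's firmware or backend.

# Toy CO2 reporting loop; endpoint and sensor read are placeholders.
import json
import time
import urllib.request

ENDPOINT = "http://example.org/co2"  # placeholder, not the project's server

def read_co2_ppm():
    return 850  # stub for e.g. an NDIR CO2 sensor on the device

def report(room_id):
    payload = json.dumps({
        "room": room_id,
        "co2_ppm": read_co2_ppm(),
        "timestamp": time.time(),
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError as exc:  # placeholder endpoint: upload may fail
        print("upload failed:", exc)

report("classroom-1")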
\n\n\n
\n\n\n
Portable High-level Agent Programming with golog++. Mataré, V.; Viehmann, T.; Hofmann, T.; Lakemeyer, G.; Ferrein, A.; and Schiffer, S. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, pages 218–227, 2021. INSTICC, SciTePress.
\n\n\n\n \n \n \"Portable scitepress\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 3 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{Matare-etAl_ICAART2021_PortableHighLevelAgentCode,\n  author       = {Victor Matar{\\'e} and Tarik Viehmann and Till Hofmann and Gerhard Lakemeyer and Alexander Ferrein and Stefan Schiffer},\n  title        = {{Portable High-level Agent Programming with golog++}},\n  booktitle    = {Proceedings of the 13th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART},\n  year         = {2021},\n  pages        = {218--227},\n  publisher    = {SciTePress},\n  organization = {INSTICC},\n  doi          = {10.5220/0010253902180227},\n  url_scitepress = {https://www.scitepress.org/Papers/2021/102539/},\n  isbn         = {978-989-758-484-8},\n  keywords     = {ConTrAkt, high-level programming, golog, agent programming},\n  abstract     = {We present golog++, a high-level agent programming\n                  and interfacing framework that offers a temporal\n                  constraint language to explicitly model\n                  layer-penetrating contingencies in low-level\n                  platform behavior. It can be used to maintain a\n                  clear separation between an agent’s domain model and\n                  certain quirks of its execution platform that affect\n                  problem solving behavior. Our system reasons about\n                  the execution of an abstract (i.e. exclusively\n                  domain-bound) plan on a particular execution\n                  platform. This way, we avoid compounding the\n                  complexity of the planning problem while improving\n                  the modularity of both golog++ and the user code. On\n                  a run-through example from the well-known\n                  blocksworld domain, we demonstrate the entire\n                  process from domain modeling and platform modeling\n                  to plan transformation and platform-specific plan\n                  execution.},\n}\n
\n
\n\n\n
\n We present golog++, a high-level agent programming and interfacing framework that offers a temporal constraint language to explicitly model layer-penetrating contingencies in low-level platform behavior. It can be used to maintain a clear separation between an agent’s domain model and certain quirks of its execution platform that affect problem solving behavior. Our system reasons about the execution of an abstract (i.e. exclusively domain-bound) plan on a particular execution platform. This way, we avoid compounding the complexity of the planning problem while improving the modularity of both golog++ and the user code. On a run-through example from the well-known blocksworld domain, we demonstrate the entire process from domain modeling and platform modeling to plan transformation and platform-specific plan execution.\n
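The separation the abstract argues for can be illustrated without golog++ syntax: the domain-bound plan stays untouched while a platform model transforms it for execution. The toy Python below makes only that structural point; all names are hypothetical and this is not golog++ code.

# Toy separation of domain plan and platform transformation.
abstract_plan = ["pick(block_a)", "put(block_a, block_b)"]

def platform_transform(plan):
    # Platform model: this (hypothetical) robot must calibrate its
    # gripper before the first manipulation action; the domain planner
    # never needs to know about this quirk.
    return ["calibrate_gripper()"] + plan

print(platform_transform(abstract_plan))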
\n\n\n
\n\n\n
Compiling ROS Schooling Curricula via Contentual Taxonomies. In Robotics in Education - Methods and Applications for Teaching and Learning, Proceedings of the 11th International Conference on Robotics and Education (RiE 2020), volume 1316 of Advances in Intelligent Systems and Computing, pages 49–60, Cham, 2021. Springer.
\n\n\n\n \n \n \"Compiling springer\n  \n \n \n \"Compiling doi\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
2020 (2)
\n
\n \n \n
A Smart Factory Setup based on the RoboCup Logistics League. Eltester, N. S.; Ferrein, A.; and Schiffer, S. In 2020 IEEE International Conference on Industrial Cyber Physical Systems (ICPS), pages 297–302, June 10–12, 2020.
\n
@InProceedings{ Eltester-etAl_ICPS2020_SmartFactory,\n  author       = {Niklas Sebastian {Eltester} and Alexander {Ferrein} and Stefan {Schiffer}},\n  booktitle    = {2020 IEEE International Conference on Industrial Cyber Physical Systems (ICPS)},\n  OPTeditor       = {},\n  title        = {A Smart Factory Setup based on the RoboCup Logistics League}, \n  year         = {2020},\n  month        = {June 10-12},\n  location     = {Tampere, Finland - ONLINE},\n  pages        = {297--302},\n  abstract     = {In this paper we present Smart-fACtory, a setup for\n                  a research and teaching facility in industrial\n                  robotics that is based on the RoboCup Logistics\n                  League. It is driven by the need for developing and\n                  applying solutions for digital production.\n                  Digitization receives constantly increasing\n                  attention in many areas, especially in industry. The\n                  common theme is to make things smart by using\n                  intelligent computer technology. Especially in the\n                  last decade there have been many attempts to improve\n                  existing processes in factories, for example, in\n                  production logistics, also with deploying\n                  cyber-physical systems. An initiative that explores\n                  challenges and opportunities for robots in such a\n                  setting is the RoboCup Logistics League. Since its\n                  foundation in 2012 it is an international effort for\n                  research and education in an intra-warehouse\n                  logistics scenario. During seven years of\n                  competition a lot of knowledge and experience\n                  regarding autonomous robots was gained. This\n                  knowledge and experience shall provide the basis for\n                  further research in challenges of future\n                  production. The focus of our Smart-fACtory is to\n                  create a stimulating environment for research on\n                  logistics robotics, for teaching activities in\n                  computer science and electrical engineering\n                  programmes as well as for industrial users to study\n                  and explore the feasibility of future\n                  technologies. Building on a very successful history\n                  in the RoboCup Logistics League we aim to provide\n                  stakeholders with a dedicated facility oriented at\n                  their individual needs.},\n  keywords     = {logistics robotics, Industry 4.0, smart factory,\n                  RoboCup Logistics League, RCLL},\n  doi          = {10.1109/ICPHYS.2020.xxx},\n}\n
\n
\n\n\n
\n In this paper we present Smart-fACtory, a setup for a research and teaching facility in industrial robotics that is based on the RoboCup Logistics League. It is driven by the need for developing and applying solutions for digital production. Digitization receives constantly increasing attention in many areas, especially in industry. The common theme is to make things smart by using intelligent computer technology. Especially in the last decade there have been many attempts to improve existing processes in factories, for example, in production logistics, also with deploying cyber-physical systems. An initiative that explores challenges and opportunities for robots in such a setting is the RoboCup Logistics League. Since its foundation in 2012 it is an international effort for research and education in an intra-warehouse logistics scenario. During seven years of competition a lot of knowledge and experience regarding autonomous robots was gained. This knowledge and experience shall provide the basis for further research in challenges of future production. The focus of our Smart-fACtory is to create a stimulating environment for research on logistics robotics, for teaching activities in computer science and electrical engineering programmes as well as for industrial users to study and explore the feasibility of future technologies. Building on a very successful history in the RoboCup Logistics League we aim to provide stakeholders with a dedicated facility oriented at their individual needs.\n
\n\n\n
\n\n\n
A Self-Driving Car Architecture in ROS2. Reke, M.; Peter, D.; Schulte-Tigges, J.; Schiffer, S.; Ferrein, A.; Walter, T.; and Matheis, D. In 2020 International SAUPEC/RobMech/PRASA Conference, pages 1–6, January 2020.
\n\n\n\n \n \n \"A ieeexpl\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{ Reke-etAl_RobMech2020_Self-Driving-Car-Arch-ROS2,\n  author       = {M. {Reke} and D. {Peter} and J. {Schulte-Tigges} and\n                  S. {Schiffer} and A. {Ferrein} and T. {Walter} and\n                  D. {Matheis}},\n  booktitle    = {2020 International SAUPEC/RobMech/PRASA Conference}, \n  title        = {A Self-Driving Car Architecture in {ROS2}}, \n  month        = {Jan},\n  year         = {2020},\n  pages        = {1--6},\n  doi          = {10.1109/SAUPEC/RobMech/PRASA48453.2020.9041020},\n  url_IEEEXpl  = {https://ieeexplore.ieee.org/abstract/document/9041020},\n  ISBN         = {Electronic ISBN: 978-1-7281-4162-6, Print on Demand(PoD) ISBN: 978-1-7281-4163-3},\n  keywords     = {ADP; UPNS4D; ARTUS; automobiles; control engineering\n                  computing; mobile robots; safety-critical software;\n                  autonomous driving; robotic software; self-driving\n                  cars; automated real passenger car; self-driving car\n                  architecture; Self-driving car; autonomous driving;\n                  architecture; robot operating system; ROS; ROS2;\n                  LKAS; V2X},\n  abstract     = {In this paper we report on an architecture for a\n                  self-driving car that is based on ROS2. Self-driving\n                  cars have to take decisions based on their sensory\n                  input in real-time, providing high reliability with\n                  a strong demand in functional safety. In principle,\n                  self-driving cars are robots. However, typical robot\n                  software, in general, and the previous version of\n                  the Robot Operating System (ROS), in particular,\n                  does not always meet these requirements. With the\n                  successor ROS2 the situation has changed and it\n                  might be considered as a solution for automated and\n                  autonomous driving. Existing robotic software based\n                  on ROS was not ready for safety critical\n                  applications like self-driving cars. We propose an\n                  architecture for using ROS2 for a self-driving car\n                  that enables safe and reliable real-time behaviour,\n                  but keeping the advantages of ROS such as a\n                  distributed architecture and standardised message\n                  types. First experiments with an automated real\n                  passenger car at lower and higher speed-levels show\n                  that our approach seems feasible for autonomous\n                  driving under the necessary real-time conditions.},\n}\n
\n
\n\n\n
\n In this paper we report on an architecture for a self-driving car that is based on ROS2. Self-driving cars have to make decisions based on their sensory input in real time, providing high reliability and meeting strong demands on functional safety. In principle, self-driving cars are robots. However, typical robot software in general, and the previous version of the Robot Operating System (ROS) in particular, does not always meet these requirements. With the successor ROS2 the situation has changed, and it may be considered a solution for automated and autonomous driving. Existing robotic software based on ROS was not ready for safety-critical applications like self-driving cars. We propose an architecture for using ROS2 in a self-driving car that enables safe and reliable real-time behaviour while keeping the advantages of ROS, such as a distributed architecture and standardised message types. First experiments with an automated real passenger car at lower and higher speeds show that our approach seems feasible for autonomous driving under the necessary real-time conditions.\n
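To make the distributed pub/sub idea behind such an architecture concrete, here is a minimal sketch of a ROS2 node in Python using rclpy (ROS2's Python client library). The node name, topic names, and the trivial control law are illustrative assumptions, not the system described in the paper.

```python
# Minimal ROS2 node sketch: one node in a distributed system exchanging
# standardised message types (sensor_msgs/LaserScan in, geometry_msgs/Twist out).
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class LaneKeepingNode(Node):
    """Subscribes to sensor data and publishes driving commands."""

    def __init__(self):
        super().__init__('lane_keeping_node')
        self.subscription = self.create_subscription(
            LaserScan, 'scan', self.scan_callback, 10)
        self.publisher = self.create_publisher(Twist, 'cmd_vel', 10)

    def scan_callback(self, msg: LaserScan):
        cmd = Twist()
        cmd.linear.x = 1.0  # placeholder control law, not the paper's controller
        self.publisher.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(LaneKeepingNode())


if __name__ == '__main__':
    main()
```

In a real deployment, the quality-of-service settings (the `10` queue depths above) are where ROS2's real-time and reliability guarantees are configured, which is one of the key differences from ROS1 the paper builds on.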
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2019\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n A System for Continuous Underground Site Mapping and Exploration.\n \n \n \n \n\n\n \n Ferrein, A.; Scholl, I.; Neumann, T.; Krückel, K.; and Schiffer, S.\n\n\n \n\n\n\n In Reyhanoglu, M.; and Cubber, G. D., editor(s), Unmanned Robotic Systems and Applications, 4. IntechOpen, Rijeka, 2019.\n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n \n \"A intech\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@incollection{Ferrein:etAl_InTechOpen2019_A-System-for-Continuous,\n  author       = {Alexander Ferrein and Ingrid Scholl and Tobias Neumann and Kai Kr{\\"u}ckel and Stefan Schiffer},\n  title        = {{A System for Continuous Underground Site Mapping and Exploration}},\n  booktitle    = {Unmanned Robotic Systems and Applications},\n  publisher    = {IntechOpen},\n  address      = {Rijeka},\n  year         = {2019},\n  editor       = {Mahmut Reyhanoglu and Geert De Cubber},\n  chapter      = {4},\n  doi          = {10.5772/intechopen.85859},\n  url          = {https://doi.org/10.5772/intechopen.85859},\n  url_InTech   = {https://www.intechopen.com/chapters/67435},\n  keywords     = {UPNS4D},\n  abstract     = {3D mapping becomes ever more important not only in\n                  industrial mobile robotic applications for AGV and\n                  production vehicles but also for search and rescue\n                  scenarios. In this chapter we report on our work of\n                  mapping and exploring underground mines. Our\n                  contribution is two-fold: First, we present our\n                  custom-built 3D laser range platform SWAP and\n                  compare it against an architectural laser\n                  scanner. The advantages are that the mapping vehicle\n                  can scan in a continuous mode and does not have to\n                  do stop-and-go scanning. The second contribution is\n                  the mapping tool mapit which supports and automates\n                  the registration of large sets of point clouds. The\n                  idea behind mapit is to keep the raw point cloud\n                  data as a basis for any map generation and only\n                  store all operations executed on the point\n                  clouds. This way the initial data do not get lost,\n                  and improvements on low-level date (e.g. improved\n                  transforms through loop closure) will automatically\n                  improve the final maps. Finally, we also present\n                  methods for visualization and interactive\n                  exploration of such maps.},\n}\n
\n
\n\n\n
\n 3D mapping becomes ever more important, not only in industrial mobile robotic applications for AGVs and production vehicles but also in search and rescue scenarios. In this chapter we report on our work on mapping and exploring underground mines. Our contribution is two-fold: First, we present our custom-built 3D laser range platform SWAP and compare it against an architectural laser scanner. The advantage is that the mapping vehicle can scan in a continuous mode and does not have to do stop-and-go scanning. The second contribution is the mapping tool mapit, which supports and automates the registration of large sets of point clouds. The idea behind mapit is to keep the raw point cloud data as the basis for any map generation and to store only the operations executed on the point clouds. This way the initial data do not get lost, and improvements on low-level data (e.g. improved transforms through loop closure) will automatically improve the final maps. Finally, we also present methods for visualization and interactive exploration of such maps.\n
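The operation-log idea behind mapit can be illustrated in a few lines. The following is a toy Python/NumPy sketch under invented names, not the actual mapit API: raw clouds stay immutable, only the operations on them are stored, so the map can be rebuilt whenever a transform improves.

```python
# Toy sketch of an operation-log map store: raw point clouds are kept
# untouched, and the map is re-derived from the logged operations.
import numpy as np


class OperationLogMap:
    def __init__(self):
        self.raw_clouds = []   # immutable raw point clouds (N x 3 arrays)
        self.transforms = []   # one 4x4 homogeneous transform per cloud

    def add_cloud(self, cloud: np.ndarray):
        self.raw_clouds.append(cloud)
        self.transforms.append(np.eye(4))  # identity until registered

    def register(self, index: int, transform: np.ndarray):
        # Overwrite the stored operation, e.g. after loop closure
        # yields a better transform; the raw data never change.
        self.transforms[index] = transform

    def build_map(self) -> np.ndarray:
        # Re-derive the merged map from raw data plus the operation log.
        parts = []
        for cloud, T in zip(self.raw_clouds, self.transforms):
            homo = np.hstack([cloud, np.ones((len(cloud), 1))])
            parts.append((homo @ T.T)[:, :3])
        return np.vstack(parts)
```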
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Calibration of a Rotating or Revolving Platform with a LiDAR Sensor.\n \n \n \n \n\n\n \n Claer, M.; Ferrein, A.; and Schiffer, S.\n\n\n \n\n\n\n Applied Sciences, 9(11): 2238. January 2019.\n \n\n\n\n
\n\n\n\n \n \n \"CalibrationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{Claer:Ferrein:Schiffer_ApplSci2019_Calibration,\n  author       = {Claer, Mario and Ferrein, Alexander and Schiffer, Stefan},\n  title        = {Calibration of a {{Rotating}} or {{Revolving Platform}} with a {{LiDAR Sensor}}},\n  journal      = {Applied Sciences},\n  year         = {2019},\n  month        = jan,\n  volume       = {9},\n  number       = {11},\n  pages        = {2238},\n  ARTICLE-NUMBER = {2238},\n  doi          = {10.3390/app9112238},\n  URL          = {https://www.mdpi.com/2076-3417/9/11/2238},\n  ISSN         = {2076-3417},\n  language     = {en},\n  copyright    = {http://creativecommons.org/licenses/by/3.0/},\n  keywords     = {UPNS4D,ARTUS,calibration,extrinsic parameter,LiDAR,LRF},\n  abstract     = {Perceiving its environment in 3D is an important\n                  ability for a modern robot. Today, this is often\n                  done using LiDARs which come with a strongly limited\n                  field of view (FOV), however. To extend their FOV,\n                  the sensors are mounted on driving vehicles in\n                  several different ways. This allows 3D perception\n                  even with 2D LiDARs if a corresponding localization\n                  system or technique is available. Another popular\n                  way to gain most information of the scanners is to\n                  mount them on a rotating carrier platform. In this\n                  way, their measurements in different directions can\n                  be collected and transformed into a common frame, in\n                  order to achieve a nearly full spherical\n                  perception. However, this is only possible if the\n                  kinetic chains of the platforms are known exactly,\n                  that is, if the LiDAR pose w.r.t. to its rotation\n                  center is well known. The manual measurement of\n                  these chains is often very cumbersome or sometimes\n                  even impossible to do with the necessary\n                  precision. Our paper proposes a method to calibrate\n                  the extrinsic LiDAR parameters by decoupling the\n                  rotation from the full six degrees of freedom\n                  transform and optimizing both separately. Thus, one\n                  error measure for the orientation and one for the\n                  translation with known orientation are minimized\n                  subsequently with a combination of a consecutive\n                  grid search and a gradient descent. Both error\n                  measures are inferred from spherical calibration\n                  targets. Our experiments with the method suggest\n                  that the main influences on the calibration results\n                  come from the the distance to the calibration\n                  targets, the accuracy of their center point\n                  estimation and the search grid resolution. However,\n                  our proposed calibration method improves the\n                  extrinsic parameters even with unfavourable\n                  configurations and from inaccurate initial pose\n                  guesses.},\n}\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%  2019\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n
\n
\n\n\n
\n Perceiving its environment in 3D is an important ability for a modern robot. Today, this is often done using LiDARs, which, however, come with a strongly limited field of view (FOV). To extend their FOV, the sensors are mounted on driving vehicles in several different ways. This allows 3D perception even with 2D LiDARs if a corresponding localization system or technique is available. Another popular way to gain the most information from the scanners is to mount them on a rotating carrier platform. In this way, their measurements in different directions can be collected and transformed into a common frame, in order to achieve a nearly full spherical perception. However, this is only possible if the kinematic chains of the platforms are known exactly, that is, if the LiDAR pose w.r.t. its rotation center is well known. The manual measurement of these chains is often very cumbersome or sometimes even impossible to do with the necessary precision. Our paper proposes a method to calibrate the extrinsic LiDAR parameters by decoupling the rotation from the full six-degrees-of-freedom transform and optimizing both separately. Thus, one error measure for the orientation and one for the translation with known orientation are minimized subsequently with a combination of a consecutive grid search and a gradient descent. Both error measures are inferred from spherical calibration targets. Our experiments with the method suggest that the main influences on the calibration results come from the distance to the calibration targets, the accuracy of their center point estimation and the search grid resolution. However, our proposed calibration method improves the extrinsic parameters even with unfavourable configurations and from inaccurate initial pose guesses.\n
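The two-stage scheme the abstract describes (a coarse grid search followed by gradient descent, run first for the rotation and then for the translation with the orientation held fixed) can be sketched generically as below. The error functions `rot_error` and `trans_error` are stand-ins for the target-based error measures of the paper; everything here is an assumption for illustration.

```python
# Generic sketch of grid search + numerical gradient descent, applied
# separately to rotation and translation parameters.
import numpy as np


def grid_search(error_fn, lo, hi, steps):
    """Evaluate error_fn on a regular grid and return the best point."""
    axes = [np.linspace(l, h, steps) for l, h in zip(lo, hi)]
    points = np.stack(np.meshgrid(*axes), -1).reshape(-1, len(lo))
    errors = np.array([error_fn(p) for p in points])
    return points[np.argmin(errors)]


def gradient_descent(error_fn, p, lr=1e-3, iters=500, eps=1e-6):
    """Refine p by descending a central-difference gradient estimate."""
    p = np.asarray(p, dtype=float)
    basis = np.eye(len(p))
    for _ in range(iters):
        grad = np.array([
            (error_fn(p + eps * basis[i]) - error_fn(p - eps * basis[i]))
            / (2 * eps)
            for i in range(len(p))])
        p -= lr * grad
    return p

# Hypothetical usage, mirroring the paper's decoupling:
# rot = gradient_descent(rot_error, grid_search(rot_error, [-0.1]*3, [0.1]*3, 11))
# trans = gradient_descent(trans_error, grid_search(trans_error, [-0.5]*3, [0.5]*3, 11))
```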
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Die Extraktion bergbaulich relevanter Merkmale aus 3D-Punktwolken eines untertagetauglichen mobilen Multisensorsystems.\n \n \n \n\n\n \n Donner, R.; Rabel, M.; Scholl, I.; Ferrein, A.; Donner, M.; Geier, A.; John, A.; Köhler, C.; and Varga, S.\n\n\n \n\n\n\n In Tagungsband Geomonitoring, pages 91 – 110, 2019. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{DonnerRabelScholletal.2019,\n  author    = {Ralf Donner and Matthias Rabel and Ingrid Scholl and Alexander Ferrein and Marc Donner and Andreas Geier and Andr{\'e} John and Christian K{\"o}hler and Sebastian Varga},\n  title     = {Die Extraktion bergbaulich relevanter Merkmale aus 3D-Punktwolken eines untertagetauglichen mobilen Multisensorsystems},\n  booktitle = {Tagungsband Geomonitoring},\n  pages     = {91 -- 110},\n  doi       = {10.15488/4515},\n  year      = {2019},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Proceedings of the 11th Cognitive Robotics Workshop 2018, co-located with 16th International Conference on Principles of Knowledge Representation and Reasoning, CogRob@KR 2018, Tempe, AZ, USA, October 27th, 2018.\n \n \n \n \n\n\n \n Steinbauer, G.; and Ferrein, A.,\n editors.\n \n\n\n \n\n\n\n Volume 2325, of CEUR Workshop Proceedings. CEUR-WS.org. 2019.\n \n\n\n\n
\n\n\n\n \n \n \"ProceedingsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@proceedings{2018cogrob,\n  editor    = {Gerald Steinbauer and\n               Alexander Ferrein},\n  title     = {Proceedings of the 11th Cognitive Robotics Workshop 2018, co-located\n               with 16th International Conference on Principles of Knowledge Representation\n               and Reasoning, CogRob@KR 2018, Tempe, AZ, USA, October 27th, 2018},\n  series    = {{CEUR} Workshop Proceedings},\n  volume    = {2325},\n  publisher = {CEUR-WS.org},\n  year      = {2019},\n  url       = {http://ceur-ws.org/Vol-2325},\n  urn       = {urn:nbn:de:0074-2325-7},\n  timestamp = {Tue, 28 May 2019 16:23:42 +0200},\n  biburl    = {https://dblp.org/rec/bib/conf/kr/2018cogrob},\n  bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n\n\n\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%  2018\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n An Efficient Hashing Algorithm for NN Problem in HD Spaces.\n \n \n \n\n\n \n Alhwarin, F.; Ferrein, A.; and Scholl, I.\n\n\n \n\n\n\n In De Marsico, M.; di Baja, G. S.; and Fred, A. L. N., editor(s), Pattern Recognition Applications and Methods - 7th International Conference, ICPRAM 2018, Funchal, Madeira, Portugal, January 16-18, 2018, Revised Selected Papers, volume 11351, of Lecture Notes in Computer Science, pages 101–115, 2019. Springer\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{AlhwarinFS18a,\n  author    = {Faraj Alhwarin and\n               Alexander Ferrein and\n               Ingrid Scholl},\n  title     = {An Efficient Hashing Algorithm for {NN} Problem in {HD} Spaces},\n  booktitle = {Pattern Recognition Applications and Methods - 7th International Conference,\n               {ICPRAM} 2018, Funchal, Madeira, Portugal, January 16-18, 2018, Revised\n               Selected Papers},\n  pages     = {101--115},\n  editor    = {Maria {De Marsico} and\n               Gabriella Sanniti di Baja and\n               Ana L. N. Fred},\n  series    = {Lecture Notes in Computer Science},\n  volume    = {11351},\n  publisher = {Springer},\n  year      = {2019},\n}\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (9)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Proceedings of the Workshop on Teaching Robotics with ROS (held at ERF 2018) (TRROS2018).\n \n \n \n \n\n\n \n Schiffer, S.; Ferrein, A.; Bharatheesha, M.; and Corbato, C. H.,\n editors.\n \n\n\n \n\n\n\n of CEUR Workshop Proceedings. Aachen, 2018.\n \n\n\n\n
\n\n\n\n \n \n \"ProceedingsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@proceedings{TRROS2018,\n  booktitle    = {Workshop on Teaching Robotics with ROS (held at the European Robotics Forum) (TRROS2018)},\n  title        = {Proceedings of the Workshop on Teaching Robotics with ROS (held at ERF 2018) (TRROS2018)},\n  year         = 2018,\n  editor       = {Stefan Schiffer and Alexander Ferrein and Mukunda Bharatheesha and Carlos Hern{\\'a}ndez Corbato},\n  number       = 2329,\n  series       = {CEUR Workshop Proceedings},\n  address      = {Aachen},\n  issn         = {1613-0073},\n  url          = {http://ceur-ws.org/Vol-2329/},\n  venue        = {Tampere, Finland},\n  eventdate    = {2018-03-15},\n  keywords     = {ROSin},\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n golog++ : An Integrative System Design.\n \n \n \n \n\n\n \n Mataré, V.; Schiffer, S.; and Ferrein, A.\n\n\n \n\n\n\n In Steinbauer, G.; and Ferrein, A., editor(s), Proceedings of the 11th Cognitive Robotics Workshop 2018 (CogRob), co-located with the 16th International Conference on Principles of Knowledge Representation and Reasoning, CogRob@KR 2018, volume 2325, of CEUR Workshop Proceedings, pages 29–35, Aachen, 2018. CEUR-WS.org\n \n\n\n\n
\n\n\n\n \n \n \"golog++Paper\n  \n \n \n \"golog++ proc\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{Matare:EtAl:CogRob2018:gologpp,\n  title     = {golog++ : An Integrative System Design},\n  author    = {Victor Matar{\\'{e}} and Stefan Schiffer and Alexander Ferrein},\n  pages     = {29--35},\n  booktitle = {Proceedings of the 11th Cognitive Robotics Workshop 2018 (CogRob),\n               co-located with the 16th International Conference on\n\t       Principles of Knowledge Representation and Reasoning, CogRob@KR 2018},\n  year      = 2018,\n  location  = {Tempe, AZ, USA},\n  editor    = {Gerald Steinbauer and Alexander Ferrein},\n  volume    = {2325},\n  publisher = {CEUR-WS.org},\n  series    = {CEUR Workshop Proceedings},\n  address   = {Aachen},\n  issn      = {1613-0073},\n  url       = {http://ceur-ws.org/Vol-2325/#paper-06},\n  url_proc  = {http://ceur-ws.org/Vol-2325/},\n  venue     = {Tempe, AZ, USA},\n  eventdate = {2018-10-27},\n  keywords     = {ConTrAkt, high-level programming, golog, agent programming},\n  abstract     = {Golog is a language family with great untapped\n                  potential. We argue that it could become a practical\n                  and widely usable high-level control language, if\n                  only it had an implementation that is usable in a\n                  production environment. In this paper, we do not\n                  specify another Golog interpreter, but an extensible\n                  C++ framework that defines a coherent grammar,\n                  developer tool support, internal/external\n                  consistency checking with clean error handling, and\n                  a simple, portable platform interface. The framework\n                  specifically does not implement language\n                  semantics. For this purpose we can simply hook into\n                  any of the many existing implementations that do\n                  very well in implementing language semantics, but\n                  fall short in regards to interfacing, portability,\n                  usability and practicality in general.},\n}\n
\n
\n\n\n
\n Golog is a language family with great untapped potential. We argue that it could become a practical and widely usable high-level control language, if only it had an implementation that is usable in a production environment. In this paper, we do not specify another Golog interpreter, but an extensible C++ framework that defines a coherent grammar, developer tool support, internal/external consistency checking with clean error handling, and a simple, portable platform interface. The framework specifically does not implement language semantics. For this purpose, we can simply hook into any of the many existing implementations that do very well in implementing language semantics, but fall short with regard to interfacing, portability, usability and practicality in general.\n
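The design idea (the framework owns representation and checking, while semantics are delegated to pluggable backends) can be illustrated with a toy sketch. The actual golog++ framework is C++; the Python below, with invented class and method names, only mirrors the architectural split described in the abstract.

```python
# Toy sketch of a framework/backend split: the framework validates the
# programme, any existing interpreter supplies the execution semantics.
from abc import ABC, abstractmethod


class SemanticsBackend(ABC):
    """An existing Golog interpreter would be wrapped behind this interface."""

    @abstractmethod
    def execute(self, program: list) -> bool: ...


class ExternalInterpreterBackend(SemanticsBackend):
    def execute(self, program):
        # In reality this would hand the programme to an external
        # interpreter process; here we only print it.
        print('delegating to external interpreter:', program)
        return True


def run(program, backend: SemanticsBackend):
    # The framework performs consistency checking before delegating.
    if not all(isinstance(op, str) for op in program):
        raise ValueError('malformed program')
    return backend.execute(program)


# run(['goto(kitchen)', 'pickup(cup)'], ExternalInterpreterBackend())
```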
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ERIKA – Early Robotics Introduction at Kindergarten Age.\n \n \n \n \n\n\n \n Schiffer, S.; and Ferrein, A.\n\n\n \n\n\n\n Multimodal Technologies and Interaction, 2(4). 2018.\n \n\n\n\n
\n\n\n\n \n \n \"ERIKAPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Article{ Schiffer:Ferrein:MTI2018:ERiKA,\n  author       = {Schiffer, Stefan and Ferrein, Alexander},\n  title        = {{ERIKA} -- {Early Robotics Introduction at Kindergarten Age}},\n  journal      = {Multimodal Technologies and Interaction},\n  year         = {2018},\n  volume       = {2},\n  number       = {4},\n  article-number = {64},\n  url          = {http://www.mdpi.com/2414-4088/2/4/64},\n  issn         = {2414-4088},\n  abstract     = {In this work, we report on our attempt to design and\n                  implement an early introduction to basic robotics\n                  principles for children at kindergarten age. One of\n                  the main challenges of this effort is to explain\n                  complex robotics contents in a way that pre-school\n                  children could follow the basic principles and ideas\n                  using examples from their world of experience. What\n                  sets apart our effort from other work is that part\n                  of the lecturing is actually done by a robot itself\n                  and that a quiz at the end of the lesson is done\n                  using robots as well. The humanoid robot Pepper from\n                  Softbank, which is a great platform for human-robot\n                  interaction experiments, was used to present a\n                  lecture on robotics by reading out the contents to\n                  the children making use of its speech synthesis\n                  capability. A quiz in a Runaround-game-show style\n                  after the lecture activated the children to recap\n                  the contents they acquired about how mobile robots\n                  work in principle. In this quiz, two LEGO Mindstorm\n                  EV3 robots were used to implement a strongly\n                  interactive scenario. Besides the thrill of being\n                  exposed to a mobile robot that would also react to\n                  the children, they were very excited and at the same\n                  time very concentrated. We got very positive\n                  feedback from the children as well as from their\n                  educators. To the best of our knowledge, this is one\n                  of only few attempts to use a robot like Pepper not\n                  as a tele-teaching tool, but as the teacher itself\n                  in order to engage pre-school children with complex\n                  robotics contents.},\n  doi          = {10.3390/mti2040064},\n}\n\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%  2019\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%  2019\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%  LEGACY\n% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%  2019\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n
\n
\n\n\n
\n In this work, we report on our attempt to design and implement an early introduction to basic robotics principles for children at kindergarten age. One of the main challenges of this effort is to explain complex robotics content in a way that pre-school children can follow the basic principles and ideas, using examples from their world of experience. What sets our effort apart from other work is that part of the lecturing is actually done by a robot itself and that a quiz at the end of the lesson is done using robots as well. The humanoid robot Pepper from SoftBank, which is a great platform for human-robot interaction experiments, was used to present a lecture on robotics by reading out the content to the children, making use of its speech synthesis capability. A quiz in a Runaround-game-show style after the lecture activated the children to recap the content they had acquired about how mobile robots work in principle. In this quiz, two LEGO Mindstorms EV3 robots were used to implement a strongly interactive scenario. Besides the thrill of being exposed to a mobile robot that would also react to them, the children were very excited and at the same time very focused. We got very positive feedback from the children as well as from their educators. To the best of our knowledge, this is one of only a few attempts to use a robot like Pepper not as a tele-teaching tool but as the teacher itself in order to engage pre-school children with complex robotics content.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Enhancing Software and Hardware Reliability for a Successful Participation in the RoboCup Logistics League 2017.\n \n \n \n\n\n \n Hofmann, T.; Mataré, V.; Neumann, T.; Schönitz, S.; Henke, C.; Limpert, N.; Niemueller, T.; Ferrein, A.; Jeschke, S.; and Lakemeyer, G.\n\n\n \n\n\n\n In Akiyama, H.; Obst, O.; Sammut, C.; and Tonidandel, F., editor(s), RoboCup 2017: Robot World Cup XXI, volume 11175, of Lecture Notes in Computer Science, pages 486–497, 2018. Springer\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{HofmannMNSHLNFJ18,\n  author    = {Till Hofmann and Victor Matar{\\'{e}} and Tobias Neumann\n                  and Sebastian Sch{\\"{o}}nitz and Christoph Henke and\n                  Nicolas Limpert and Tim Niemueller and Alexander\n                  Ferrein and Sabina Jeschke and Gerhard Lakemeyer},\n  title     = {Enhancing Software and Hardware Reliability for a\n                  Successful Participation in the RoboCup Logistics\n                  League 2017},\n  editor    = {Hidehisa Akiyama and Oliver Obst and Claude Sammut and\n                  Flavio Tonidandel},\n  booktitle = {RoboCup 2017: Robot World Cup {XXI}},\n  pages     = {486--497},\n  series    = {Lecture Notes in Computer Science},\n  volume    = {11175},\n  publisher = {Springer},\n  year      = {2018},\n  keywords  = {ConTrAkt},\n}\n\n\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%  2017\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The ROSIN Education Concept - Fostering ROS Industrial-Related Robotics Education in Europe.\n \n \n \n\n\n \n Ferrein, A.; Schiffer, S.; and Kallweit, S.\n\n\n \n\n\n\n In Ollero, A.; Sanfeliu, A.; Montano, L.; Lau, N.; and Cardeira, C., editor(s), ROBOT 2017: Third Iberian Robotics Conference - Volume 2, volume 694, of Advances in Intelligent Systems and Computing, pages 370–381, 2018. Springer\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Ferrein0K17,\n  author    = {Alexander Ferrein and Stefan Schiffer and Stephan\n                  Kallweit},\n  title     = {The {ROSIN} Education Concept - Fostering {ROS}\n                  Industrial-Related Robotics Education in Europe},\n  booktitle = {{ROBOT} 2017: Third Iberian Robotics Conference - Volume 2},\n  pages     = {370--381},\n  editor    = {An{\'{\i}}bal Ollero and Alberto Sanfeliu and Luis\n                  Montano and Nuno Lau and Carlos Cardeira},\n  series    = {Advances in Intelligent Systems and Computing},\n  volume    = {694},\n  publisher = {Springer},\n  year      = {2018},\n  keywords  = {ROSin},\n}\n\n\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%  2016\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Direct Volume Rendering in Virtual Reality.\n \n \n \n\n\n \n Scholl, I.; Suder, S.; and Schiffer, S.\n\n\n \n\n\n\n In Maier, A.; Deserno, T. M.; Handels, H.; Maier-Hein, K. H.; Palm, C.; and Tolxdorff, T., editor(s), Bildverarbeitung für die Medizin 2018, pages 297–302, Berlin, Heidelberg, March 2018. Springer\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{ Scholl:Suder:Schiffer:BVM2018:MedicVR,\r\n  title        = "{Direct Volume Rendering in Virtual Reality}",\r\n  author       = "Scholl, Ingrid and Suder, Sebastian and Schiffer, Stefan",\r\n  editor       = "Maier, Andreas and Deserno, Thomas M. and Handels, Heinz and Maier-Hein, Klaus Hermann and Palm, Christoph and Tolxdorff, Thomas",\r\n  booktitle    = "Bildverarbeitung f{\\"u}r die Medizin 2018",\r\n  year         = "2018",\r\n  month        = "March",\r\n  day          = "10--12",\r\n  publisher    = "Springer",\r\n  address      = "Berlin, Heidelberg",\r\n  pages        = "297--302",\r\n  isbn         = "978-3-662-56537-7",\r\n  doi          = "10.1007/978-3-662-56537-7_79",\r\n  springerlink = "https://link.springer.com/chapter/10.1007/978-3-662-56537-7_79",\r\n  abstract     = "Direct Volume Rendering (DVR) techniques are used to\r\n                  visualize surfaces from 3D volume data sets, without\r\n                  computing a 3D geometry. Several surfaces can be\r\n                  classified using a transfer function by assigning\r\n                  optical properties like color and opacity\r\n                  (RGB$\\alpha$) to the voxel data. Finding a good\r\n                  transfer function in order to separate specific\r\n                  structures from the volume data set, is in general a\r\n                  manual and time-consuming procedure, and requires\r\n                  detailed knowledge of the data and the image\r\n                  acquisition technique. In this paper, we present a\r\n                  new Virtual Reality (VR) application based on the\r\n                  HTC Vive headset. Onedimensional transfer functions\r\n                  can be designed in VR while continuously rendering\r\n                  the stereoscopic image pair through massively\r\n                  parallel GPUbased ray casting shader techniques. The\r\n                  usability of the VR application is evaluated.",\r\n}\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2017\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n
\n
\n\n\n
\n Direct Volume Rendering (DVR) techniques are used to visualize surfaces from 3D volume data sets without computing a 3D geometry. Several surfaces can be classified using a transfer function by assigning optical properties like color and opacity (RGB$α$) to the voxel data. Finding a good transfer function in order to separate specific structures from the volume data set is in general a manual and time-consuming procedure and requires detailed knowledge of the data and the image acquisition technique. In this paper, we present a new Virtual Reality (VR) application based on the HTC Vive headset. One-dimensional transfer functions can be designed in VR while continuously rendering the stereoscopic image pair through massively parallel GPU-based ray casting shader techniques. The usability of the VR application is evaluated.\n
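As a plain-Python illustration of the pipeline the abstract describes, the sketch below applies a 1D transfer function and composites samples front to back along a single ray. The ramp-style transfer function is an invented example; the paper's implementation runs as a GPU shader on stereoscopic image pairs.

```python
# CPU sketch of direct volume rendering along one ray: classify each
# sample with a 1D transfer function, then composite front to back.
import numpy as np


def transfer_function(intensity):
    """Map scalar intensity in [0, 1] to RGBA (colour plus opacity)."""
    rgba = np.zeros(np.shape(intensity) + (4,))
    rgba[..., 0] = intensity                        # red ramp (illustrative)
    rgba[..., 3] = np.clip(intensity - 0.3, 0, 1)   # opaque above a threshold
    return rgba


def cast_ray(volume_samples):
    """Front-to-back alpha compositing of samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for s in volume_samples:
        rgba = transfer_function(np.array(s))
        color += (1 - alpha) * rgba[3] * rgba[:3]
        alpha += (1 - alpha) * rgba[3]
        if alpha > 0.99:  # early ray termination
            break
    return color, alpha


# Example: a ray through a linearly increasing density field.
# cast_ray(np.linspace(0.0, 1.0, 64))
```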
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Constraint-based online transformation of abstract plans into executable robot actions.\n \n \n \n \n\n\n \n Hofmann, T.; Mataré, V.; Schiffer, S.; Ferrein, A.; and Lakemeyer, G.\n\n\n \n\n\n\n In Srivastava, S.; Zhang, S.; Hawes, N.; Karpas, E.; Konidaris, G.; Leonetti, M.; Sridharan, M.; and Wyatt, J., editor(s), Proceedings of the 2018 AAAI Spring Symposium on Integrating Representation, Reasoning, Learning, and Execution for Goal Directed Autonomy, pages 549–553, March 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Constraint-based symposium\n  \n \n \n \"Constraint-based session\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 6 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{HofmannEtAl:AAAI-SS2018-SIRLE:ConTrAkt,\n  author       = {Till Hofmann and Victor Matar{\\'e} and Stefan Schiffer and Alexander Ferrein and Gerhard Lakemeyer},\n  title        = {Constraint-based online transformation of abstract plans into executable robot actions},\n  booktitle    = {Proceedings of the 2018 AAAI Spring Symposium on\n                  Integrating Representation, Reasoning, Learning, and Execution for Goal Directed Autonomy},\n  editor       = {Siddharth Srivastava and Shiqi Zhang and Nick Hawes and Erez Karpas\n                  and George Konidaris and Matteo Leonetti and Mohan Sridharan and Jeremy Wyatt},\n  year         = {2018},\n  month        = {March},\n  day          = {26--28},\n  location     = {Stanford University, CA, USA},\n  pages        = {549--553},\n  url_Symposium = {http://siddharthsrivastava.net/sirle18/},\n  url_Session  = {https://aaai.org/Symposia/Spring/sss18symposia.php#ss06},\n  abstract     = {In this paper, we are concerned with making the\n                  execution of abstract action plans for robotic\n                  agents more robust. To this end, we propose to model\n                  the internals of a robot system and its ties to the\n                  actions that the robot can perform. Based on these\n                  models, we propose an online transformation of an\n                  abstract plan into executable actions conforming\n                  with system specifics. With our framework we aim to\n                  achieve two goals. For one, modeling the system\n                  internals is beneficial in its own right in order to\n                  achieve longer term autonomy as well as system\n                  transparency and comprehensibility. For another,\n                  separating the system details from determining the\n                  course of action on an abstract level leverages the\n                  use of planning for actual robotic systems.},\n}\n\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%  2017\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n%%  2016\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n\n
\n
\n\n\n
\n In this paper, we are concerned with making the execution of abstract action plans for robotic agents more robust. To this end, we propose to model the internals of a robot system and its ties to the actions that the robot can perform. Based on these models, we propose an online transformation of an abstract plan into executable actions that conform to system specifics. With our framework we aim to achieve two goals. For one, modeling the system internals is beneficial in its own right in order to achieve longer-term autonomy as well as system transparency and comprehensibility. For another, separating the system details from determining the course of action on an abstract level leverages the use of planning for actual robotic systems.\n
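A toy sketch of the core idea follows: an abstract plan step is expanded online into executable platform actions only if the modelled system constraints hold. The action names and constraints below are invented for illustration; the paper's models are constraint-based and far richer.

```python
# Toy online transformation of abstract plan steps into executable
# platform actions, guarded by modelled system constraints.
def transform(abstract_action: str, platform_state: dict) -> list:
    expansions = {
        'goto': ['localize', 'plan_path', 'follow_path'],
        'grasp': ['open_gripper', 'approach', 'close_gripper'],
    }
    constraints = {
        'goto': lambda s: s['battery'] > 0.2,
        'grasp': lambda s: s['gripper_calibrated'],
    }
    if not constraints[abstract_action](platform_state):
        raise RuntimeError(f'constraint violated for {abstract_action!r}')
    return expansions[abstract_action]


# transform('goto', {'battery': 0.8, 'gripper_calibrated': True})
# -> ['localize', 'plan_path', 'follow_path']
```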
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimized KinectFusion Algorithm for 3D Scanning Applications.\n \n \n \n \n\n\n \n Alhwarin, F.; Schiffer, S.; Ferrein, A.; and Scholl, I.\n\n\n \n\n\n\n In Wiebe, S.; Gamboa, H.; Fred, A. L. N.; and i Badia, S. B., editor(s), Proceedings of the 11th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2018), volume 2: BIOIMAGING, pages 50–57, 2018. SciTePress\n Best Paper Candidate (Short Listed)\n\n\n\n
\n\n\n\n \n \n \"Optimized scitepress\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{Alhwarin0FS18,\n  author    = {Faraj Alhwarin and Stefan Schiffer and Alexander Ferrein and Ingrid Scholl},\n  editor    = {Sheldon Wiebe and Hugo Gamboa and Ana L. N. Fred and Sergi Berm{\\'{u}}dez i Badia},\n  title     = {{Optimized KinectFusion Algorithm for 3D Scanning Applications}},\n  booktitle = {Proceedings of the 11th International Joint Conference on Biomedical\n               Engineering Systems and Technologies ({BIOSTEC} 2018) },\n  volume       = {2: BIOIMAGING},\n  pages     = {50--57},\n  publisher = {SciTePress},\n  isbn      = {978-989-758-278-3},\n  year      = {2018},\n  doi          = {10.5220/0006594700500057},\n  url_scitepress = {http://www.scitepress.org/PublicationsDetail.aspx?ID=dZs8lGPb760=&t=1},\n  note      = {Best Paper Candidate (Short Listed)},\n  keywords  = {BodyScanner},\n  abstract     = {KinectFusion is an effective way to reconstruct\n                  indoor scenes. It takes a depth image stream and\n                  uses the iterative closests point (ICP) method to\n                  estimate the camera motion. Then it merges the\n                  images in a volume to construct a 3D model. The\n                  model accuracy is not satisfactory for certain\n                  applications such as scanning a human body to\n                  provide information about bone structure health. For\n                  one reason, camera noise and noise in the ICP method\n                  limit the accuracy. For another, the error in\n                  estimating the global camera poses accumulates. In\n                  this paper, we present a method to optimize\n                  KinectFusion for 3D scanning in the above\n                  scenarios. We aim to reduce the noise influence on\n                  camera pose tracking. The idea is as follows: in our\n                  application scenarios we can always assume that\n                  either the camera rotates around the object to be\n                  scanned or that the object rotates in front of the\n                  camera. In both cases, the relative camera/object\n                  pose is located on a 3D-circle. Therefore, camera\n                  motion can be described as a rotation around a fixed\n                  axis passing through a fixed point. Since the axis\n                  and the center of rotation are always fixed, the\n                  error averaging principle can be utilized to reduce\n                  the noise impact and hence to enhance the 3D model\n                  accuracy of scanned object.},\n}\n\n
\n
\n\n\n
\n KinectFusion is an effective way to reconstruct indoor scenes. It takes a depth image stream and uses the iterative closest point (ICP) method to estimate the camera motion. Then it merges the images in a volume to construct a 3D model. The model accuracy is not satisfactory for certain applications, such as scanning a human body to provide information about bone structure health. For one, camera noise and noise in the ICP method limit the accuracy. For another, the error in estimating the global camera poses accumulates. In this paper, we present a method to optimize KinectFusion for 3D scanning in the above scenarios. We aim to reduce the noise influence on camera pose tracking. The idea is as follows: in our application scenarios we can always assume that either the camera rotates around the object to be scanned or that the object rotates in front of the camera. In both cases, the relative camera/object pose lies on a 3D circle. Therefore, camera motion can be described as a rotation around a fixed axis passing through a fixed point. Since the axis and the center of rotation are always fixed, the error averaging principle can be utilized to reduce the noise impact and hence to enhance the 3D model accuracy of the scanned object.\n
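The error-averaging idea can be sketched as follows: when the camera is known to rotate about one fixed axis through one fixed point, noisy per-frame pose estimates can be averaged into a single consistent axis and center. The pose handling below is a deliberate simplification for illustration, not the paper's estimator.

```python
# Sketch of averaging noisy pose estimates under a circular-motion
# assumption: extract each frame's rotation axis and average.
import numpy as np


def fit_rotation_axis(poses):
    """Average rotation axes and centers over noisy (R, t) pose estimates.

    poses: iterable of (R, t), R a 3x3 rotation matrix, t a 3-vector.
    """
    axes, centers = [], []
    for R, t in poses:
        # The rotation axis is the eigenvector of R with eigenvalue 1.
        w, v = np.linalg.eig(R)
        axis = np.real(v[:, np.argmin(np.abs(w - 1))])
        axis /= np.linalg.norm(axis)
        if axes and np.dot(axis, axes[0]) < 0:
            axis = -axis  # resolve the eigenvector's sign ambiguity
        axes.append(axis)
        centers.append(t)  # simplified: treat t as a point near the circle
    mean_axis = np.mean(axes, axis=0)
    mean_axis /= np.linalg.norm(mean_axis)
    return mean_axis, np.mean(centers, axis=0)
```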
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n CRVM: Circular Random Variable-based Matcher - A Novel Hashing Method for Fast NN Search in High-dimensional Spaces.\n \n \n \n\n\n \n Alhwarin, F.; Ferrein, A.; and Scholl, I.\n\n\n \n\n\n\n In Marsico, M. D.; di Baja, G. S.; and Fred, A. L. N., editor(s), Proceedings of the 7th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2018, pages 214–221, 2018. SciTePress\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{AlhwarinFS18,\n  author    = {Faraj Alhwarin and Alexander Ferrein and Ingrid Scholl},\n  title     = {{CRVM:} Circular Random Variable-based Matcher - {A} Novel Hashing\n               Method for Fast {NN} Search in High-dimensional Spaces},\n  booktitle = {Proceedings of the 7th International Conference on Pattern Recognition\n               Applications and Methods, {ICPRAM} 2018},\n  pages     = {214--221},\n  editor    = {Maria De Marsico and Gabriella Sanniti di Baja and Ana L. N. Fred},\n  publisher = {SciTePress},\n  year      = {2018},\n  isbn      = {978-989-758-276-9},\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2017\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n AGNES: The African-German Network of Excellence in Science.\n \n \n \n\n\n \n \n\n\n \n\n\n\n 2017.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Proceedings{MarcoF17,\n  title = \t {AGNES: The African-German Network of Excellence in Science},\n  year = \t {2017},\n  OPTbooktitle = {Proceedings of the 2nd Developing World Robotics\n                  Forum, Workshop at IEEE AFRICON 2017},\n  OPTeditor = \t {},\n  OPTvolume = \t {},\n  OPTnumber = \t {},\n  OPTseries = \t {},\n  OPTaddress = \t {},\n  OPTmonth = \t {},\n  OPTorganization = {},\n  OPTpublisher = {},\n  OPTnote = \t {},\n  OPTannote = \t {}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A decentralised system approach for controlling AGVs with ROS.\n \n \n \n\n\n \n Walenta, R.; Schellekens, T.; Ferrein, A.; and Schiffer, S.\n\n\n \n\n\n\n In 2017 IEEE AFRICON, pages 1436–1441, 2017. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@inproceedings{WalentaSFS17,\n\tAuthor = {R. Walenta and T. Schellekens and A. Ferrein and S. Schiffer},\n\tBooktitle = {2017 IEEE AFRICON},\n\tDate-Added = {2018-04-19 21:28:26 +0000},\n\tDate-Modified = {2018-04-19 21:28:26 +0000},\n\tDoi = {10.1109/AFRCON.2017.8095693},\n\tJournal = {2017 IEEE AFRICON},\n\tJournal1 = {2017 IEEE AFRICON},\n\tKeywords = {automatic guided vehicles; control engineering computing; decentralised control; industrial robots; materials handling; middleware; mobile robots; motion control; multi-robot systems; path planning; AGV; Robot Operating System; centralised controllers; conflict-free routing; current industrial state; decentralised control; decentralised system approach; dynamic path planning; industrial applications; intra-logistic applications; material transportation tasks; mobile robotics solutions; motion control; multiple automated guided vehicles; navigation algorithms; noncentralised control; predefined paths; robotics applications; robust path planning; system architecture; testing phase; traffic coordination; Navigation; Operating systems; Path planning; Robot kinematics; Service robots},\n\tPages = {1436--1441},\n\tTitle = {A decentralised system approach for controlling AGVs with ROS},\n\tTy = {CONF},\n\tYear = {2017},\n\tYear1 = {18-20 Sept. 2017},\n\tBdsk-Url-1 = {http://dx.doi.org/10.1109/AFRCON.2017.8095693}}\n\n\n\n\n\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cyber-Physical System Intelligence – Knowledge-Based Mobile Robot Autonomy in an Industrial Scenario.\n \n \n \n \n\n\n \n Niemueller, T.; Zwilling, F.; Lakemeyer, G.; Löbach, M.; Reuter, S.; Jeschke, S.; and Ferrein, A.\n\n\n \n\n\n\n In Song, H.; Jeschke, S.; Brecher, C.; and Rawat, D., editor(s), Industrial Internet of Things — Cybermanufacturing Systems, pages 447–472. Springer, 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Cyber-PhysicalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InCollection{NiemuellerZLLRJF17,\n  author = \t {Tim Niemueller and Frederik Zwilling and Gerhard\n                  Lakemeyer and Matthias L{\"o}bach and Sebastian\n                  Reuter and Sabina Jeschke and Alexander Ferrein},\n  title = \t {{Cyber-Physical System Intelligence -- Knowledge-Based Mobile Robot Autonomy in an Industrial Scenario}},\n  booktitle = \t {Industrial Internet of Things --- Cybermanufacturing Systems},\n  OPTcrossref =  {},\n  OPTkey = \t {},\n  OPTpages = \t {},\n  publisher =    {Springer},\n  year = \t {2017},\n  editor = \t {Houbing Song and Sabina Jeschke and Christian\n                  Brecher and Danda Rawat},\n  pages="447--472",\n  isbn="978-3-319-42559-7",\n\n  OPTvolume = \t {},\n  OPTnumber = \t {},\n  OPTseries = \t {},\n  OPTtype = \t {},\n  OPTchapter = \t {},\n  OPTaddress = \t {},\n  OPTedition = \t {},\n  OPTmonth = \t {},\n  OPTannote = \t {},\ndoi="10.1007/978-3-319-42559-7_17",\nurl="http://dx.doi.org/10.1007/978-3-319-42559-7_17"\n}\n\n\n\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ReVolVR – Rendering Volume Data in VR using HTC Vive.\n \n \n \n \n\n\n \n Suder, S.; Schiffer, S.; and Scholl, I.\n\n\n \n\n\n\n In GTC Europe 2017, October 10-12 2017. \n \n\n\n\n
\n\n\n\n \n \n \"ReVolVRPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{ Suder:Scholl:Schiffer:GTC-Europe2017:ReVolVR,\r\n  title        = {{ReVolVR} -- Rendering Volume Data in {VR} using {HTC Vive}},\r\n  author       = {Sebastian Suder and Stefan Schiffer and Ingrid Scholl},\r\n  booktitle    = {GTC Europe 2017},\r\n  year         = {2017},\r\n  month        = {October 10-12},\r\n  location     = {Munich},\r\n  ID           = {ID 23055},\r\n  keywords     = {Virtual Reality, Volumen Rendering, HTC Vive},\r\n  url          = {http://on-demand-gtc.gputechconf.com/gtc-quicklink/PzLIdd},\r\n  abstract     = {ReVolVR is a new Virtual Reality (VR) volume\r\n                  rendering application based on the HTC Vive VR\r\n                  technique. The application uses the ray casting\r\n                  algorithm for direct volume rendering. Ray casting\r\n                  needs a transfer function to classify several\r\n                  surfaces. To find a good transfer function is in\r\n                  general a manual and time-consuming procedure and\r\n                  requires detailed knowledge of the data. With\r\n                  ReVolVr, the transfer function can be modified in\r\n                  the virtual scene while continuously real-time\r\n                  rendering the stereoscopic 3D volume through\r\n                  GPU-based ray casting shader. All interactions are\r\n                  designed to conveniently reflect to real movements\r\n                  of the user.},\r\n}\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2016\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2015\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2014\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2013\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2012\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n%%  2011\r\n%% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\r\n\r\n
\n
\n\n\n
\n ReVolVR is a new Virtual Reality (VR) volume rendering application based on the HTC Vive. The application uses the ray casting algorithm for direct volume rendering. Ray casting needs a transfer function to classify several surfaces. Finding a good transfer function is in general a manual and time-consuming procedure and requires detailed knowledge of the data. With ReVolVR, the transfer function can be modified in the virtual scene while the stereoscopic 3D volume is continuously rendered in real time through a GPU-based ray casting shader. All interactions are designed to conveniently reflect the real movements of the user.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cognitive Robot Architectures, Proceedings of EUCognition 2016 European Association for Cognitive Systems, Vienna, 8-9 December, 2016.\n \n \n \n \n\n\n \n Schiffer, S.; and Ferrein, A.\n\n\n \n\n\n\n In Chrisley, R.; Müller, V. C.; Sandamirskaya, Y.; and Vincze, M., editor(s), Cognitive Robot Architectures, Proceedings of EUCognition 2016, volume 1855, of CEUR Workshop Proceedings, pages 44–45, 2017. CEUR-WS.org\n \n\n\n\n
\n\n\n\n \n \n \"CognitivePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n