Generated by bibbase.org
Excellent! Next you can create a new website with this list, or embed it in an existing web page by copying & pasting any of the following snippets.
JavaScript (easiest)
<script src="https://bibbase.org/show?bib=https%3A%2F%2Fapi.zotero.org%2Fusers%2F2805047%2Fcollections%2FICBS3JG5%2Fitems%3Fkey%3Dvu5JAgEl9brxvxihKOTCpenn%26format%3Dbibtex%26limit%3D100&jsonp=1&css=self&theme=default&group0=type&sort=-year&owner=Kashino"></script>
PHP
<?php
$contents = file_get_contents("https://bibbase.org/show?bib=https%3A%2F%2Fapi.zotero.org%2Fusers%2F2805047%2Fcollections%2FICBS3JG5%2Fitems%3Fkey%3Dvu5JAgEl9brxvxihKOTCpenn%26format%3Dbibtex%26limit%3D100&jsonp=1&css=self&theme=default&group0=type&sort=-year&owner=Kashino");
print_r($contents);
?>
iFrame (not recommended)
<iframe src="https://bibbase.org/show?bib=https%3A%2F%2Fapi.zotero.org%2Fusers%2F2805047%2Fcollections%2FICBS3JG5%2Fitems%3Fkey%3Dvu5JAgEl9brxvxihKOTCpenn%26format%3Dbibtex%26limit%3D100&jsonp=1&css=self&theme=default&group0=type&sort=-year&owner=Kashino"></iframe>
For more details see the documentation.
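All three snippets above wrap the same Zotero export URL, percent-encoded into BibBase's `bib` query parameter. As a minimal sketch of how such an embed URL is assembled (the helper `bibbaseEmbedUrl` is our illustration, not a BibBase API), `encodeURIComponent` does the encoding:

```javascript
// The Zotero API export URL used in the snippets above.
const zoteroBib =
  "https://api.zotero.org/users/2805047/collections/ICBS3JG5/items" +
  "?key=vu5JAgEl9brxvxihKOTCpenn&format=bibtex&limit=100";

// Hypothetical helper: builds a BibBase show URL. The bib URL must be
// percent-encoded so its own ?key=...&format=... query string survives
// inside BibBase's query string.
function bibbaseEmbedUrl(bibUrl, options) {
  const params = ["bib=" + encodeURIComponent(bibUrl)];
  for (const [k, v] of Object.entries(options)) {
    params.push(k + "=" + encodeURIComponent(String(v)));
  }
  return "https://bibbase.org/show?" + params.join("&");
}

// Options mirror the query parameters seen in the snippets above.
const url = bibbaseEmbedUrl(zoteroBib, {
  jsonp: 1, css: "self", theme: "default",
  group0: "type", sort: "-year", owner: "Kashino",
});
console.log(url);
```

This reproduces the `https%3A%2F%2Fapi.zotero.org%2F...` form of the `bib` parameter seen in the JavaScript, PHP, and iFrame snippets.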
Thesis (1)
An Adaptive Approach to Optimal Sparse Mobile-target Search Planning using Heterogeneous Agents. Kashino, Z. Ph.D. Thesis, June 2020.
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@phdthesis{kashino_adaptive_2020,
  type = {Thesis},
  title = {An {Adaptive} {Approach} to {Optimal} {Sparse} {Mobile}-target {Search} {Planning} using {Heterogeneous} {Agents}},
  copyright = {Attribution 4.0 International},
  url = {https://tspace.library.utoronto.ca/handle/1807/101347},
  language = {en},
  urldate = {2020-08-27},
  author = {Kashino, Zendai},
  month = jun,
  year = {2020},
}
Planning an effective search for locating mobile-targets is essential in a variety of scenarios (e.g., search and rescue, environmental monitoring, law enforcement, etc.). Effective planning is especially important when resources are sparse (i.e., they are insufficient to search a significant fraction of the area of interest). For example, in wilderness search and rescue, where the search area is typically much larger than the search team can cover, effective planning is necessary to locate a lost person as soon as possible and maximize their probability of survival. Most research to-date has investigated the topic of effective search planning that makes use of homogeneous resources. Heterogeneous search teams (e.g., a team employing static sensors, unmanned ground vehicles, and unmanned aerial vehicles) could enhance the variety and overall quantity of resources deployed for search. Additionally, due to the varied nature of heterogeneous search resources, synergies between resource types could be exploited to achieve effectiveness that surpasses what would be achievable when working independently.

This dissertation presents the results of an in-depth investigation into the topic of search planning for locating mobile targets. Specifically, it presents novel methods and strategies for planning a mobile-target search with heterogeneous search resources to maximize the probability of success. The strategies and methods presented herein are: (i) a spatiotemporally optimized dynamic static-sensor network deployment planning method, (ii) two static-mobile hybrid search planning methodologies, and (iii) an aerial-ground hybrid search planning method. These methodologies were developed with an application to wilderness search and rescue in mind, assuming sparseness of search resources, the existence of terrain, and limited information regarding the target. However, despite this focus, the problem formulation and solution methods presented herein are applicable to a wide variety of problems in other fields as well. The validity of the methodologies is supported by the results of extensive simulated search experiments.

By developing search planning methodologies for heterogeneous search teams which include static-sensors, mobile ground units, and mobile aerial units, this research has expanded the range of resources and tools available to searchers to improve search success probability in real-world mobile-target search.
Article (14)
Transparency in Human-Machine Mutual Action. Saito, H.; Horie, A.; Maekawa, A.; Matsubara, S.; Wakisaka, S.; Kashino, Z.; Kasahara, S.; and Inami, M. Journal of Robotics and Mechatronics, 33(5): 987–1003. October 2021.
\n\n\n\n \n \n \"TransparencyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{saito_transparency_2021,
  title = {Transparency in {Human}-{Machine} {Mutual} {Action}},
  volume = {33},
  url = {https://www.fujipress.jp/jrm/rb/robot003300050987/},
  doi = {10.20965/jrm.2021.p0987},
  number = {5},
  urldate = {2021-10-20},
  journal = {Journal of Robotics and Mechatronics},
  author = {Saito, Hiroto and Horie, Arata and Maekawa, Azumi and Matsubara, Seito and Wakisaka, Sohei and Kashino, Zendai and Kasahara, Shunichi and Inami, Masahiko},
  month = oct,
  year = {2021},
  pages = {987--1003},
}
Title: Transparency in Human-Machine Mutual Action | Keywords: human-computer integration, transparency, human-machine mutual action, human augmentation | Author: Hiroto Saito, Arata Horie, Azumi Maekawa, Seito Matsubara, Sohei Wakisaka, Zendai Kashino, Shunichi Kasahara, and Masahiko Inami
An inchworm-inspired motion strategy for robotic swarms. Eshaghi, K.; Kashino, Z.; Yoon, H. J.; Nejat, G.; and Benhabib, B. Robotica, 1–23. April 2021.
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 6 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{eshaghi_inchworm-inspired_2021,
  title = {An inchworm-inspired motion strategy for robotic swarms},
  issn = {0263-5747, 1469-8668},
  url = {https://www.cambridge.org/core/journals/robotica/article/abs/an-inchworminspired-motion-strategy-for-robotic-swarms/C36D55D6588FB4E7E561615FBAF6D1A9},
  doi = {10.1017/S0263574721000321},
  language = {en},
  urldate = {2021-05-10},
  journal = {Robotica},
  author = {Eshaghi, Kasra and Kashino, Zendai and Yoon, Hyun Joong and Nejat, Goldie and Benhabib, Beno},
  month = apr,
  year = {2021},
  keywords = {Millirobots, Motion planning, Multi-robot systems, Swarm localization, Swarm robotics},
  pages = {1--23},
}
Effective motion planning and localization are necessary tasks for swarm robotic systems to maintain a desired formation while maneuvering. Herein, we present an inchworm-inspired strategy that addresses both these tasks concurrently using anchor robots. The proposed strategy is novel as, by dynamically and optimally selecting the anchor robots, it allows the swarm to maximize its localization performance while also considering secondary objectives, such as the swarm’s speed. A complementary novel method for swarm localization, that fuses inter-robot proximity measurements and motion commands, is also presented. Numerous simulated and physical experiments are included to illustrate our contributions.
Aerial Wilderness Search and Rescue with Ground Support. Kashino, Z.; Nejat, G.; and Benhabib, B. Journal of Intelligent & Robotic Systems, 99(1): 147–163. July 2020.
\n\n\n\n \n \n \"AerialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 8 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{kashino_aerial_2020,
  title = {Aerial {Wilderness} {Search} and {Rescue} with {Ground} {Support}},
  volume = {99},
  issn = {1573-0409},
  url = {https://doi.org/10.1007/s10846-019-01105-y},
  doi = {10.1007/s10846-019-01105-y},
  language = {en},
  number = {1},
  urldate = {2020-08-26},
  journal = {Journal of Intelligent \& Robotic Systems},
  author = {Kashino, Zendai and Nejat, Goldie and Benhabib, Beno},
  month = jul,
  year = {2020},
  keywords = {Autonomous mobile-target search, Iso-probability curves, UAV-UGV Cooperative search planning, Wilderness search and rescue},
  pages = {147--163},
}
Unmanned aerial vehicles (UAVs) have been proposed for a wide range of applications. Their use in wilderness search and rescue (WiSAR), in particular, has been investigated for fast search-area coverage from a high vantage point. The probability of success in such searches, however, can be further improved utilizing cooperative systems that employ both UAVs and unmanned ground vehicles (UGVs). In this paper, we present a new coordinated-search planning method, for collaborative UAV-UGV teams. The proposed method, particularly developed for WiSAR, considers the search area to be continuously growing and that the search is sparse. It is also assumed that targets detected by UAVs must be identified by a ground-level searcher. The UAV/UGV motion-planning method presented herein, therefore, has two major components: (i) coordinated search and (ii) joint target identification. The novelty of the proposed method lies in its use of (i) time-dependent target-location iso-probability curves, and (ii) an effective and efficient coordinated target-identification algorithm. The method has been validated via numerous simulated WiSAR searches for varying scenarios. Furthermore, extensive comparative experiments with other methods have shown that our method has higher rates of target detection and shorter search times, significantly outperforming alternative techniques by 75%–255% in terms of target detection probability.
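The time-dependent iso-probability curves named in this abstract can be pictured with a toy sketch (our simplification, not the paper's formulation): if a lost target moves isotropically at a random but constant speed with known quantile function Q, the circle containing the target with probability p at time t has radius r(p, t) = t · Q(p), so each curve expands over time.

```javascript
// Toy iso-probability curve: r(p, t) = t * Q(p), where Q is the quantile
// function of the target's speed distribution. NOT the paper's method;
// all numbers below are hypothetical.
function isoProbabilityRadius(speedQuantile, p, t) {
  return t * speedQuantile(p);
}

// Hypothetical example: speed uniform on [0, 5] km/h, so Q(p) = 5p.
const uniformQuantile = (p) => 5 * p;

// Radius of the 90% curve after 2 hours:
const r = isoProbabilityRadius(uniformQuantile, 0.9, 2);
console.log(r); // 9 (km)
```

The linear growth with t is what makes the search area "continuously growing" in the sense used above.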
Directional-Sensor Network Deployment Planning for Mobile-Target Search. Wasim, S.; Kashino, Z.; Nejat, G.; and Benhabib, B. Robotics, 9(4): 82. December 2020.
\n\n\n\n \n \n \"Directional-SensorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{wasim_directional-sensor_2020,
  title = {Directional-{Sensor} {Network} {Deployment} {Planning} for {Mobile}-{Target} {Search}},
  volume = {9},
  copyright = {http://creativecommons.org/licenses/by/3.0/},
  url = {https://www.mdpi.com/2218-6581/9/4/82},
  doi = {10.3390/robotics9040082},
  language = {en},
  number = {4},
  urldate = {2020-10-15},
  journal = {Robotics},
  author = {Wasim, Shiraz and Kashino, Zendai and Nejat, Goldie and Benhabib, Beno},
  month = dec,
  year = {2020},
  keywords = {directional-sensors, mobile-target search, sensing models, time-phased sensor delivery},
  pages = {82},
}
In this paper, a novel time-phased directional-sensor network deployment strategy is presented for the mobile-target search problem, e.g., wilderness search and rescue (WiSAR). The proposed strategy uses probabilistic target-motion models combined with a variation of a standard direct search algorithm to plan the optimal locations of directional-sensors which maximize the likelihood of target detection. A linear sensing model is employed as a simplification for directional-sensor network deployment planning, while considering physical constraints, such as on-time sensor deliverability. Extensive statistical simulations validated our method. One such illustrative experiment is included herein to demonstrate the method’s operation. A comparative study was also carried out, whose summary is included in this paper, to highlight the tangible improvement of our approach versus three traditional deployment strategies: a uniform, a random, and a ring-of-fire type deployment, respectively.
A Hybrid Strategy for Target Search Using Static and Mobile Sensors. Kashino, Z.; Nejat, G.; and Benhabib, B. IEEE Transactions on Cybernetics, 50(2): 856–868. February 2020.
@article{kashino_hybrid_2020,
  title = {A {Hybrid} {Strategy} for {Target} {Search} {Using} {Static} and {Mobile} {Sensors}},
  volume = {50},
  issn = {2168-2275},
  doi = {10.1109/TCYB.2018.2875625},
  number = {2},
  journal = {IEEE Transactions on Cybernetics},
  author = {Kashino, Zendai and Nejat, Goldie and Benhabib, Beno},
  month = feb,
  year = {2020},
  keywords = {Hybrid search planning, Object detection, Planning, Robot kinematics, Robot sensing systems, Search problems, hybrid approach, hybrid strategy, mobile robots, mobile sensors, mobile target, mobile-target search, multi-robot systems, multirobot coordination, on-board sensors, optimisation, path planning, robot motions, robot routes, robot trajectories, search problems, sensor delivery locations, spare time, static-sensor deployments, static-sensor network configuration, target detection, target motion, target search, target-search success, wilderness search, wilderness search and rescue (WiSAR), wireless sensor networks},
  pages = {856--868},
}
Locating a mobile target, untrackable in real-time, is pertinent to numerous time-critical applications, such as wilderness search and rescue. This paper proposes a hybrid approach to this dynamic problem, where both static and mobile sensors are utilized for the goal of detecting a target. The approach is novel in that a team of robots utilized to deploy a static-sensor network also actively searches for the target via on-board sensors. Synergy is achieved through: 1) optimal deployment planning of static-sensor networks and 2) optimal routing and motion planning of the robots for the deployment of the network and target search. The static-sensor network is planned first to maximize the likelihood of target detection while ensuring (temporal and spatial) unbiasedness in target motion. Robot motions are, subsequently, planned in two stages: 1) route planning and 2) trajectory planning. In the first stage, given a static-sensor network configuration, robot routes are planned to maximize the amount of spare time available to the mobile agents/sensors, for target search in between (just-in-time) static-sensor deployments. In the second stage, given robot routes (i.e., optimal sequences of sensor delivery locations and times), the corresponding robot trajectories are planned to make effective use of any spare time the mobile agents may have to search for the target. The proposed search strategy was validated through extensive simulations, some of which are given in detail here. An analysis of the method's performance in terms of target-search success is also included.
mROBerTO 2.0 – An Autonomous Millirobot With Enhanced Locomotion for Swarm Robotics. Eshaghi, K.; Li, Y.; Kashino, Z.; Nejat, G.; and Benhabib, B. IEEE Robotics and Automation Letters, 5(2): 962–969. April 2020.
@article{eshaghi_mroberto_2020,
  title = {{mROBerTO} 2.0 – {An} {Autonomous} {Millirobot} {With} {Enhanced} {Locomotion} for {Swarm} {Robotics}},
  volume = {5},
  copyright = {All rights reserved},
  issn = {2377-3774},
  doi = {10.1109/LRA.2020.2966411},
  number = {2},
  journal = {IEEE Robotics and Automation Letters},
  author = {Eshaghi, Kasra and Li, Yuchen and Kashino, Zendai and Nejat, Goldie and Benhabib, Beno},
  month = apr,
  year = {2020},
  keywords = {Swarms, millirobots, motion control},
  pages = {962--969},
}
Numerous millirobots were developed in the past decade for autonomous swarm systems that aim to utilize large numbers of these units in space-constrained environments. However, the size limitation of these robots has often resulted in their reduced computational, sensing, and locomotion capabilities. mROBerTO (milli-ROBot-TOronto) was developed in response to such limitations. Despite its enhanced features, the reliable and repeatable locomotion of mROBerTO has still been of some concern due to lack of effective closed-loop motion control – as is the case with all other similar millirobots. In this letter, we present the next version of mROBerTO with a new locomotion mechanism that utilizes stepper motors, capable of micro-stepping down to 1/32 of a full step, to yield a millirobot with maneuvering capabilities superior to current similar-sized robots. mROBerTO 2.0 is novel in that it utilizes these stepper motors without relying on a separate processor for controlling them. This letter also presents a complementary new algorithm for efficiently converting desired trajectories into robot-motion commands. The proposed algorithm was developed to allow millirobots to execute complex trajectories reliably in an open-loop manner.
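The 1/32 micro-stepping mentioned in this abstract can be illustrated with a toy open-loop conversion (all geometry below is hypothetical, not mROBerTO 2.0's actual design): a motor with F full steps per revolution driven at 1/32 micro-stepping yields 32·F micro-steps per wheel revolution, so a desired travel distance maps to a micro-step count.

```javascript
// Toy open-loop distance-to-microsteps conversion. The wheel diameter and
// steps-per-revolution values are hypothetical, for illustration only.
function microstepsForDistance(distanceMm, wheelDiameterMm, fullStepsPerRev, microstepDivisor = 32) {
  const circumferenceMm = Math.PI * wheelDiameterMm;        // distance per wheel revolution
  const stepsPerRev = fullStepsPerRev * microstepDivisor;   // micro-steps per revolution
  return Math.round((distanceMm / circumferenceMm) * stepsPerRev);
}

// e.g. 10 mm of travel on a hypothetical 10 mm wheel with a 20-step motor:
const n = microstepsForDistance(10, 10, 20);
console.log(n); // 204
```

Finer micro-stepping raises the step resolution, which is what makes reliable open-loop execution of complex trajectories plausible.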
A Sensor-Network-Supported Mobile-Agent-Search Strategy for Wilderness Rescue. Chong Lee Shin, J.; Kashino, Z.; Nejat, G.; and Benhabib, B. Robotics, 8(3): 61. July 2019.
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{chong_lee_shin_sensor-network-supported_2019,
  title = {A {Sensor}-{Network}-{Supported} {Mobile}-{Agent}-{Search} {Strategy} for {Wilderness} {Rescue}},
  volume = {8},
  copyright = {http://creativecommons.org/licenses/by/3.0/},
  url = {https://www.mdpi.com/2218-6581/8/3/61},
  doi = {10.3390/robotics8030061},
  language = {en},
  number = {3},
  urldate = {2019-07-24},
  journal = {Robotics},
  author = {Chong Lee Shin, Jason and Kashino, Zendai and Nejat, Goldie and Benhabib, Beno},
  month = jul,
  year = {2019},
  keywords = {constrained optimization, mobile-target search, multi-agent coordination, wireless sensor networks},
  pages = {61},
}
Mobile target search is a problem pertinent to a variety of applications, including wilderness search and rescue. This paper proposes a hybrid approach for target search utilizing a team of mobile agents supported by a network of static sensors. The approach is novel in that the mobile agents deploy the sensors at optimized times and locations while they themselves travel along their respective optimized search trajectories. In the proposed approach, mobile-agent trajectories are first planned to maximize the likelihood of target detection. The deployment of the static-sensor network is subsequently planned. Namely, deployment locations and times are optimized while being constrained by the already planned mobile-agent trajectories. The latter optimization problem, as formulated and solved herein, aims to minimize an overall network-deployment error. This overall error comprises three main components, each quantifying a deviation from one of three main objectives the network aims to achieve: (i) maintaining directional unbiasedness in target-motion consideration, (ii) maintaining unbiasedness in temporal search-effort distribution, and (iii) maximizing the likelihood of target detection. We solve this unique optimization problem using an iterative heuristic-based algorithm with random starts. The proposed hybrid search strategy was validated through the extensive simulations presented in this paper. Furthermore, its performance was evaluated with respect to an alternative hybrid search strategy, where it either outperformed or performed comparably depending on the search resources available.
\n \n\n \n \n \n \n \n \n A high-performance millirobot for swarm-behaviour studies: Swarm-topology estimation.\n \n \n \n \n\n\n \n Kim, J. Y; Kashino, Z.; Pineros, L. M.; Bayat, S.; Colaco, T.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n International Journal of Advanced Robotic Systems, 16(6): 1729881419892127. November 2019.\n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{kim_high-performance_2019,\n\ttitle = {A high-performance millirobot for swarm-behaviour studies: {Swarm}-topology estimation},\n\tvolume = {16},\n\tcopyright = {All rights reserved},\n\tissn = {1729-8814},\n\tshorttitle = {A high-performance millirobot for swarm-behaviour studies},\n\turl = {https://doi.org/10.1177/1729881419892127},\n\tdoi = {10.1177/1729881419892127},\n\tabstract = {In this article, we present a novel high-performance millirobot (milli-robot-Toronto), designed to allow for the testing of complex swarm-behaviours, including human–swarm interaction. milli-robot-Toronto, built only with off-the-shelf components, has locomotion, processing and sensing capabilities that significantly improve upon existing designs, while maintaining one of the smallest footprints among current millirobots. As complementary software to this hardware development, herein, we also present a new global swarm-topology estimation algorithm. The method is novel in that it uniquely fuses incomplete location data collected by the individual robots in a distributed manner to optimally estimate the topology of the overall swarm using a centralized computer. It is a generalized technique usable by any swarm comprising robots capable of collecting location estimates of neighbouring robots. Numerous experiments, evaluating the performance of milli-robot-Toronto and the proposed optimal swarm-topology estimation algorithm, are also included.},\n\tlanguage = {en},\n\tnumber = {6},\n\turldate = {2020-02-06},\n\tjournal = {International Journal of Advanced Robotic Systems},\n\tauthor = {Kim, Justin Y and Kashino, Zendai and Pineros, Laura Marcela and Bayat, Sayeh and Colaco, Tyler and Nejat, Goldie and Benhabib, Beno},\n\tmonth = nov,\n\tyear = {2019},\n\tkeywords = {Mobile millirobots, human–swarm interactions, swarm-topology estimation},\n\tpages = {1729881419892127},\n}\n\n
\n
\n\n\n
\n In this article, we present a novel high-performance millirobot (milli-robot-Toronto), designed to allow for the testing of complex swarm-behaviours, including human–swarm interaction. milli-robot-Toronto, built only with off-the-shelf components, has locomotion, processing and sensing capabilities that significantly improve upon existing designs, while maintaining one of the smallest footprints among current millirobots. As complementary software to this hardware development, herein, we also present a new global swarm-topology estimation algorithm. The method is novel in that it uniquely fuses incomplete location data collected by the individual robots in a distributed manner to optimally estimate the topology of the overall swarm using a centralized computer. It is a generalized technique usable by any swarm comprising robots capable of collecting location estimates of neighbouring robots. Numerous experiments, evaluating the performance of milli-robot-Toronto and the proposed optimal swarm-topology estimation algorithm, are also included.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Deep Reinforcement Learning Robot for Search and Rescue Applications: Exploration in Unknown Cluttered Environments.\n \n \n \n\n\n \n Niroui, F.; Zhang, K.; Kashino, Z.; and Nejat, G.\n\n\n \n\n\n\n IEEE Robotics and Automation Letters, 4(2): 610–617. April 2019.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{niroui_deep_2019,\n\ttitle = {Deep {Reinforcement} {Learning} {Robot} for {Search} and {Rescue} {Applications}: {Exploration} in {Unknown} {Cluttered} {Environments}},\n\tvolume = {4},\n\tissn = {2377-3766},\n\tshorttitle = {Deep {Reinforcement} {Learning} {Robot} for {Search} and {Rescue} {Applications}},\n\tdoi = {10.1109/LRA.2019.2891991},\n\tabstract = {Rescue robots can be used in urban search and rescue (USAR) applications to perform the important task of exploring unknown cluttered environments. Due to the unpredictable nature of these environments, deep learning techniques can be used to perform these tasks. In this letter, we present the first use of deep learning to address the robot exploration task in USAR applications. In particular, we uniquely combine the traditional approach of frontier-based exploration with deep reinforcement learning to allow a robot to autonomously explore unknown cluttered environments. Experiments conducted with a mobile robot in unknown cluttered environments of varying sizes and layouts showed that the proposed exploration approach can effectively determine appropriate frontier locations to navigate to, while being robust to different environment layouts and sizes. Furthermore, a comparison study with other frontier exploration approaches showed that our learning-based frontier exploration technique was able to explore more of an environment earlier on, allowing for potential identification of a larger number of victims at the beginning of the time-critical exploration task.},\n\tnumber = {2},\n\tjournal = {IEEE Robotics and Automation Letters},\n\tauthor = {Niroui, F. and Zhang, K. and Kashino, Z. 
and Nejat, G.},\n\tmonth = apr,\n\tyear = {2019},\n\tkeywords = {Autonomous agents, Computer architecture, Layout, Microprocessors, Navigation, Robot sensing systems, Task analysis, USAR applications, deep learning in robotics and automation, deep reinforcement learning robot, learning (artificial intelligence), learning-based frontier exploration technique, mobile robot, path planning, rescue robots, robot exploration task, search and rescue robots, time-critical exploration task, unknown cluttered environments, urban search and rescue applications},\n\tpages = {610--617},\n}\n\n
\n
\n\n\n
\n Rescue robots can be used in urban search and rescue (USAR) applications to perform the important task of exploring unknown cluttered environments. Due to the unpredictable nature of these environments, deep learning techniques can be used to perform these tasks. In this letter, we present the first use of deep learning to address the robot exploration task in USAR applications. In particular, we uniquely combine the traditional approach of frontier-based exploration with deep reinforcement learning to allow a robot to autonomously explore unknown cluttered environments. Experiments conducted with a mobile robot in unknown cluttered environments of varying sizes and layouts showed that the proposed exploration approach can effectively determine appropriate frontier locations to navigate to, while being robust to different environment layouts and sizes. Furthermore, a comparison study with other frontier exploration approaches showed that our learning-based frontier exploration technique was able to explore more of an environment earlier on, allowing for potential identification of a larger number of victims at the beginning of the time-critical exploration task.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Design and implementation of a millirobot for swarm studies – mROBerTO.\n \n \n \n \n\n\n \n Kim, J. Y.; Kashino, Z.; Colaco, T.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n Robotica, 36(11): 1591–1612. November 2018.\n \n\n\n\n
\n\n\n\n \n \n \"DesignPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{kim_design_2018,\n\ttitle = {Design and implementation of a millirobot for swarm studies – {mROBerTO}},\n\tvolume = {36},\n\tcopyright = {All rights reserved},\n\tissn = {0263-5747, 1469-8668},\n\turl = {https://www.cambridge.org/core/journals/robotica/article/design-and-implementation-of-a-millirobot-for-swarm-studies-mroberto/CB3C63E9D40BFD33BAE06D9B1230FDFA},\n\tdoi = {10.1017/S0263574718000589},\n\tabstract = {The use of millirobots, particularly in swarm studies, would enable researchers to verify their proposed autonomous cooperative behavior algorithms under realistic conditions with a large number of agents. While multiple designs for such robots have been proposed, they, typically, require custom-made components, which make replication and manufacturing difficult, and, mostly, employ non-modular integral designs. Furthermore, these robots' proposed small sizes tend to limit sensory perception capabilities and operational time. Some have resolved few of the above issues through the use of extensions that, unfortunately, add to their size. In contribution to the pertinent field, thus, a novel millirobot with an open-source design, addressing the above concerns, is presented in this paper. Our proposed millirobot has a modular design and uses easy to source, off-the-shelf components. The milli-robot-Toronto (mROBerTO) also includes a variety of sensors and has a 16 × 16 mm2 footprint. mROBerTO's wireless communication capabilities include ANT™, Bluetooth Smart, or both simultaneously. Data-processing is handled by an ARM processor with 256 KB of flash memory. Additionally, the sensing modules allow for extending or changing the robot's perception capabilities without adding to the robot's size. 
For example, the swarm-sensing module, designed to facilitate swarm studies, allows for measuring proximity and bearing to neighboring robots and performing local communications. Extensive experiments, some of which are presented herein, have illustrated the capability of mROBerTO units for use in implementing a variety of commonly proposed swarm algorithms.},\n\tlanguage = {en},\n\tnumber = {11},\n\turldate = {2019-04-01},\n\tjournal = {Robotica},\n\tauthor = {Kim, Justin Y. and Kashino, Zendai and Colaco, Tyler and Nejat, Goldie and Benhabib, Beno},\n\tmonth = nov,\n\tyear = {2018},\n\tkeywords = {Control of robotic systems, Design, Multi-robot systems, Robot localization, Swarm robotics},\n\tpages = {1591--1612},\n}\n\n
\n
\n\n\n
\n The use of millirobots, particularly in swarm studies, would enable researchers to verify their proposed autonomous cooperative behavior algorithms under realistic conditions with a large number of agents. While multiple designs for such robots have been proposed, they, typically, require custom-made components, which make replication and manufacturing difficult, and, mostly, employ non-modular integral designs. Furthermore, these robots' proposed small sizes tend to limit sensory perception capabilities and operational time. Some have resolved few of the above issues through the use of extensions that, unfortunately, add to their size. In contribution to the pertinent field, thus, a novel millirobot with an open-source design, addressing the above concerns, is presented in this paper. Our proposed millirobot has a modular design and uses easy to source, off-the-shelf components. The milli-robot-Toronto (mROBerTO) also includes a variety of sensors and has a 16 × 16 mm2 footprint. mROBerTO's wireless communication capabilities include ANT™, Bluetooth Smart, or both simultaneously. Data-processing is handled by an ARM processor with 256 KB of flash memory. Additionally, the sensing modules allow for extending or changing the robot's perception capabilities without adding to the robot's size. For example, the swarm-sensing module, designed to facilitate swarm studies, allows for measuring proximity and bearing to neighboring robots and performing local communications. Extensive experiments, some of which are presented herein, have illustrated the capability of mROBerTO units for use in implementing a variety of commonly proposed swarm algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Vehicle Routing for Resource Management in Time-Phased Deployment of Sensor Networks.\n \n \n \n\n\n \n Woiceshyn, K.; Kashino, Z.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n IEEE Transactions on Automation Science and Engineering, 16(2): 716–728. August 2018.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{woiceshyn_vehicle_2018,\n\ttitle = {Vehicle {Routing} for {Resource} {Management} in {Time}-{Phased} {Deployment} of {Sensor} {Networks}},\n\tvolume = {16},\n\tcopyright = {All rights reserved},\n\tissn = {1545-5955},\n\tdoi = {10.1109/TASE.2018.2857630},\n\tabstract = {Time-phased sensor-network deployment refers to the delivery of a set of sensors to their predetermined locations at exact times by a fleet of vehicles. Applications for such network deployments include wilderness search and rescue (WiSAR) and wildfire monitoring, where desirable resource management would imply allowing the vehicles to perform other tasks between deliveries. The goal of this paper is, thus, to formulate and solve a vehicle-routing problem (VRP) for such just-in-time time-phased sensor-network deployments. The proposed optimization method for the modified VRP outlined herein has two primary novelties: 1) the consideration of spare time as the objective function and 2) the use of a targeted local-search (LS) method. The spare-time objective function was formulated to address the uniqueness of the modified routing problem at hand. The targeted LS algorithm, on the other hand, was developed to tangibly improve the efficiency of the search for the optimal values of the chosen objective function. The proposed vehicle-route-planning method was validated via a range of simulated WiSAR scenarios, some of which are included herein. The robustness of the method to variations in problem parameters was also investigated.},\n\tnumber = {2},\n\tjournal = {IEEE Transactions on Automation Science and Engineering},\n\tauthor = {Woiceshyn, K. and Kashino, Z. and Nejat, G. and Benhabib, B.},\n\tmonth = aug,\n\tyear = {2018},\n\tkeywords = {Monitoring, Planning, Resource management, Robot sensing systems, Task analysis, Vehicle routing, vehicle routing, wireless sensor networks (WSNs)},\n\tpages = {716--728},\n}\n\n
\n
\n\n\n
\n Time-phased sensor-network deployment refers to the delivery of a set of sensors to their predetermined locations at exact times by a fleet of vehicles. Applications for such network deployments include wilderness search and rescue (WiSAR) and wildfire monitoring, where desirable resource management would imply allowing the vehicles to perform other tasks between deliveries. The goal of this paper is, thus, to formulate and solve a vehicle-routing problem (VRP) for such just-in-time time-phased sensor-network deployments. The proposed optimization method for the modified VRP outlined herein has two primary novelties: 1) the consideration of spare time as the objective function and 2) the use of a targeted local-search (LS) method. The spare-time objective function was formulated to address the uniqueness of the modified routing problem at hand. The targeted LS algorithm, on the other hand, was developed to tangibly improve the efficiency of the search for the optimal values of the chosen objective function. The proposed vehicle-route-planning method was validated via a range of simulated WiSAR scenarios, some of which are included herein. The robustness of the method to variations in problem parameters was also investigated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Design of three-dimensional structured-light sensory systems for microscale measurements.\n \n \n \n \n\n\n \n Marin, V. E.; Kashino, Z.; Chang, W. H. W.; Luitjens, P.; and Nejat, G.\n\n\n \n\n\n\n Optical Engineering, 56(12): 124109. December 2017.\n \n\n\n\n
\n\n\n\n \n \n \"DesignPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@article{marin_design_2017,\n\ttitle = {Design of three-dimensional structured-light sensory systems for microscale measurements},\n\tvolume = {56},\n\tcopyright = {All rights reserved},\n\tissn = {0091-3286, 1560-2303},\n\turl = {https://www.spiedigitallibrary.org/journals/Optical-Engineering/volume-56/issue-12/124109/Design-of-three-dimensional-structured-light-sensory-systems-for-microscale/10.1117/1.OE.56.12.124109.short},\n\tdoi = {10.1117/1.OE.56.12.124109},\n\tabstract = {Recent advances in precision manufacturing have generated an increasing demand for accurate microscale three-dimensional metrology approaches. Structured light (SL) sensory systems can be used to successfully measure objects in the microscale. However, there are two main challenges in designing SL systems to measure complex microscale objects: (1) the limited measurement volume defined by the system triangulation and microscope optics and (2) the increased random noise in the measurements introduced by the microscope magnification of the noise from the fringe patterns. In this paper, a methodology is proposed for the design of SL systems using image focus fusion for microscale applications, maximizing the measurement volume and minimizing measurement noise for a given set of hardware components. An empirical calibration procedure that relies on a global model for the entire measurement volume to reduce measurement errors is also proposed. Experiments conducted with a variety of microscale objects validate the effectiveness of the proposed design methodology.},\n\tnumber = {12},\n\turldate = {2018-01-23},\n\tjournal = {Optical Engineering},\n\tauthor = {Marin, Veronica E. and Kashino, Zendai and Chang, Wei Hao Wayne and Luitjens, Pieter and Nejat, Goldie},\n\tmonth = dec,\n\tyear = {2017},\n\tpages = {124109},\n}\n\n
\n
\n\n\n
\n Recent advances in precision manufacturing have generated an increasing demand for accurate microscale three-dimensional metrology approaches. Structured light (SL) sensory systems can be used to successfully measure objects in the microscale. However, there are two main challenges in designing SL systems to measure complex microscale objects: (1) the limited measurement volume defined by the system triangulation and microscope optics and (2) the increased random noise in the measurements introduced by the microscope magnification of the noise from the fringe patterns. In this paper, a methodology is proposed for the design of SL systems using image focus fusion for microscale applications, maximizing the measurement volume and minimizing measurement noise for a given set of hardware components. An empirical calibration procedure that relies on a global model for the entire measurement volume to reduce measurement errors is also proposed. Experiments conducted with a variety of microscale objects validate the effectiveness of the proposed design methodology.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Spatiotemporal Adaptive Optimization of a Static-Sensor Network via a Non-Parametric Estimation of Target Location Likelihood.\n \n \n \n\n\n \n Kashino, Z.; Kim, J. Y.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n IEEE Sensors Journal, 17(5): 1479–1492. March 2017.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{kashino_spatiotemporal_2017,\n\ttitle = {Spatiotemporal {Adaptive} {Optimization} of a {Static}-{Sensor} {Network} via a {Non}-{Parametric} {Estimation} of {Target} {Location} {Likelihood}},\n\tvolume = {17},\n\tcopyright = {All rights reserved},\n\tissn = {1530-437X},\n\tdoi = {10.1109/JSEN.2016.2638623},\n\tabstract = {The search for a mobile target is a dynamic spatial and temporal problem. Information gathered during the search, regarding the potential whereabouts of the target, can be used to influence significantly the search strategy employed. In this paper, thus, a novel (wireless) static-sensor network deployment strategy is presented to detect efficiently and effectively a mobile target in an unbounded environment. The proposed strategy deploys the static sensors optimally and in a time-phased manner while adapting in real-time to the availability of new information during the search. An optimal network-deployment plan, herein, refers to a set of optimal sensor-deployment times and locations. The optimal sensor-deployment instances aim to achieve a uniform deployment of search effort over time. Optimal sensor-deployment locations, in turn, are determined according to the highest possible likelihood of detection of the mobile target. The proposed deployment strategy contains two novel contributions: it makes use of a non-parametric approach to estimate effectively the target location likelihood, and it performs an optimization of sensor-placement times to maximize the adaptive characteristic of the deployment plan. Several detailed experiments (in virtual and physical environments) of the proposed strategy for static-sensor network deployment in wilderness search and rescue applications are presented. 
Furthermore, a comparative study is included to highlight the advantages of our approach versus traditional methods that deploy sensors simultaneously for uniform coverage.},\n\tnumber = {5},\n\tjournal = {IEEE Sensors Journal},\n\tauthor = {Kashino, Z. and Kim, J. Y. and Nejat, G. and Benhabib, B.},\n\tmonth = mar,\n\tyear = {2017},\n\tkeywords = {Adaptive systems, Likelihood estimation, Mobile communication, Mobile computing, Object detection, Search problems, Sensors, Wireless sensor networks, likelihood estimation, mobile target, mobile-target detection, nonparametric estimation, object detection, optimal deployment, search problems, sensor placement, sensor-placement times, spatiotemporal adaptive optimization, static-sensor network, target location likelihood, unbounded environment, wireless sensor networks},\n\tpages = {1479--1492},\n}\n\n
\n
\n\n\n
\n The search for a mobile target is a dynamic spatial and temporal problem. Information gathered during the search, regarding the potential whereabouts of the target, can be used to influence significantly the search strategy employed. In this paper, thus, a novel (wireless) static-sensor network deployment strategy is presented to detect efficiently and effectively a mobile target in an unbounded environment. The proposed strategy deploys the static sensors optimally and in a time-phased manner while adapting in real-time to the availability of new information during the search. An optimal network-deployment plan, herein, refers to a set of optimal sensor-deployment times and locations. The optimal sensor-deployment instances aim to achieve a uniform deployment of search effort over time. Optimal sensor-deployment locations, in turn, are determined according to the highest possible likelihood of detection of the mobile target. The proposed deployment strategy contains two novel contributions: it makes use of a non-parametric approach to estimate effectively the target location likelihood, and it performs an optimization of sensor-placement times to maximize the adaptive characteristic of the deployment plan. Several detailed experiments (in virtual and physical environments) of the proposed strategy for static-sensor network deployment in wilderness search and rescue applications are presented. Furthermore, a comparative study is included to highlight the advantages of our approach versus traditional methods that deploy sensors simultaneously for uniform coverage.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Dynamic Approach to Sensor Network Deployment for Mobile-Target Detection in Unstructured, Expanding Search Areas.\n \n \n \n\n\n \n Vilela, J.; Kashino, Z.; Ly, R.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n IEEE Sensors Journal, 16(11): 4405–4417. June 2016.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{vilela_dynamic_2016,\n\ttitle = {A {Dynamic} {Approach} to {Sensor} {Network} {Deployment} for {Mobile}-{Target} {Detection} in {Unstructured}, {Expanding} {Search} {Areas}},\n\tvolume = {16},\n\tcopyright = {All rights reserved},\n\tissn = {1530-437X},\n\tdoi = {10.1109/JSEN.2016.2537331},\n\tabstract = {This paper proposes a novel strategy for the deployment of a static-sensor network based on the use of a target-motion probability model. The focus is on the real-time dynamic and optimal deployment of the network for detecting untrackable targets. The dynamic nature of the deployment refers to the on-line reconfigurability of the network as real-time information about the target becomes available. The optimal locations of the network nodes, in turn, are determined based on maximizing the probability of finding the target through the use of iso-cumulative-probability curves. The proposed strategy is adaptable to unstructured environments with natural terrain variation and the presence of obstacles. Extensive simulations, some of which are included in this paper, verified the advantage of our deployment strategy over other existing methods. Namely, the proposed strategy can tangibly increase the success rate of target detection, while reducing the mean detection time, when compared with uniform-coverage-based approaches that do not consider probabilistic target-motion modeling. A comprehensive example is also included, herein, to illustrate the successful application of our proposed deployment strategy to a wilderness search and rescue scenario, where both static and mobile sensors are employed within a hybrid sensor-deployment strategy.},\n\tnumber = {11},\n\tjournal = {IEEE Sensors Journal},\n\tauthor = {Vilela, J. and Kashino, Z. and Ly, R. and Nejat, G. 
and Benhabib, B.},\n\tmonth = jun,\n\tyear = {2016},\n\tkeywords = {Genetic algorithms, Mobile communication, Object detection, Probabilistic logic, Real-time systems, Sensor phenomena and characterization, Sensors, Static-sensor networks, Target tracking, dynamic approach, expanding search areas, genetic algorithms, mobile-target detection, natural terrain variation, object detection, on-line reconfigurability, optimal deployment, real-time systems, sensor network deployment, sensors, static-sensor network, unstructured search areas},\n\tpages = {4405--4417},\n}\n\n
\n
\n\n\n
\n This paper proposes a novel strategy for the deployment of a static-sensor network based on the use of a target-motion probability model. The focus is on the real-time dynamic and optimal deployment of the network for detecting untrackable targets. The dynamic nature of the deployment refers to the on-line reconfigurability of the network as real-time information about the target becomes available. The optimal locations of the network nodes, in turn, are determined based on maximizing the probability of finding the target through the use of iso-cumulative-probability curves. The proposed strategy is adaptable to unstructured environments with natural terrain variation and the presence of obstacles. Extensive simulations, some of which are included in this paper, verified the advantage of our deployment strategy over other existing methods. Namely, the proposed strategy can tangibly increase the success rate of target detection, while reducing the mean detection time, when compared with uniform-coverage-based approaches that do not consider probabilistic target-motion modeling. A comprehensive example is also included, herein, to illustrate the successful application of our proposed deployment strategy to a wilderness search and rescue scenario, where both static and mobile sensors are employed within a hybrid sensor-deployment strategy.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n inproceedings\n \n \n (23)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Cyborgs, Human Augmentation, Cybernetics, and JIZAI Body.\n \n \n \n\n\n \n Inami, M.; Uriu, D.; Kashino, Z.; Yoshida, S.; Saito, H.; Maekawa, A.; and Kitazaki, M.\n\n\n \n\n\n\n In Augmented Humans Conference 2022, pages 13, Kashiwa, Chiba, Japan, March 2022. ACM\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{inami_cyborgs_2022,\n\taddress = {Kashiwa, Chiba, Japan},\n\ttitle = {Cyborgs, {Human} {Augmentation}, {Cybernetics}, and {JIZAI} {Body}},\n\tdoi = {10.1145/3519391.3519401},\n\tlanguage = {en},\n\tbooktitle = {Augmented {Humans} {Conference} 2022},\n\tpublisher = {ACM},\n\tauthor = {Inami, Masahiko and Uriu, Daisuke and Kashino, Zendai and Yoshida, Shigeo and Saito, Hiroto and Maekawa, Azumi and Kitazaki, Michiteru},\n\tmonth = mar,\n\tyear = {2022},\n\tpages = {13},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A New Mask for a New Normal: Investigating an AR Supported Future under COVID-19.\n \n \n \n\n\n \n Kashino, Z.; Uriu, D.; Zhang, Z.; Yoshida, S.; and Inami, M.\n\n\n \n\n\n\n In Augmented Humans Conference 2022, pages 11, Kashiwa, Chiba, Japan, March 2022. ACM\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{kashino_new_2022,\n\taddress = {Kashiwa, Chiba, Japan},\n\ttitle = {A {New} {Mask} for a {New} {Normal}: {Investigating} an {AR} {Supported} {Future} under {COVID}-19},\n\tdoi = {10.1145/3519391.3519409},\n\tlanguage = {en},\n\tbooktitle = {Augmented {Humans} {Conference} 2022},\n\tpublisher = {ACM},\n\tauthor = {Kashino, Zendai and Uriu, Daisuke and Zhang, Ziyue and Yoshida, Shigeo and Inami, Masahiko},\n\tmonth = mar,\n\tyear = {2022},\n\tpages = {11},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Independent Control of Supernumerary Appendages Exploiting Upper Limb Redundancy.\n \n \n \n \n\n\n \n Shimobayashi, H.; Sasaki, T.; Horie, A.; Arakawa, R.; Kashino, Z.; and Inami, M.\n\n\n \n\n\n\n In Augmented Humans Conference 2021, pages 19–30, New York, NY, USA, February 2021. Association for Computing Machinery\n \n\n\n\n
\n\n\n\n \n \n \"IndependentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{shimobayashi_independent_2021,\n\taddress = {New York, NY, USA},\n\ttitle = {Independent {Control} of {Supernumerary} {Appendages} {Exploiting} {Upper} {Limb} {Redundancy}},\n\tisbn = {978-1-4503-8428-5},\n\turl = {https://doi.org/10.1145/3458709.3458980},\n\tdoi = {10.1145/3458709.3458980},\n\tabstract = {In the field of physical augmentation, researchers have attempted to extend human capabilities by expanding the number of human appendages. To fully realize the potential of having an additional appendage, supernumerary appendages should be independently controllable without interfering with the functionality of existing appendages. Herein, we propose a novel approach for controlling supernumerary appendages by exploiting upper limb redundancy. We present a headphone-style visual sensing device and a recognition system to estimate shoulder movement. Through a set of user experiments, we evaluate the feasibility of our system and reveal the potential of independent control using upper limb redundancy. Our results indicate that participants are able to intentionally give commands through their shoulder motions. Finally, we demonstrate the wide range of supernumerary appendage control applications that our novel approach enables and discuss future prospects for our work.},\n\turldate = {2021-09-14},\n\tbooktitle = {Augmented {Humans} {Conference} 2021},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Shimobayashi, Hideki and Sasaki, Tomoya and Horie, Arata and Arakawa, Riku and Kashino, Zendai and Inami, Masahiko},\n\tmonth = feb,\n\tyear = {2021},\n\tkeywords = {Human Body Redundancy, Independent Control, Supernumerary Appendages, Supernumerary Robotic Limbs, Wearable Sensing},\n\tpages = {19--30},\n}\n\n
\n In the field of physical augmentation, researchers have attempted to extend human capabilities by expanding the number of human appendages. To fully realize the potential of having an additional appendage, supernumerary appendages should be independently controllable without interfering with the functionality of existing appendages. Herein, we propose a novel approach for controlling supernumerary appendages by exploiting upper limb redundancy. We present a headphone-style visual sensing device and a recognition system to estimate shoulder movement. Through a set of user experiments, we evaluate the feasibility of our system and reveal the potential of independent control using upper limb redundancy. Our results indicate that participants are able to intentionally give commands through their shoulder motions. Finally, we demonstrate the wide range of supernumerary appendage control applications that our novel approach enables and discuss future prospects for our work.\n
群ロボットの身体化に関する予備検討 [A Preliminary Study on the Embodiment of Swarm Robots]. 中川, 雅.; 柏野, 善.; 吉田, 成.; and 稲見, 昌. In 第26回VR学会大会 (26th Annual Conference of the Virtual Reality Society of Japan), pages 2C1–5, Japan, September 2021.
@inproceedings{__2021-2,\n\taddress = {Japan},\n\ttitle = {群ロボットの身体化に関する予備検討},\n\turl = {http://conference.vrsj.org/ac2021/program/doc/2C1-5.pdf},\n\tabstract = {複数の個体群が協調して動作するシステムを、群ロボットシステムと呼ぶ。このようなシステムは高い頑強性・柔軟性・拡張性を備えている。ヒトの身体が群ロボットで構成されればヒトもこれら性質を獲得することができる。本研究ではヒトの身体全て、もしくは一部を群ロボットで代替する「群身体」を提案しコンセプトを検証するプロトタイプの構成、群ロボットの身体化に必要な条件を探る実験の結果を報告する。},\n\turldate = {2021-09-14},\n\tbooktitle = {第{26回VR学会大会}},\n\tauthor = {中川, 雅人 and 柏野, 善大 and 吉田, 成朗 and 稲見, 昌彦},\n\tmonth = sep,\n\tyear = {2021},\n\tpages = {2C1--5},\n}\n\n
A system in which a group of multiple individuals operates cooperatively is called a swarm robot system. Such systems exhibit high robustness, flexibility, and scalability. If the human body were composed of swarm robots, humans too could acquire these properties. In this study, we propose the "swarm body," which replaces all or part of the human body with swarm robots, and report the design of a prototype for validating the concept and the results of experiments exploring the conditions required for embodying a robot swarm.
マスクには仮面を:AR技術を用いた対人距離の変容 [A Mask over the Mask: Transforming Interpersonal Distance Using AR Technology]. 柏野, 善.; 瓜生, 大.; and 稲見, 昌. In 第26回VR学会大会 (26th Annual Conference of the Virtual Reality Society of Japan), pages 1D2–3, Japan, September 2021.
@inproceedings{_ar_2021,\n\taddress = {Japan},\n\ttitle = {マスクには仮面を:{AR技術を用いた対人距離の変容}},\n\turl = {http://conference.vrsj.org/ac2021/program/doc/1D2-3.pdf},\n\tabstract = {コロナ禍において自宅外でのマスク着用は常識となっているが、その一方で顔や表情が見えないことはコミュニケーションを阻害する要因になっている。顔本来の表現力を取り戻すためにVR技術を駆使したマスクの「改造」を通じマスクの「透明化」や表情の「代行」などを実現する技術を組み込む試みなどがなされているが、その表現力は本物には及ばない。本研究ではマスクの着用とソーシャルディスタンスの確保が求められるウィズコロナの時代において新たな方向性で視覚的な表現を試みた。具体的にはAR技術を用いてバーチャルならではの動的な表現力を持つ仮面を開発・試用し、本システムがもたらす経験を考察する。},\n\turldate = {2021-09-14},\n\tbooktitle = {第{26回VR学会大会}},\n\tauthor = {柏野, 善大 and 瓜生, 大輔 and 稲見, 昌彦},\n\tmonth = sep,\n\tyear = {2021},\n\tpages = {1D2--3},\n}\n\n
Wearing a mask outside the home has become the norm during the COVID-19 pandemic, but the resulting inability to see faces and facial expressions hinders communication. To restore the face's natural expressiveness, attempts have been made to "modify" masks with VR technology, for example by making the mask "transparent" or having it act as a "proxy" for facial expressions, but their expressiveness falls short of the real thing. In this study, we explored a new direction for visual expression in the with-COVID era, in which mask wearing and social distancing are required. Specifically, we developed and trial-tested an AR-based virtual mask with the dynamic expressiveness unique to virtual media, and we discuss the experience our system provides.
High-Speed Non-Contact Thermal Display Using Infrared Rays and Shutter Mechanism. Ichihashi, S.; Horie, A.; Hirose, M.; Kashino, Z.; Yoshida, S.; and Inami, M. In Adjunct Proceedings of the 2021 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2021 ACM International Symposium on Wearable Computers (UbiComp '21), pages 565–569, New York, NY, USA, September 2021. Association for Computing Machinery.
@inproceedings{ichihashi_high-speed_2021,\n\taddress = {New York, NY, USA},\n\tseries = {{UbiComp} '21},\n\ttitle = {High-{Speed} {Non}-{Contact} {Thermal} {Display} {Using} {Infrared} {Rays} and {Shutter} {Mechanism}},\n\tisbn = {978-1-4503-8461-2},\n\turl = {https://doi.org/10.1145/3460418.3480160},\n\tdoi = {10.1145/3460418.3480160},\n\tabstract = {Considerable literature has explored the topic of haptics, including thermal presentation, in virtual reality (VR). However, the existing thermal presentation methods have limitations such as low responsiveness and need for contacts. As such, this study proposes a high-speed, non-contact thermal presentation method using infrared rays and a shutter mechanism consisting of fins. The results of the preliminary study suggested that the method could present temperature with different intensities according to the angle of the fin. Finally, several applications taking advantage of its features of high-speed and non-contact thermal presentation are proposed.},\n\turldate = {2021-10-01},\n\tbooktitle = {Adjunct {Proceedings} of the 2021 {ACM} {International} {Joint} {Conference} on {Pervasive} and {Ubiquitous} {Computing} and {Proceedings} of the 2021 {ACM} {International} {Symposium} on {Wearable} {Computers}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Ichihashi, Sosuke and Horie, Arata and Hirose, Masaharu and Kashino, Zendai and Yoshida, Shigeo and Inami, Masahiko},\n\tmonth = sep,\n\tyear = {2021},\n\tkeywords = {haptics, temperature, thermal feedback},\n\tpages = {565--569},\n}\n\n
\n Considerable literature has explored the topic of haptics, including thermal presentation, in virtual reality (VR). However, the existing thermal presentation methods have limitations such as low responsiveness and need for contacts. As such, this study proposes a high-speed, non-contact thermal presentation method using infrared rays and a shutter mechanism consisting of fins. The results of the preliminary study suggested that the method could present temperature with different intensities according to the angle of the fin. Finally, several applications taking advantage of its features of high-speed and non-contact thermal presentation are proposed.\n
Two-Dimensional Moving Phantom Sensation Created by Rotational Skin Stretch Distribution. Horie, A.; Kashino, Z.; Shimobayashi, H.; and Inami, M. In 2021 IEEE World Haptics Conference (WHC), pages 139–144, July 2021.
@inproceedings{horie_two-dimensional_2021,\n\ttitle = {Two-{Dimensional} {Moving} {Phantom} {Sensation} {Created} by {Rotational} {Skin} {Stretch} {Distribution}},\n\tdoi = {10.1109/WHC49131.2021.9517252},\n\tabstract = {This study reports on one of the first attempts to achieve a sense of tactile motion in two-dimensions using an array of stationary rotational skin stretch elements presenting a moving phantom sensation. Herein, we propose an algorithm with two independent control parameters (the size of the stimulus area and the size of the area with maximum stimulus) for generating the moving phantom sensation using our skin stretch tactile display device. In our investigations, we first conducted an experiment to identify the relationship between the mechanical action of our device (a rotation) and the perceived stimulus intensity. Then, using the proposed algorithm, we evaluated the continuity, consistency, and position clarity of phantom sensations under several control parameters and motion direction conditions. Our results showed that both control parameters had a significant effect on the continuity of the stimulus in all directions. Furthermore, we confirmed that, using our current algorithm, the size of the stimulus area has a trade-off relation with the stimulus position clarity. We conclude the paper by discussing our findings, new control parameters that may directly determine continuity of the phantom sensation, and factors that may contribute to the consistency of stimulus intensity. This paper provides fundamental insights into the presentation of skin stretch based moving phantom sensations.},\n\tbooktitle = {2021 {IEEE} {World} {Haptics} {Conference} ({WHC})},\n\tauthor = {Horie, Arata and Kashino, Zendai and Shimobayashi, Hideki and Inami, Masahiko},\n\tmonth = jul,\n\tyear = {2021},\n\tkeywords = {Conferences, Estimation, Haptic interfaces, Object recognition, Phantoms, Quality of experience, Skin},\n\tpages = {139--144},\n}\n\n
\n This study reports on one of the first attempts to achieve a sense of tactile motion in two-dimensions using an array of stationary rotational skin stretch elements presenting a moving phantom sensation. Herein, we propose an algorithm with two independent control parameters (the size of the stimulus area and the size of the area with maximum stimulus) for generating the moving phantom sensation using our skin stretch tactile display device. In our investigations, we first conducted an experiment to identify the relationship between the mechanical action of our device (a rotation) and the perceived stimulus intensity. Then, using the proposed algorithm, we evaluated the continuity, consistency, and position clarity of phantom sensations under several control parameters and motion direction conditions. Our results showed that both control parameters had a significant effect on the continuity of the stimulus in all directions. Furthermore, we confirmed that, using our current algorithm, the size of the stimulus area has a trade-off relation with the stimulus position clarity. We conclude the paper by discussing our findings, new control parameters that may directly determine continuity of the phantom sensation, and factors that may contribute to the consistency of stimulus intensity. This paper provides fundamental insights into the presentation of skin stretch based moving phantom sensations.\n
対面-ハイブリッド型会議においてCOVID-19感染対策と円滑な音声対話を両立させる試行実験 [A Trial Experiment on Balancing COVID-19 Infection Control and Smooth Spoken Dialogue in In-Person/Hybrid Meetings]. 瓜生, 大.; 澤田, 怜.; 柏野, 善.; and 稲見, 昌. In 第26回VR学会大会 (26th Annual Conference of the Virtual Reality Society of Japan), pages 1D2–3, Japan, September 2021.
@inproceedings{_-covid-19_2021,\n\taddress = {Japan},\n\ttitle = {対面-ハイブリッド型会議において{COVID}-19感染対策と円滑な音声対話を両立させる試行実験},\n\turl = {http://conference.vrsj.org/ac2021/program/doc/1D2-3.pdf},\n\tabstract = {コロナ禍において自宅外でのマスク着用は常識となっているが、その一方で顔や表情が見えないことはコミュニケーションを阻害する要因になっている。顔本来の表現力を取り戻すためにVR技術を駆使したマスクの「改造」を通じマスクの「透明化」や表情の「代行」などを実現する技術を組み込む試みなどがなされているが、その表現力は本物には及ばない。本研究ではマスクの着用とソーシャルディスタンスの確保が求められるウィズコロナの時代において新たな方向性で視覚的な表現を試みた。具体的にはAR技術を用いてバーチャルならではの動的な表現力を持つ仮面を開発・試用し、本システムがもたらす経験を考察する。},\n\turldate = {2021-09-14},\n\tbooktitle = {第{26回VR学会大会}},\n\tauthor = {瓜生, 大輔 and 澤田, 怜旺 and 柏野, 善大 and 稲見, 昌彦},\n\tmonth = sep,\n\tyear = {2021},\n\tpages = {1D2--3},\n}\n\n
Generating the Presence of Remote Mourners: a Case Study of Funeral Webcasting in Japan. Uriu, D.; Toshima, K.; Manabe, M.; Yazaki, T.; Funatsu, T.; Izumihara, A.; Kashino, Z.; Hiyama, A.; and Inami, M. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–14, New York, NY, USA, May 2021. Association for Computing Machinery.
@inproceedings{uriu_generating_2021,\n\taddress = {New York, NY, USA},\n\ttitle = {Generating the {Presence} of {Remote} {Mourners}: a {Case} {Study} of {Funeral} {Webcasting} in {Japan}},\n\tisbn = {978-1-4503-8096-6},\n\tshorttitle = {Generating the {Presence} of {Remote} {Mourners}},\n\turl = {https://doi.org/10.1145/3411764.3445617},\n\tabstract = {Funerals are irreplaceable events, especially for bereaved family members and relatives. However, the COVID-19 pandemic has prevented many people worldwide from attending their loved ones’ funerals. The authors had the opportunity to assist one family faced with this predicament by webcasting and recording funeral rites held near Tokyo in June, 2020. Using our original 360-degree Telepresence system and smartphones running Zoom, we enabled the deceased’s elder siblings to remotely attend the funeral and did our utmost to make them feel present in the funeral hall. Despite the webcasting via Zoom contributing more to their remote attendances than our system, we discovered thoughtful findings which could be useful for designing remote funeral attendances. 
From the findings, we also discuss how HCI designers can contribute to this highly sensitive issue, weaving together knowledge from various domains including techno-spiritual practices, thanato-sensitive designs; and other religious and cultural aspects related to death rituals.},\n\turldate = {2021-12-08},\n\tbooktitle = {Proceedings of the 2021 {CHI} {Conference} on {Human} {Factors} in {Computing} {Systems}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Uriu, Daisuke and Toshima, Kenta and Manabe, Minori and Yazaki, Takeru and Funatsu, Takeshi and Izumihara, Atsushi and Kashino, Zendai and Hiyama, Atsushi and Inami, Masahiko},\n\tmonth = may,\n\tyear = {2021},\n\tkeywords = {360-degree camera, Death Rituals, Funeral, Mourning and Memorialization, Techno-spiritual practices, Telepresence and Telexistence, Thanato-sensitivity},\n\tpages = {1--14},\n}\n\n
\n Funerals are irreplaceable events, especially for bereaved family members and relatives. However, the COVID-19 pandemic has prevented many people worldwide from attending their loved ones’ funerals. The authors had the opportunity to assist one family faced with this predicament by webcasting and recording funeral rites held near Tokyo in June, 2020. Using our original 360-degree Telepresence system and smartphones running Zoom, we enabled the deceased’s elder siblings to remotely attend the funeral and did our utmost to make them feel present in the funeral hall. Despite the webcasting via Zoom contributing more to their remote attendances than our system, we discovered thoughtful findings which could be useful for designing remote funeral attendances. From the findings, we also discuss how HCI designers can contribute to this highly sensitive issue, weaving together knowledge from various domains including techno-spiritual practices, thanato-sensitive designs; and other religious and cultural aspects related to death rituals.\n
Floral Tribute Ritual in Virtual Reality: Design and Validation of SenseVase with Virtual Memorial. Uriu, D.; Obushi, N.; Kashino, Z.; Hiyama, A.; and Inami, M. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–15, New York, NY, USA, May 2021. Association for Computing Machinery.
@inproceedings{uriu_floral_2021,\n\taddress = {New York, NY, USA},\n\ttitle = {Floral {Tribute} {Ritual} in {Virtual} {Reality}: {Design} and {Validation} of {SenseVase} with {Virtual} {Memorial}},\n\tisbn = {978-1-4503-8096-6},\n\tshorttitle = {Floral {Tribute} {Ritual} in {Virtual} {Reality}},\n\turl = {https://doi.org/10.1145/3411764.3445216},\n\tabstract = {While floral tributes are commonly used for the public commemoration of victims of disasters, war, and other accidents, flowers in vases color everyday life. In this research, these features of flowers are intertwined with the recent phenomenon of online memorials to develop a virtual floral tribute concept that includes physical rituals. We designed SenseVase, a smart vase to detect flowers placed in it, and a 3DCG Virtual Memorial that illustrates floral tributes given by people using SenseVases at home. This paper describes how we developed our design concept by reviewing previous literature and social aspects, and presents a video illustrating the concept. To validate the current concept, we interviewed several experts knowledgeable in public commemorations, virtual and online communities, and the floral business. Through a discussion of our findings from the design process and interviews, we propose a new direction for how HCI technology can contribute to public commemoration in addition to personal memorialization.},\n\turldate = {2021-09-14},\n\tbooktitle = {Proceedings of the 2021 {CHI} {Conference} on {Human} {Factors} in {Computing} {Systems}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Uriu, Daisuke and Obushi, Noriyasu and Kashino, Zendai and Hiyama, Atsushi and Inami, Masahiko},\n\tmonth = may,\n\tyear = {2021},\n\tkeywords = {Commemoration, Death Rituals, Memorialization, Mourning, Online Memorial, Research through Design, Techno-spiritual Practices, Thanatosensitive Design},\n\tpages = {1--15},\n}\n\n
\n While floral tributes are commonly used for the public commemoration of victims of disasters, war, and other accidents, flowers in vases color everyday life. In this research, these features of flowers are intertwined with the recent phenomenon of online memorials to develop a virtual floral tribute concept that includes physical rituals. We designed SenseVase, a smart vase to detect flowers placed in it, and a 3DCG Virtual Memorial that illustrates floral tributes given by people using SenseVases at home. This paper describes how we developed our design concept by reviewing previous literature and social aspects, and presents a video illustrating the concept. To validate the current concept, we interviewed several experts knowledgeable in public commemorations, virtual and online communities, and the floral business. Through a discussion of our findings from the design process and interviews, we propose a new direction for how HCI technology can contribute to public commemoration in addition to personal memorialization.\n
Digital Speech Makeup: Voice Conversion Based Altered Auditory Feedback for Transforming Self-Representation. Arakawa, R.; Kashino, Z.; Takamichi, S.; Verhulst, A.; and Inami, M. In Proceedings of the 2021 International Conference on Multimodal Interaction, pages 159–167, New York, NY, USA, October 2021. Association for Computing Machinery.
@inproceedings{arakawa_digital_2021,\n\taddress = {New York, NY, USA},\n\ttitle = {Digital {Speech} {Makeup}: {Voice} {Conversion} {Based} {Altered} {Auditory} {Feedback} for {Transforming} {Self}-{Representation}},\n\tisbn = {978-1-4503-8481-0},\n\tshorttitle = {Digital {Speech} {Makeup}},\n\turl = {https://doi.org/10.1145/3462244.3479934},\n\tabstract = {Makeup (i.e., cosmetics) has long been used to transform not only one’s appearance but also their self-representation. Previous studies have demonstrated that visual transformations can induce a variety of effects on self-representation. Herein, we introduce Digital Speech Makeup (DSM), the novel concept of using voice conversion (VC) based auditory feedback to transform human self-representation. We implemented a proof-of-concept system that leverages a state-of-the-art algorithm for near real-time VC and bone-conduction headphones for resolving speech disruptions caused by delayed auditory feedback. Our user study confirmed that conversing for a few dozen minutes using the system influenced participants’ speech ownership and implicit bias. Furthermore, we reviewed the participants’ comments about the experience of DSM and gained additional qualitative insight into possible future directions for the concept. Our work represents the first step towards utilizing VC to design various interpersonal interactions, centered on influencing the users’ psychological state.},\n\turldate = {2021-12-08},\n\tbooktitle = {Proceedings of the 2021 {International} {Conference} on {Multimodal} {Interaction}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Arakawa, Riku and Kashino, Zendai and Takamichi, Shinnosuke and Verhulst, Adrien and Inami, Masahiko},\n\tmonth = oct,\n\tyear = {2021},\n\tkeywords = {auditory feedback, self-representation, speech transformation, voice conversion},\n\tpages = {159--167},\n}\n\n
\n Makeup (i.e., cosmetics) has long been used to transform not only one’s appearance but also their self-representation. Previous studies have demonstrated that visual transformations can induce a variety of effects on self-representation. Herein, we introduce Digital Speech Makeup (DSM), the novel concept of using voice conversion (VC) based auditory feedback to transform human self-representation. We implemented a proof-of-concept system that leverages a state-of-the-art algorithm for near real-time VC and bone-conduction headphones for resolving speech disruptions caused by delayed auditory feedback. Our user study confirmed that conversing for a few dozen minutes using the system influenced participants’ speech ownership and implicit bias. Furthermore, we reviewed the participants’ comments about the experience of DSM and gained additional qualitative insight into possible future directions for the concept. Our work represents the first step towards utilizing VC to design various interpersonal interactions, centered on influencing the users’ psychological state.\n
視覚効果を含む触覚ディスプレイの提案及び基礎的評価 [Proposal and Basic Evaluation of a Haptic Display Incorporating Visual Effects]. 村田, 陵; 堀江, 新; 柏野, 善.; and 稲見, 昌. In 第26回VR学会大会 (26th Annual Conference of the Virtual Reality Society of Japan), pages 1D2–3, Japan, September 2021.
@inproceedings{__2021-1,\n\taddress = {Japan},\n\ttitle = {視覚効果を含む触覚ディスプレイの提案及び基礎的評価},\n\turl = {http://conference.vrsj.org/ac2021/program/doc/1D2-2.pdf},\n\tabstract = {本稿では視覚効果を含む触覚ディスプレイを提案する。従来は目に見える形で刺激提示を行う簡易的な触覚ディスプレイが無く、触覚体験が提示を受ける本人のみに限られていた。第三者から提示が視認できるディスプレイを提案することで触覚体験を共有し、ディスプレイを介した相互インタラクションが可能になる。そこで今回は視覚効果を含む触覚ディスプレイを作成し、刺激の空間的な流れの提示に関して基礎的評価を行った。},\n\turldate = {2021-09-14},\n\tbooktitle = {第{26回VR学会大会}},\n\tauthor = {村田, 陵 and 堀江, 新 and 柏野, 善大 and 稲見, 昌彦},\n\tmonth = sep,\n\tyear = {2021},\n\tpages = {1D2--3},\n}\n\n
In this paper, we propose a haptic display that incorporates visual effects. Until now there has been no simple haptic display that presents stimuli in a visible form, so the haptic experience has been limited to the person receiving the presentation. By proposing a display whose presentation is visible to third parties, the haptic experience can be shared, enabling mutual interaction through the display. We therefore built a haptic display with visual effects and conducted a basic evaluation of its presentation of a spatial flow of stimuli.
他者の視線に応じた温度提示による遠隔コミュニケーションへの影響 [Effects of Thermal Presentation Contingent on Another Person's Gaze on Remote Communication]. 市橋, 爽.; 堀江, 新; 柏野, 善.; 吉田, 成.; and 稲見, 昌. In 第26回VR学会大会 (26th Annual Conference of the Virtual Reality Society of Japan), pages 1D2–3, Japan, September 2021.
@inproceedings{__2021,\n\taddress = {Japan},\n\ttitle = {他者の視線に応じた温度提示による遠隔コミュニケーションへの影響},\n\turl = {http://conference.vrsj.org/ac2021/program/doc/1D2-1.pdf},\n\tabstract = {温度提示は環境・触感再現だけでなく,コミュニケーションでの行動・情動喚起への応用が期待される.本稿では,遠隔コミュニケーションで他者の視線に応じた温度提示をユーザに行うシステムを提案する.赤外線による温度提示により,他者の視線がユーザに向いている,つまり他者の注視点が他者のモニタ中心に近いほど強い温度提示をユーザは受ける.本システムによる他者存在感の増大や印象の変調などについて考察する.},\n\turldate = {2021-09-14},\n\tbooktitle = {第{26回VR学会大会}},\n\tauthor = {市橋, 爽介 and 堀江, 新 and 柏野, 善大 and 吉田, 成朗 and 稲見, 昌彦},\n\tmonth = sep,\n\tyear = {2021},\n\tpages = {1D2--3},\n}\n\n
Thermal presentation is expected to be applied not only to reproducing environments and textures but also to eliciting behavior and emotion in communication. In this paper, we propose a system that presents heat to a user in remote communication according to another person's gaze. Using infrared thermal presentation, the user receives a stronger thermal stimulus the more the other person's gaze is directed at the user, that is, the closer the other person's gaze point is to the center of their monitor. We discuss effects of the system such as an increased sense of the other person's presence and modulation of impressions.
EncounteredLimbs: A Room-scale Encountered-type Haptic Presentation using Wearable Robotic Arms. Horie, A.; Saraiji, M. Y.; Kashino, Z.; and Inami, M. In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pages 260–269, March 2021.
@inproceedings{horie_encounteredlimbs_2021,\n\ttitle = {{EncounteredLimbs}: {A} {Room}-scale {Encountered}-type {Haptic} {Presentation} using {Wearable} {Robotic} {Arms}},\n\tshorttitle = {{EncounteredLimbs}},\n\tdoi = {10.1109/VR50410.2021.00048},\n\tabstract = {Haptic information significantly improves human awareness of objects in virtual reality. One way of presenting this information is via encountered-type haptic feedback. An advantage of encounter type feedback is that it enables physical interaction with virtual environments without the need for specialized haptic devices on the hand. Additionally, encountered-type haptics are known for being able to provide high quality contact feedback to the user. However, such systems are typically designed to be grounded (i.e., fixed to the floor). As such, they typically have bounded workspace and a limited range of possible applications. In this work, we present a novel, wearable approach to presenting a user with encountered-type haptic feedback. We realize this feedback using a wearable robotic limb that holds a plate where the user might interact with their environment. An appropriate location for the plate is determined by a novel haptic solver while control of the arm is made possible using motion trackers. The system was designed to be stable, for presenting consistent haptic feedback, while also being safe and lightweight for wearability. By making the feedback system wearable, we enable the presentation of stiff feedback while maintaining the spatial freedom and unbounded workspace of natural hand interaction. Herein, we present the design of the novel system, mechanical and safety considerations when designing a wearable encountered-type system, and an evaluation of the system. A technical evaluation of the implemented system showed that the system provides a stiffness over 25 N/m and slant angle errors under 3°. 
Three user studies show the limitations of haptic slant perception in humans and the quantitative and qualitative effectiveness of the current prototype system. We conclude the paper by discussing various potentialapplications and possible improvements that could be made to the system.},\n\tbooktitle = {2021 {IEEE} {Virtual} {Reality} and {3D} {User} {Interfaces} ({VR})},\n\tauthor = {Horie, Arata and Saraiji, MHD Yamen and Kashino, Zendai and Inami, Masahiko},\n\tmonth = mar,\n\tyear = {2021},\n\tkeywords = {End effectors, Human-centered computing-Human computer interaction(HCI)-Interaction devices-Haptic devices, Tactile sensors, Three-dimensional displays, Tracking, User interfaces, Virtual environments, Visualization},\n\tpages = {260--269},\n}\n\n
Haptic information significantly improves human awareness of objects in virtual reality. One way of presenting this information is via encountered-type haptic feedback. An advantage of encounter type feedback is that it enables physical interaction with virtual environments without the need for specialized haptic devices on the hand. Additionally, encountered-type haptics are known for being able to provide high quality contact feedback to the user. However, such systems are typically designed to be grounded (i.e., fixed to the floor). As such, they typically have bounded workspace and a limited range of possible applications. In this work, we present a novel, wearable approach to presenting a user with encountered-type haptic feedback. We realize this feedback using a wearable robotic limb that holds a plate where the user might interact with their environment. An appropriate location for the plate is determined by a novel haptic solver while control of the arm is made possible using motion trackers. The system was designed to be stable, for presenting consistent haptic feedback, while also being safe and lightweight for wearability. By making the feedback system wearable, we enable the presentation of stiff feedback while maintaining the spatial freedom and unbounded workspace of natural hand interaction. Herein, we present the design of the novel system, mechanical and safety considerations when designing a wearable encountered-type system, and an evaluation of the system. A technical evaluation of the implemented system showed that the system provides a stiffness over 25 N/m and slant angle errors under 3°. Three user studies show the limitations of haptic slant perception in humans and the quantitative and qualitative effectiveness of the current prototype system. We conclude the paper by discussing various potential applications and possible improvements that could be made to the system.
Hand with Sensing Sphere: Body-Centered Spatial Interactions with a Hand-Worn Spherical Camera. Arakawa, R.; Maekawa, A.; Kashino, Z.; and Inami, M. In Symposium on Spatial User Interaction (SUI '20), pages 1–10, New York, NY, USA, October 2020. Association for Computing Machinery.
@inproceedings{arakawa_hand_2020,\n\taddress = {New York, NY, USA},\n\tseries = {{SUI} '20},\n\ttitle = {Hand with {Sensing} {Sphere}: {Body}-{Centered} {Spatial} {Interactions} with a {Hand}-{Worn} {Spherical} {Camera}},\n\tisbn = {978-1-4503-7943-4},\n\tshorttitle = {Hand with {Sensing} {Sphere}},\n\turl = {https://doi.org/10.1145/3385959.3418450},\n\tdoi = {10.1145/3385959.3418450},\n\tabstract = {We propose a novel body-centered interaction system making use of a spherical camera attached to a hand. Its broad and unique field of view enables an all-in-one approach to sensing multiple pieces of contextual information in hand-based spatial interactions: (i) hand location on the body surface, (ii) hand posture, (iii) hand keypoints in certain postures, and (iv) the near-hand environment. The proposed system makes use of a deep-learning approach to perform hand location and posture recognition. The proposed system is capable of achieving high hand location and posture recognition accuracy, 85.0 \\% and 88.9 \\% respectively, after collecting sufficient data and training. Our result and example demonstrations show the potential of utilizing 360° cameras for vision-based sensing in context-aware body-centered spatial interactions.},\n\turldate = {2020-12-17},\n\tbooktitle = {Symposium on {Spatial} {User} {Interaction}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Arakawa, Riku and Maekawa, Azumi and Kashino, Zendai and Inami, Masahiko},\n\tmonth = oct,\n\tyear = {2020},\n\tkeywords = {body-centered spatial interaction, hand interaction, spherical camera},\n\tpages = {1--10},\n}\n\n
\n
\n\n\n
\n We propose a novel body-centered interaction system making use of a spherical camera attached to a hand. Its broad and unique field of view enables an all-in-one approach to sensing multiple pieces of contextual information in hand-based spatial interactions: (i) hand location on the body surface, (ii) hand posture, (iii) hand keypoints in certain postures, and (iv) the near-hand environment. The proposed system makes use of a deep-learning approach to perform hand location and posture recognition. The proposed system is capable of achieving high hand location and posture recognition accuracy, 85.0 % and 88.9 % respectively, after collecting sufficient data and training. Our result and example demonstrations show the potential of utilizing 360° cameras for vision-based sensing in context-aware body-centered spatial interactions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n 温度提示による方向知覚への影響の基礎的調査.\n \n \n \n\n\n \n 市橋 爽介; 堀江 新; 齋藤 寛人; 柏野 善大; and 稲見 昌彦\n\n\n \n\n\n\n In 計測自動制御学会システムインテグレーション部門講演会, Online, December 2020. \n \n\n\n\n
\n
@inproceedings{___2020-1,\n\taddress = {Online},\n\ttitle = {温度提示による方向知覚への影響の基礎的調査},\n\tabstract = {Human spatial perception considers multiple senses. While there exists research relating some senses to spatial perception, research into the effect of thermal sensations is sparse. Herein, we investigate the effect of directional temperature stimulus on orientation perception. Through our experiments, we demonstrated that directional thermal stimulus has the potential to enhance manipulation of orientation perception when used with visual stimulus.},\n\tlanguage = {Japanese},\n\tbooktitle = {計測自動制御学会システムインテグレーション部門講演会},\n\tauthor = {{市橋 爽介} and {堀江 新} and {齋藤 寛人} and {柏野 善大} and {稲見 昌彦}},\n\tmonth = dec,\n\tyear = {2020},\n}\n\n
\n
\n\n\n
\n Human spatial perception considers multiple senses. While there exists research relating some senses to spatial perception, research into the effect of thermal sensations is sparse. Herein, we investigate the effect of directional temperature stimulus on orientation perception. Through our experiments, we demonstrated that directional thermal stimulus has the potential to enhance manipulation of orientation perception when used with visual stimulus.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Exploring in the City with Your Personal Guide:Design and User Study of T-Leap, a Telepresence System.\n \n \n \n \n\n\n \n Manabe, M.; Uriu, D.; Funatsu, T.; Izumihara, A.; Yazaki, T.; Chen, I.; Liao, Y.; Liu, K.; Ko, J.; Kashino, Z.; Hiyama, A.; and Inami, M.\n\n\n \n\n\n\n In 19th International Conference on Mobile and Ubiquitous Multimedia, pages 96–106, New York, NY, USA, November 2020. Association for Computing Machinery\n \n\n\n\n
\n
@inproceedings{manabe_exploring_2020,\n\taddress = {New York, NY, USA},\n\ttitle = {Exploring in the {City} with {Your} {Personal} {Guide}:{Design} and {User} {Study} of {T}-{Leap}, a {Telepresence} {System}},\n\tisbn = {978-1-4503-8870-2},\n\tshorttitle = {Exploring in the {City} with {Your} {Personal} {Guide}},\n\turl = {https://doi.org/10.1145/3428361.3428382},\n\tdoi = {10.1145/3428361.3428382},\n\tabstract = {This paper describes a field study conducted with our system, T-Leap, a telepresence system connecting one person (the Viewer), situated indoors, with multiple destinations (the Nodes), that roam outdoors. Here, each Node is a person wearing a module that includes a 360-degree camera and a microphone-speaker. Through our study, we demonstrate that T-Leap enables the Viewer to perform various interactions with the Nodes including being helped by them, collaborating with them, and guiding them. These interactions were demonstrated through three studies completing different tasks: 1) Nodes purchasing souvenirs for the Viewer, 2) Nodes finding objects in the park, and 3) Viewer guiding Nodes to purchase things. The studies were primarily conducted with Taiwanese locals and Japanese visitors in Taipei. Throughout the studies, we found that T-Leap worked especially well for mediating communication between a Viewer with local knowledge acting as a guide and several Nodes who were being guided. To conclude the paper, we broadly discuss our findings, the lessons we learned from our field study, and present recommendations for the future development of mobile and wearable telepresence systems.},\n\turldate = {2020-12-17},\n\tbooktitle = {19th {International} {Conference} on {Mobile} and {Ubiquitous} {Multimedia}},\n\tpublisher = {Association for Computing Machinery},\n\tauthor = {Manabe, Minori and Uriu, Daisuke and Funatsu, Takeshi and Izumihara, Atsushi and Yazaki, Takeru and Chen, I-Hsin and Liao, Yi-Ya and Liu, Kang-Yi and Ko, Ju-Chun and Kashino, Zendai and Hiyama, Atsushi and Inami, Masahiko},\n\tmonth = nov,\n\tyear = {2020},\n\tkeywords = {Remote Communication, Research through Design, Telepresence},\n\tpages = {96--106},\n}\n\n
\n
\n\n\n
\n This paper describes a field study conducted with our system, T-Leap, a telepresence system connecting one person (the Viewer), situated indoors, with multiple destinations (the Nodes), that roam outdoors. Here, each Node is a person wearing a module that includes a 360-degree camera and a microphone-speaker. Through our study, we demonstrate that T-Leap enables the Viewer to perform various interactions with the Nodes including being helped by them, collaborating with them, and guiding them. These interactions were demonstrated through three studies completing different tasks: 1) Nodes purchasing souvenirs for the Viewer, 2) Nodes finding objects in the park, and 3) Viewer guiding Nodes to purchase things. The studies were primarily conducted with Taiwanese locals and Japanese visitors in Taipei. Throughout the studies, we found that T-Leap worked especially well for mediating communication between a Viewer with local knowledge acting as a guide and several Nodes who were being guided. To conclude the paper, we broadly discuss our findings, the lessons we learned from our field study, and present recommendations for the future development of mobile and wearable telepresence systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 展示空間を追体験するバーチャルミュージアム:日本科学未来館における協同デザインプロセス.\n \n \n \n \n\n\n \n 佐々木 智也; 瓜生 大輔; 船津 武志; 登嶋 健太; 泉原 厚史; 柏野 善大; 檜山 敦; and 稲見 昌彦\n\n\n \n\n\n\n In 第25回VR学会大会, Tokyo, September 2020. 日本バーチャルリアリティ学会\n \n\n\n\n
\n
@inproceedings{___2020,\n\taddress = {Tokyo},\n\ttitle = {展示空間を追体験するバーチャルミュージアム:日本科学未来館における協同デザインプロセス},\n\turl = {http://conference.vrsj.org/ac2020/program/doc/1A1-2_PR0166.pdf},\n\tabstract = {ミュージアム展示の鑑賞体験は、個々の展示物、展示空間全体、そして学芸員による解説といった複合的な情報によりもたらされる。全天球映像を駆使したVR技術の活用により、展示物のデジタルデータ化のみならず、展示空間を含む鑑賞体験も記録・再現可能となる。本研究では、日本科学未来館と協同で行った「鑑賞者視点のミュージアム展示アーカイブ」のデザインプロセスを報告する。また、制作したバーチャルミュージアムについて述べる。},\n\tlanguage = {Japanese},\n\turldate = {2020-12-18},\n\tbooktitle = {第{25回VR学会大会}},\n\tpublisher = {日本バーチャルリアリティ学会},\n\tauthor = {{佐々木 智也} and {瓜生 大輔} and {船津 武志} and {登嶋 健太} and {泉原 厚史} and {柏野 善大} and {檜山 敦} and {稲見 昌彦}},\n\tmonth = sep,\n\tyear = {2020},\n}\n\n
\n
\n\n\n
\n The experience of viewing a museum exhibition arises from a combination of information: the individual exhibits, the exhibition space as a whole, and explanations given by curators. By applying VR technology that makes full use of omnidirectional (360-degree) video, not only can exhibits be digitized, but the viewing experience, including the exhibition space itself, can also be recorded and reproduced. This study reports on the design process of a "visitor-perspective museum exhibition archive" carried out in collaboration with Miraikan (the National Museum of Emerging Science and Innovation). We also describe the virtual museum that was produced.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Multi-UAV based Autonomous Wilderness Search and Rescue using Target Iso-Probability Curves.\n \n \n \n\n\n \n Kashino, Z.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n In Proceedings of the International Conference on Unmanned Aircraft Systems, pages 636–643, June 2019. \n \n\n\n\n
\n
@inproceedings{kashino_multi-uav_2019,\n\ttitle = {Multi-{UAV} based {Autonomous} {Wilderness} {Search} and {Rescue} using {Target} {Iso}-{Probability} {Curves}},\n\tcopyright = {All rights reserved},\n\tdoi = {10.1109/ICUAS.2019.8798354},\n\tabstract = {The application of unmanned aerial vehicles (UAVs) to searches of lost persons in the wilderness can significantly contribute to the success of the missions. Maximizing the effectiveness of an autonomous multi-UAV search team, however, requires optimal task allocation between the team members, as well as the planning of the individual flight trajectories. This paper addresses this constrained resource-allocation optimization problem via the use of iso-probability curves that represent probabilistic target-location information in a search region growing with time. The optimization metric used is the allocation of the search effort proportional to the target location likelihood. The proposed method also avoids redundancy in coverage while planning the UAV trajectories. Numerous simulated search experiments, two of which are detailed herein, were carried out to demonstrate our method's effectiveness in wilderness search and rescue (WiSAR) planning using a multi-UAV team. Extensive comparative studies were also conducted to validate the tangible superiority of our proposed method when compared to existing WiSAR techniques in the literature.},\n\tbooktitle = {Proceedings of the {International} {Conference} on {Unmanned} {Aircraft} {Systems}},\n\tauthor = {Kashino, Z. and Nejat, G. and Benhabib, B.},\n\tmonth = jun,\n\tyear = {2019},\n\tkeywords = {Autonomous mobile-target search, UAV trajectories, WiSAR, autonomous aerial vehicles, autonomous wilderness search, flight trajectories, iso-probability curves, multi-UAV task allocation, multi-robot systems, multiUAV search team, optimal task allocation, optimisation, optimization, path planning, probabilistic target-location information, probability, rescue robots, resource allocation, resource-allocation optimization problem, search effort, search region, simulated search experiments, target iso-probability curves, target location likelihood, target tracking, team members, trajectory optimisation (aerospace), unmanned aerial vehicles, wilderness search and rescue planning, wilderness search and rescue},\n\tpages = {636--643},\n}\n\n
\n
\n\n\n
\n The application of unmanned aerial vehicles (UAVs) to searches of lost persons in the wilderness can significantly contribute to the success of the missions. Maximizing the effectiveness of an autonomous multi-UAV search team, however, requires optimal task allocation between the team members, as well as the planning of the individual flight trajectories. This paper addresses this constrained resource-allocation optimization problem via the use of iso-probability curves that represent probabilistic target-location information in a search region growing with time. The optimization metric used is the allocation of the search effort proportional to the target location likelihood. The proposed method also avoids redundancy in coverage while planning the UAV trajectories. Numerous simulated search experiments, two of which are detailed herein, were carried out to demonstrate our method's effectiveness in wilderness search and rescue (WiSAR) planning using a multi-UAV team. Extensive comparative studies were also conducted to validate the tangible superiority of our proposed method when compared to existing WiSAR techniques in the literature.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Motion Control of a Wheeled Millirobot.\n \n \n \n\n\n \n Drisdelle, R.; Kashino, Z.; Pineros, L.; Kim, J. Y.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n In Proceedings of the International Conference of Control, Dynamic Systems, and Robotics, pages 124–1 – 124–6, Toronto, ON, Canada, August 2017. \n \n\n\n\n
\n
@inproceedings{drisdelle_motion_2017,\n\taddress = {Toronto, ON, Canada},\n\ttitle = {Motion {Control} of a {Wheeled} {Millirobot}},\n\tcopyright = {All rights reserved},\n\tdoi = {10.11159/cdsr17.124},\n\tbooktitle = {Proceedings of the {International} {Conference} of {Control}, {Dynamic} {Systems}, and {Robotics}},\n\tauthor = {Drisdelle, Rachel and Kashino, Zendai and Pineros, Laura and Kim, Justin Y. and Nejat, Goldie and Benhabib, Beno},\n\tmonth = aug,\n\tyear = {2017},\n\tpages = {124--1 -- 124--6},\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A multi-robot sensor-delivery planning strategy for static-sensor networks.\n \n \n \n\n\n \n Kashino, Z.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 6640–6647, September 2017. \n \n\n\n\n
\n
@inproceedings{kashino_multi-robot_2017,\n\ttitle = {A multi-robot sensor-delivery planning strategy for static-sensor networks},\n\tcopyright = {All rights reserved},\n\tdoi = {10.1109/IROS.2017.8206578},\n\tabstract = {This paper discusses the time-phased deployment of wireless sensor networks, applied to surveillance areas growing in time. The focus herein is on the planning of the time-efficient delivery of static sensors to their designated nodes, given a network configuration. The novelty of the proposed strategy is that it determines optimal delivery plans for spatio-temporally constrained static-sensor networks using multi-robot teams. The proposed sensor delivery planning strategy starts with an already determined (optimal) network plan specified by sensor placement locations (i.e., nodes) and deployment times. Thus, the goal at hand is to determine the optimal routes for the robots delivering the sensors to their intended locations just-in-time. The travel routes are, thus, determined to maximize spare time for the robots between the nodes. The problem is similar to the multiple travelling salesperson problem, but with temporal constraints. Namely, sensors must be delivered to their designated nodes at designated (optimized) times in order to maintain the optimal deployment of the network configuration. Furthermore, the strategy is designed to be adaptive to new information that can become available during the search for the mobile target, allowing for replanning of the sensor network (i.e., new sensor locations and new deployment times). Numerous simulated experiments were conducted to validate the proposed strategy.},\n\tbooktitle = {Proceedings of the {IEEE}/{RSJ} {International} {Conference} on {Intelligent} {Robots} and {Systems}},\n\tauthor = {Kashino, Z. and Nejat, G. and Benhabib, B.},\n\tmonth = sep,\n\tyear = {2017},\n\tkeywords = {Capacitive sensors, Mobile communication, Planning, Robot sensing systems, Wireless sensor networks, mobile robots, multi-robot systems, multirobot sensor-delivery planning strategy, multirobot teams, network configuration, optimal delivery plans, optimal routes, optimisation, path planning, sensor placement, sensor placement locations, sensor locations, static-sensor networks, time-efficient delivery, wireless sensor networks},\n\tpages = {6640--6647},\n}\n\n
\n
\n\n\n
\n This paper discusses the time-phased deployment of wireless sensor networks, applied to surveillance areas growing in time. The focus herein is on the planning of the time-efficient delivery of static sensors to their designated nodes, given a network configuration. The novelty of the proposed strategy is that it determines optimal delivery plans for spatio-temporally constrained static-sensor networks using multi-robot teams. The proposed sensor delivery planning strategy starts with an already determined (optimal) network plan specified by sensor placement locations (i.e., nodes) and deployment times. Thus, the goal at hand is to determine the optimal routes for the robots delivering the sensors to their intended locations just-in-time. The travel routes are, thus, determined to maximize spare time for the robots between the nodes. The problem is similar to the multiple travelling salesperson problem, but with temporal constraints. Namely, sensors must be delivered to their designated nodes at designated (optimized) times in order to maintain the optimal deployment of the network configuration. Furthermore, the strategy is designed to be adaptive to new information that can become available during the search for the mobile target, allowing for replanning of the sensor network (i.e., new sensor locations and new deployment times). Numerous simulated experiments were conducted to validate the proposed strategy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n mROBerTO: A modular millirobot for swarm-behavior studies.\n \n \n \n\n\n \n Kim, J. Y.; Colaco, T.; Kashino, Z.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2109–2114, Daejeon, Korea, October 2016. \n \n\n\n\n
\n
@inproceedings{kim_mroberto:_2016,\n\taddress = {Daejeon, Korea},\n\ttitle = {{mROBerTO}: {A} modular millirobot for swarm-behavior studies},\n\tcopyright = {All rights reserved},\n\tshorttitle = {{mROBerTO}},\n\tdoi = {10.1109/IROS.2016.7759331},\n\tabstract = {Millirobots have increasingly become popular over the past several years, especially for swarm-behavior studies, allowing researchers to run experiments with a large number of units in limited workspaces. However, as these robots have become smaller in size, their sensory capabilities and battery life have been reduced. A number of these have also been customized, with few off-the-shelf components, exhibiting integral (i.e., non-modular) designs. In response to the above concerns, this paper presents a novel open-source millirobot with a modular design based on the use of easily sourced elements and off-the-shelf components. The proposed milli-robot-Toronto (mROBerTO) is a 16×16 mm² robot with a variety of sensors (including proximity, IMU, compass, ambient light, and camera). mROBerTO is capable of formation control using an IR emitter and detector add-on. It can also communicate via Bluetooth Smart, ANT+, or both concurrently. It is equipped with an ARM processor for handling complex tasks and has a flash memory of 256 KB with over-the-air programming capability.},\n\tbooktitle = {Proceedings of the {IEEE}/{RSJ} {International} {Conference} on {Intelligent} {Robots} and {Systems}},\n\tauthor = {Kim, J. Y. and Colaco, T. and Kashino, Z. and Nejat, G. and Benhabib, B.},\n\tmonth = oct,\n\tyear = {2016},\n\tkeywords = {Cameras, Robot vision systems, Wheels, formation control, mROBerTO, mobile robots, modular millirobot, novel open-source millirobot, swarm-behavior studies},\n\tpages = {2109--2114},\n}\n
\n
\n\n\n
\n Millirobots have increasingly become popular over the past several years, especially for swarm-behavior studies, allowing researchers to run experiments with a large number of units in limited workspaces. However, as these robots have become smaller in size, their sensory capabilities and battery life have been reduced. A number of these have also been customized, with few off-the-shelf components, exhibiting integral (i.e., non-modular) designs. In response to the above concerns, this paper presents a novel open-source millirobot with a modular design based on the use of easily sourced elements and off-the-shelf components. The proposed milli-robot-Toronto (mROBerTO) is a 16×16 mm² robot with a variety of sensors (including proximity, IMU, compass, ambient light, and camera). mROBerTO is capable of formation control using an IR emitter and detector add-on. It can also communicate via Bluetooth Smart, ANT+, or both concurrently. It is equipped with an ARM processor for handling complex tasks and has a flash memory of 256 KB with over-the-air programming capability.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n An adaptive static-sensor network deployment strategy for detecting mobile targets.\n \n \n \n\n\n \n Kashino, Z.; Vilela, J.; Kim, J. Y.; Nejat, G.; and Benhabib, B.\n\n\n \n\n\n\n In Proceedings of the IEEE International Symposium on Safety, Security, and Rescue Robotics, pages 1–8, Lausanne, Switzerland, October 2016. \n \n\n\n\n
\n
@inproceedings{kashino_adaptive_2016,\n\taddress = {Lausanne, Switzerland},\n\ttitle = {An adaptive static-sensor network deployment strategy for detecting mobile targets},\n\tcopyright = {All rights reserved},\n\tdoi = {10.1109/SSRR.2016.7784269},\n\tabstract = {The mobile-target search problem has been, typically, addressed in the literature through the sole use of mobile agents. Recently, however, it has been shown that the use of static-sensor networks could significantly contribute to the likelihood of detecting a mobile target and in a shorter time. In this paper, thus, we propose a novel adaptive and optimal static-sensor network deployment strategy to detect un-trackable targets in unstructured environments. The strategy utilizes a probabilistic target-motion model representative of the demographic group to which the target belongs and real-time location history information to construct a target-location probability distribution function over the search region. The novelty of our strategy lies in the utilization of a time-varying target-location probability distribution in order to deploy sensors in a manner that is both maximally adaptive and optimal for every deployment. Network deployment for a wilderness search and rescue problem is also presented in detail as an example case. Furthermore, numerous factors that may influence the performance of our deployment strategy are discussed, including a network coverage comparative study.},\n\tbooktitle = {Proceedings of the {IEEE} {International} {Symposium} on {Safety}, {Security}, and {Rescue} {Robotics}},\n\tauthor = {Kashino, Z. and Vilela, J. and Kim, J. Y. and Nejat, G. and Benhabib, B.},\n\tmonth = oct,\n\tyear = {2016},\n\tkeywords = {Decision support systems, Robots, Safety, Security, adaptive static-sensor network deployment strategy, mobile agents, mobile robots, mobile targets detection, mobile-target search problem, optimal static-sensor network deployment strategy, probabilistic target-motion model, probability, rescue robots, robot vision, sensor fusion, target-location probability distribution function, wilderness search and rescue problem},\n\tpages = {1--8},\n}\n\n
\n
\n\n\n
\n The mobile-target search problem has been, typically, addressed in the literature through the sole use of mobile agents. Recently, however, it has been shown that the use of static-sensor networks could significantly contribute to the likelihood of detecting a mobile target and in a shorter time. In this paper, thus, we propose a novel adaptive and optimal static-sensor network deployment strategy to detect un-trackable targets in unstructured environments. The strategy utilizes a probabilistic target-motion model representative of the demographic group to which the target belongs and real-time location history information to construct a target-location probability distribution function over the search region. The novelty of our strategy lies in the utilization of a time-varying target-location probability distribution in order to deploy sensors in a manner that is both maximally adaptive and optimal for every deployment. Network deployment for a wilderness search and rescue problem is also presented in detail as an example case. Furthermore, numerous factors that may influence the performance of our deployment strategy are discussed, including a network coverage comparative study.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);