article (27)

Adaptive Control Improves Sclera Force Safety in Robot-Assisted Eye Surgery: A Clinical Study.
Ebrahimi, A.; Urias, M.; Patel, N.; Taylor, R. H.; Gehlbach, P. L.; and Iordachita, I.
IEEE Transactions on Biomedical Engineering. 2021.

Abstract: The integration of robotics into retinal microsurgery leads to a reduction in surgeon perception of tool-to-tissue interaction forces. Tool shaft-to-sclera and tool tip-to-surgical target forces are rendered either markedly reduced or imperceptible to the surgeon. This blunting of human tactile sensory input is due to the inflexible mass and large inertia of the robotic arm as compared to the milli-Newton scale of the interaction forces encountered during ophthalmic surgery. The loss of human tactile feedback, as well as the comparatively high forces that are potentially imparted to the fragile tissues of the eye, identifies a potential iatrogenic risk during robotic eye surgery. In this paper, we aim to evaluate two variants of an adaptive force control scheme implemented on the Steady-Hand Eye Robot (SHER) that are intended to mitigate the risk of unsafe scleral forces. The present study enrolled ten retina fellows and ophthalmology residents into a simulated procedure, which simply asked the trainees to follow retinal vessels in a model retina surgery environment, with and without robotic assistance. The study was approved by the Johns Hopkins University Institutional Review Board. For this purpose, we have developed a force-sensing instrument (equipped with Fiber Bragg Gratings (FBG)) to attach to the robot. A piezo-actuated linear stage for creating random lateral motions of the eyeball phantom was provided to simulate disturbances during surgery. The SHER and all of its dependencies were set up in an operating room in the Wilmer Eye Institute at the Johns Hopkins Hospital. The clinicians conducted robot-assisted experiments with the adaptive controls incorporated as well as freehand manipulations. The results indicate that the Adaptive Norm Control (ANC) method is able to maintain scleral forces at predetermined safe levels better than even freehand manipulations. Novice clinicians in robot training, however, subjectively preferred freehand maneuvers over robotic manipulations. Clinician preferences once highly skilled with the robot are not assessed in this study.

BibTeX:
@article{ebrahimi2021adaptive,
  author = {Ebrahimi, Ali and Urias, Muller and Patel, Niravkumar and Taylor, Russell H. and Gehlbach, Peter L. and Iordachita, Iulian},
  doi = {10.1109/TBME.2021.3071135},
  issn = {15582531},
  journal = {IEEE Transactions on Biomedical Engineering},
  keywords = {FBG sensors,Force,Retina,Robot kinematics,Robot sensing systems,Robot-assisted surgery,Robots,Sclera force control,Surgery,Tools},
  publisher = {IEEE},
  title = {{Adaptive Control Improves Sclera Force Safety in Robot-Assisted Eye Surgery: A Clinical Study}},
  year = {2021}
}

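The study above relies on an FBG-instrumented tool to measure scleral forces. As an illustration only (the paper does not reproduce its calibration procedure), here is a minimal Python sketch of the common linear approach: fit a calibration matrix mapping wavelength shifts of three FBG fibers to transverse force, with common-mode subtraction to reject temperature drift. Function and array names are assumptions, not the authors' code.

import numpy as np

def fit_calibration(delta_lambda, f_ref):
    # delta_lambda: (n_samples, 3) FBG wavelength shifts (nm) from 3 fibers;
    # f_ref: (n_samples, 2) reference transverse forces (mN) from a load cell.
    # Subtract the per-sample fiber mean to reject common-mode (thermal) drift,
    # then least-squares fit the linear model F ~ d @ K.
    d = delta_lambda - delta_lambda.mean(axis=1, keepdims=True)
    K, *_ = np.linalg.lstsq(d, f_ref, rcond=None)
    return K  # shape (3, 2)

def scleral_force(delta_lambda_now, K):
    # delta_lambda_now: (3,) current wavelength shifts -> (Fx, Fy) in mN.
    d = delta_lambda_now - delta_lambda_now.mean()
    return d @ K
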
Body-Mounted Robotic System for MRI-Guided Shoulder Arthrography: Cadaver and Clinical Workflow Studies.
Patel, N.; Yan, J.; Li, G.; Monfaredi, R.; Priba, L.; Donald-Simpson, H.; Joy, J.; Dennison, A.; Melzer, A.; Sharma, K.; Iordachita, I.; and Cleary, K.
Frontiers in Robotics and AI, 8: 125. 2021.

Abstract: This paper presents an intraoperative MRI-guided, patient-mounted robotic system for shoulder arthrography procedures in pediatric patients. The robot is designed to be compact and lightweight and is constructed with nonmagnetic materials for MRI safety. Our goal is to transform the current two-step arthrography procedure (CT/x-ray-guided needle insertion followed by diagnostic MRI) into a streamlined single-step ionizing radiation-free procedure under MRI guidance. The MR-conditional robot was evaluated in a Thiel embalmed cadaver study and healthy volunteer studies. The robot was attached to the shoulder using straps and ten locations in the shoulder joint space were selected as targets. For the first target, contrast agent (saline) was injected to complete the clinical workflow. After each targeting attempt, a confirmation scan was acquired to analyze the needle placement accuracy. During the volunteer studies, a more comfortable and ergonomic shoulder brace was used, and the complete clinical workflow was followed to measure the total procedure time. In the cadaver study, the needle was successfully placed in the shoulder joint space in all the targeting attempts with translational and rotational accuracy of 2.07 ± 1.22 mm and 1.46 ± 1.06 degrees, respectively. The total time for the entire procedure was 94 min and the average time for each targeting attempt was 20 min in the cadaver study, while the average time for the entire workflow for the volunteer studies was 36 min. No image quality degradation due to the presence of the robot was detected. This Thiel-embalmed cadaver study along with the clinical workflow studies on human volunteers demonstrated the feasibility of using an MR-conditional, patient-mounted robotic system for MRI-guided shoulder arthrography procedures. Future work will be focused on moving the technology to clinical practice.

BibTeX:
@article{patel2021body,
  author = {Patel, Niravkumar and Yan, Jiawen and Li, Gang and Monfaredi, Reza and Priba, Lukasz and Donald-Simpson, Helen and Joy, Joyce and Dennison, Andrew and Melzer, Andreas and Sharma, Karun and Iordachita, Iulian and Cleary, Kevin},
  doi = {10.3389/frobt.2021.667121},
  issn = {22969144},
  journal = {Frontiers in Robotics and AI},
  pages = {125},
  publisher = {Frontiers},
  title = {{Body-Mounted Robotic System for MRI-Guided Shoulder Arthrography: Cadaver and Clinical Workflow Studies}},
  volume = {8},
  year = {2021}
}

Hybrid Robot-Assisted Frameworks for Endomicroscopy Scanning in Retinal Surgeries.
Li, Z.; Shahbazi, M.; Patel, N.; O'Sullivan, E.; Zhang, H.; Vyas, K.; Chalasani, P.; Deguet, A.; Gehlbach, P. L.; Iordachita, I.; Yang, G.; and Taylor, R. H.
IEEE Transactions on Medical Robotics and Bionics, 2(2): 176–187. 2020.

Abstract: High-resolution real-time intraocular imaging of the retina at the cellular level is very challenging due to the vulnerable and confined space within the eyeball as well as the limited availability of appropriate modalities. A probe-based confocal laser endomicroscopy (pCLE) system can be a potential imaging modality for improved diagnosis. The ability to visualize the retina at the cellular level could provide information that may predict surgical outcomes. The adoption of intraocular pCLE scanning is currently limited due to the narrow field of view and the micron-scale range of focus. In the absence of motion compensation, physiological tremors of the surgeons' hand and patient movements also contribute to the deterioration of the image quality. Therefore, an image-based hybrid control strategy is proposed to mitigate the above challenges. The proposed hybrid control strategy enables a shared control of the pCLE probe between surgeons and robots to scan the retina precisely, with the absence of hand tremors and with the advantages of an image-based auto-focus algorithm that optimizes the quality of pCLE images. The hybrid control strategy is deployed on two frameworks - cooperative and teleoperated. Better image quality, smoother motion, and reduced workload are all achieved in a statistically significant manner with the hybrid control frameworks.

BibTeX:
@article{li2020hybrid,
  archivePrefix = {arXiv},
  arxivId = {1909.06852},
  author = {Li, Zhaoshuo and Shahbazi, Mahya and Patel, Niravkumar and O'Sullivan, Eimear and Zhang, Haojie and Vyas, Khushi and Chalasani, Preetham and Deguet, Anton and Gehlbach, Peter L. and Iordachita, Iulian and Yang, Guang-Zhong and Taylor, Russell H.},
  doi = {10.1109/tmrb.2020.2988312},
  eprint = {1909.06852},
  issn = {2576-3202},
  journal = {IEEE Transactions on Medical Robotics and Bionics},
  number = {2},
  pages = {176--187},
  publisher = {IEEE},
  title = {{Hybrid Robot-Assisted Frameworks for Endomicroscopy Scanning in Retinal Surgeries}},
  volume = {2},
  year = {2020}
}

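The image-based auto-focus mentioned above optimizes pCLE image quality over the probe's micron-scale focus range. The paper does not spell out its metric here, so the following is a hedged sketch using a standard sharpness score (variance of the Laplacian) and a coarse axial sweep; the function names, step range, and robot interface are illustrative assumptions.

import numpy as np
from scipy.ndimage import laplace

def sharpness(img):
    # Variance of the Laplacian: a common focus metric; higher means sharper.
    return float(np.var(laplace(img.astype(float))))

def autofocus(capture, move_axial, steps=np.linspace(-50e-6, 50e-6, 21)):
    # capture(): returns the current pCLE frame as a 2-D array.
    # move_axial(z): hypothetical robot command, axial probe offset z (m).
    best_z, best_s = 0.0, -np.inf
    for z in steps:
        move_axial(z)
        s = sharpness(capture())
        if s > best_s:
            best_z, best_s = z, s
    move_axial(best_z)  # settle at the sharpest position found
    return best_z
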
An Integrated Robotic System for MRI-Guided Neuroablation: Preclinical Evaluation.
Patel, N. A.; Nycz, C. J.; Carvalho, P. A.; Gandomi, K. Y.; Gondokaryono, R.; Li, G.; Heffter, T.; Burdette, E. C.; Pilitsis, J. G.; and Fischer, G. S.
IEEE Transactions on Biomedical Engineering, 67(10): 2990–2999. 2020.

Abstract: Objective: Treatment of brain tumors requires high precision in order to ensure sufficient treatment while minimizing damage to surrounding healthy tissue. Ablation of such tumors using needle-based therapeutic ultrasound (NBTU) under real-time magnetic resonance imaging (MRI) can fulfill this need. However, the constrained space and strong magnetic field in the MRI bore restrict patient access, limiting precise placement of the NBTU ablation tool. A surgical robot compatible with use inside the bore of an MRI scanner can alleviate these challenges. Methods: We present preclinical trials of a robotic system for NBTU ablation of brain tumors under real-time MRI guidance. The system comprises an updated robotic manipulator and corresponding control electronics, the NBTU ablation system, and applications for planning, navigation, and monitoring of the system. Results: The robotic system had a mean translational and rotational accuracy of 1.39 ± 0.64 mm and 1.27 ± 0.56° in gelatin phantoms, and 3.13 ± 1.41 mm and 5.58 ± 3.59° in 10 porcine trials, while causing a maximum reduction in signal-to-noise ratio (SNR) of 10.3%. Conclusion: The integrated robotic system can place the NBTU ablator at a desired target location in porcine brain and monitor the ablation in real time via magnetic resonance thermal imaging (MRTI). Significance: Further optimization of this system could result in a clinically viable system for use in human trials for various diagnostic or therapeutic neurosurgical interventions.

BibTeX:
@article{patel2020integrated,
  author = {Patel, Niravkumar A. and Nycz, Christopher J. and Carvalho, Paulo A. and Gandomi, Katie Y. and Gondokaryono, Radian and Li, Gang and Heffter, Tamas and Burdette, Everette Clif and Pilitsis, Julie G. and Fischer, Gregory S.},
  doi = {10.1109/TBME.2020.2974583},
  issn = {15582531},
  journal = {IEEE Transactions on Biomedical Engineering},
  keywords = {MRI,neurosurgery,robot,therapeutic ultrasound,tumor ablation},
  number = {10},
  pages = {2990--2999},
  pmid = {32078530},
  publisher = {IEEE},
  title = {{An Integrated Robotic System for MRI-Guided Neuroablation: Preclinical Evaluation}},
  volume = {67},
  year = {2020}
}

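Ablation monitoring via MRTI, as in the entry above, is conventionally summarized per voxel as cumulative equivalent minutes at 43 °C (CEM43, the Sapareto-Dewey model). The paper does not state its exact dose model, so the sketch below is a textbook formulation with assumed variable names.

import numpy as np

def cem43(temps_c, dt_s):
    # Cumulative equivalent minutes at 43 C for one voxel.
    # temps_c: 1-D array of MRTI temperature samples (deg C)
    # dt_s:    sampling interval in seconds
    # Sapareto-Dewey model: R = 0.5 at or above 43 C, 0.25 below.
    r = np.where(temps_c >= 43.0, 0.5, 0.25)
    return float(np.sum((dt_s / 60.0) * r ** (43.0 - temps_c)))

# Example: 30 samples of 50 C at 1 s spacing (0.5 min) yield
# 0.5 * 0.5**(43 - 50) = 64 equivalent minutes at 43 C.
# cem43(np.full(30, 50.0), 1.0)  ->  64.0
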
Automatic Light Pipe Actuating System for Bimanual Robot-Assisted Retinal Surgery.
He, C.; Yang, E.; Patel, N.; Ebrahimi, A.; Shahbazi, M.; Gehlbach, P.; and Iordachita, I.
IEEE/ASME Transactions on Mechatronics, 25(6): 2846–2857. 2020.

Abstract: Retinal surgery is a bimanual operation in which surgeons operate with an instrument in their dominant hand (more capable hand) and simultaneously hold a light pipe (illuminating pipe) with their nondominant hand (less capable hand) to provide illumination inside the eye. Manually holding and adjusting the light pipe places an additional burden on the surgeon and increases the overall complexity of the procedure. To overcome these challenges, a robot-assisted automatic light pipe actuating system is proposed. A customized light pipe with force-sensing capability is mounted at the end effector of a follower robot and is actuated through a hybrid force-velocity controller to automatically illuminate the target area on the retinal surface by pivoting about the scleral port (incision on the sclera). Static following accuracy evaluation and dynamic light tracking experiments are carried out. The results show that the proposed system can successfully illuminate the desired area with negligible offset (the average offset is 2.45 mm with standard deviation of 1.33 mm). The average scleral forces are also below a specified threshold (50 mN). The proposed system not only can allow for increased focus on dominant hand instrument control, but also could be extended to three-arm procedures (two surgical instruments held by surgeon plus a robot-holding light pipe) in retinal surgery, potentially improving surgical efficiency and outcome.

BibTeX:
@article{He2020,
  author = {He, Changyan and Yang, Emily and Patel, Niravkumar and Ebrahimi, Ali and Shahbazi, Mahya and Gehlbach, Peter and Iordachita, Iulian},
  doi = {10.1109/TMECH.2020.2996683},
  issn = {1941014X},
  journal = {IEEE/ASME Transactions on Mechatronics},
  keywords = {Bimanual control,hybrid velocity-force control,light pipe actuating,robot-assisted retinal surgery},
  number = {6},
  pages = {2846--2857},
  title = {{Automatic Light Pipe Actuating System for Bimanual Robot-Assisted Retinal Surgery}},
  volume = {25},
  year = {2020}
}

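A light pipe actuated about the scleral incision must keep its shaft on the line joining the pivot and the retinal target. The sketch below is a geometric illustration of one control tick under that constraint, gated by the 50 mN scleral force threshold quoted in the abstract; it is not the paper's hybrid force-velocity controller, and all names and gains are assumptions.

import numpy as np

def aim_through_pivot(pivot, target, tip, f_sclera, f_max=0.050, k=1.0):
    # pivot, target, tip: 3-D points (m); f_sclera: measured force (N).
    # Returns a tip velocity command (m/s), zeroed while the scleral
    # force exceeds the 50 mN threshold from the abstract.
    if np.linalg.norm(f_sclera) > f_max:
        return np.zeros(3)  # hold still; let the force loop recover
    axis = target - pivot
    axis = axis / np.linalg.norm(axis)      # desired shaft direction
    depth = np.dot(tip - pivot, axis)       # keep current insertion depth
    tip_des = pivot + depth * axis          # tip projected onto desired line
    return k * (tip_des - tip)              # proportional velocity command
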
Mapping and Controlling Scleral Force Interactions in Bimanual Robot-assisted Retinal Surgery.
Urias, M. G.; Patel, N.; Ebrahimi, A.; Roizenblatt, M.; Iordachita, I.; and Gehlbach, P. L.
Investigative Ophthalmology & Visual Science, 61(7): 3725. 2020.

BibTeX:
@article{urias2020mapping,
  author = {Urias, M{\"{u}}ller G and Patel, Niravkumar and Ebrahimi, Ali and Roizenblatt, Marina and Iordachita, Iulian and Gehlbach, Peter L},
  journal = {Investigative Ophthalmology & Visual Science},
  number = {7},
  pages = {3725},
  publisher = {The Association for Research in Vision and Ophthalmology},
  title = {{Mapping and Controlling Scleral Force Interactions in Bimanual Robot-assisted Retinal Surgery}},
  volume = {61},
  year = {2020}
}

Fully Actuated Body-Mounted Robotic System for MRI-Guided Lower Back Pain Injections: Initial Phantom and Cadaver Studies.
Li, G.; Patel, N. A.; Wang, Y.; Dumoulin, C.; Loew, W.; Loparo, O.; Schneider, K.; Sharma, K.; Cleary, K.; Fritz, J.; and Iordachita, I.
IEEE Robotics and Automation Letters, 5(4): 5245–5251. 2020.

Abstract: This letter reports the improved design, system integration, and initial experimental evaluation of a fully actuated body-mounted robotic system for real-time MRI-guided lower back pain injections. The 6-DOF robot is composed of a 4-DOF needle alignment module and a 2-DOF remotely actuated needle driver module, which together provide a fully actuated manipulator that can operate inside the scanner bore during imaging. The system minimizes the need to move the patient in and out of the scanner during a procedure, and thus may shorten the procedure time and streamline the clinical workflow. The robot is devised with a compact and lightweight structure that can be attached directly to the patient's lower back via straps. This approach minimizes the effect of patient motion by allowing the robot to move with the patient. The robot is integrated with an image-based surgical planning module. A dedicated clinical workflow is proposed for robot-assisted lower back pain injections under real-time MRI guidance. Targeting accuracy of the system was evaluated with a real-time MRI-guided phantom study, demonstrating the mean absolute errors (MAE) of the tip position to be 1.50 ± 0.68 mm and of the needle angle to be 1.56 ± 0.93°. An initial cadaver study was performed to validate the feasibility of the clinical workflow, indicating the maximum error of the position to be less than 1.90 mm and of the angle to be less than 3.14°.

BibTeX:
@article{li2020fully,
  author = {Li, Gang and Patel, Niravkumar A. and Wang, Yanzhou and Dumoulin, Charles and Loew, Wolfgang and Loparo, Olivia and Schneider, Katherine and Sharma, Karun and Cleary, Kevin and Fritz, Jan and Iordachita, Iulian},
  doi = {10.1109/LRA.2020.3007459},
  issn = {23773766},
  journal = {IEEE Robotics and Automation Letters},
  keywords = {MRI-guided robot,body-mounted robot,lower back pain injection,robot-assisted intervention},
  number = {4},
  pages = {5245--5251},
  publisher = {IEEE},
  title = {{Fully Actuated Body-Mounted Robotic System for MRI-Guided Lower Back Pain Injections: Initial Phantom and Cadaver Studies}},
  volume = {5},
  year = {2020}
}

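The MAE figures quoted above combine a translational and an angular error over repeated insertions. One plausible way to compute them from planned and measured needle poses (not necessarily the authors' evaluation code; array layouts and names are assumptions) is:

import numpy as np

def targeting_errors(p_plan, p_meas, v_plan, v_meas):
    # Mean absolute tip-position (mm) and needle-angle (deg) errors.
    # p_*: (n, 3) tip positions in mm; v_*: (n, 3) unit needle-axis vectors.
    pos_err = np.linalg.norm(p_meas - p_plan, axis=1)
    cosang = np.clip(np.sum(v_plan * v_meas, axis=1), -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cosang))
    return pos_err.mean(), ang_err.mean()
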
MRI-guided lumbar spinal injections with body-mounted robotic system: cadaver studies.
Li, G.; Patel, N. A.; Melzer, A.; Sharma, K.; Iordachita, I.; and Cleary, K.
Minimally Invasive Therapy and Allied Technologies, 1–9. 2020.

Abstract: Introduction: This paper reports the system integration and cadaveric assessment of a body-mounted robotic system for MRI-guided lumbar spine injections. The system is developed to enable MR-guided interventions in a closed-bore magnet and avoid problems due to patient movement during cannula guidance. Material and methods: The robot comprises a lightweight and compact structure so that it can be mounted directly onto the lower back of a patient using straps. Therefore, it can minimize the influence of patient movement by moving with the patient. The MR-conditional robot is integrated with an image-guided surgical planning workstation. A dedicated clinical workflow is created for the robot-assisted procedure to improve the conventional freehand MRI-guided procedure. Results: Cadaver studies were performed with both freehand and robot-assisted approaches to validate the feasibility of the clinical workflow and to assess the positioning accuracy of the robotic system. The experiment results demonstrate the root mean square (RMS) error of the target position to be 2.57 ± 1.09 mm and of the insertion angle to be 2.17 ± 0.89°. Conclusion: The robot-assisted approach is able to provide more accurate and reproducible cannula placements than the freehand procedure, as well as to reduce the number of insertion attempts.

BibTeX:
@article{li2020mri,
  author = {Li, Gang and Patel, Niravkumar A. and Melzer, Andreas and Sharma, Karun and Iordachita, Iulian and Cleary, Kevin},
  doi = {10.1080/13645706.2020.1799017},
  issn = {13652931},
  journal = {Minimally Invasive Therapy and Allied Technologies},
  keywords = {MR image-guided therapy,MRI-guided robot,body-mounted robot,lumbar spine injections},
  pages = {1--9},
  pmid = {32729771},
  publisher = {Taylor & Francis},
  title = {{MRI-guided lumbar spinal injections with body-mounted robotic system: cadaver studies}},
  year = {2020}
}

Robotic retinal surgery impacts on scleral forces: In vivo study.
Urias, M. G.; Patel, N.; Ebrahimi, A.; Iordachita, I.; and Gehlbach, P. L.
Translational Vision Science and Technology, 9(10): 1–9. 2020.

Abstract: Purpose: This study aims to map force interaction between instrument and sclera of in vivo rabbits during retinal procedures, and verify if a robotic active force control could prevent unwanted increase of forces on the sclera. Methods: Experiments consisted in the performance of intraocular movements of a force-sensing instrument, adjacent to the retinal surface, in radial directions, from the center to the periphery and back, and compared manual manipulations with robotic assistance and also robotic assistance with an active force control. This protocol was approved by the Animal Use and Ethical Committee and experiments were according to the ARVO Statement of Animal Use. Results: Mean forces using manual manipulations were 115 ± 51 mN. Using robotic assistance, mean forces were 118 ± 49 mN. Using an active force control method, overall mean forces reduced to 69 ± 15 mN, with a statistical difference compared with other methods (P < 0.001). Comparing intraocular directions, the superior sector required higher forces and the force control method reduced differences in forces between users and retained the same force pattern between them. Conclusions: Results validate that the introduction of robotic assistance might increase the dynamic interactions between instrument and sclera, and the addition of an active force control method reduces the forces to levels lower than manual manipulations. Translational Relevance: All marketing benefits from extreme accuracy and stability from robots; however, redundancy of safety mechanisms during intraocular manipulations, especially on force control and surgical awareness, would allow all utility of robotic assistance in ophthalmology.

BibTeX:
@article{urias2020robotic,
  author = {Urias, M{\"{u}}ller G. and Patel, Niravkumar and Ebrahimi, Ali and Iordachita, Iulian and Gehlbach, Peter L.},
  doi = {10.1167/tvst.9.10.2},
  issn = {21642591},
  journal = {Translational Vision Science and Technology},
  keywords = {Microsurgery,Retina,Robotic surgical procedures},
  number = {10},
  pages = {1--9},
  publisher = {The Association for Research in Vision and Ophthalmology},
  title = {{Robotic retinal surgery impacts on scleral forces: In vivo study}},
  volume = {9},
  year = {2020}
}

Body-Mounted Robotics for Interventional MRI Procedures.
Li, G.; Patel, N. A.; Sharma, K.; Monfaredi, R.; Dumoulin, C.; Fritz, J.; Iordachita, I.; and Cleary, K.
IEEE Transactions on Medical Robotics and Bionics, 2(4): 557–560. 2020.

Abstract: This article reports the development and initial cadaveric evaluation of a robotic framework for MRI-guided interventions using a body-mounted approach. The framework is developed based on modular design principles. The framework consists of a body-mounted needle placement manipulator, robot control software, robot controller, interventional planning workstation, and MRI scanner. The framework is modular in the sense that all components are connected independently, making it readily extensible and reconfigurable for supporting the clinical workflow of various interventional MRI procedures. Based on this framework we developed two body-mounted robots for musculoskeletal procedures. The first robot is a four-degree of freedom system called ArthroBot for shoulder arthrography in pediatric patients. The second robot is a six-degree of freedom system called PainBot for perineural injections used to treat pain in adult and pediatric patients. Body-mounted robots are designed with compact and lightweight structure so that they can be attached directly to the patient, which minimizes the effect of patient motion by allowing the robot to move with the patient. A dedicated clinical workflow is proposed for the MRI-guided musculoskeletal procedures using body-mounted robots. Initial cadaveric evaluations of both systems were performed to verify the feasibility of the systems and validate the clinical workflow.

BibTeX:
@article{li2020body,
  author = {Li, Gang and Patel, Niravkumar A. and Sharma, Karun and Monfaredi, Reza and Dumoulin, Charles and Fritz, Jan and Iordachita, Iulian and Cleary, Kevin},
  doi = {10.1109/tmrb.2020.3030532},
  journal = {IEEE Transactions on Medical Robotics and Bionics},
  number = {4},
  pages = {557--560},
  publisher = {IEEE},
  title = {{Body-Mounted Robotics for Interventional MRI Procedures}},
  volume = {2},
  year = {2020}
}

System Integration and Preliminary Clinical Evaluation of a Robotic System for MRI-Guided Transperineal Prostate Biopsy.
Patel, N. A.; Li, G.; Shang, W.; Wartenberg, M.; Heffter, T.; Burdette, E. C.; Iordachita, I.; Tokuda, J.; Hata, N.; Tempany, C. M.; and Fischer, G. S.
Journal of Medical Robotics Research, 04(02): 1950001. 2019.

Abstract: This paper presents the development, preclinical evaluation, and preliminary clinical study of a robotic system for targeted transperineal prostate biopsy under direct interventional magnetic resonance imaging (MRI) guidance. The clinically integrated robotic system is developed based on a modular design approach, comprised of the surgical navigation application, robot control software, MRI robot controller hardware, and robotic needle placement manipulator. The system provides enabling technologies for MRI-guided procedures. It can be easily transported and set up for supporting the clinical workflow of interventional procedures, and the system is readily extensible and reconfigurable to other clinical applications. Preclinical evaluation of the system is performed with phantom studies in a 3 Tesla MRI scanner, rehearsing the proposed clinical workflow, and demonstrating an in-plane targeting error of 1.5 mm. The robotic system has been approved by the institutional review board (IRB) for clinical trials. A preliminary clinical study is conducted with patient consent, demonstrating the targeting errors at two biopsy target sites to be 4.0 mm and 3.7 mm, which is sufficient to target clinically significant tumor foci. First-in-human trials to evaluate the system's effectiveness and accuracy for MR image-guided prostate biopsy are underway.

BibTeX:
@article{patel2018system,
  author = {Patel, Niravkumar A. and Li, Gang and Shang, Weijian and Wartenberg, Marek and Heffter, Tamas and Burdette, Everette C. and Iordachita, Iulian and Tokuda, Junichi and Hata, Nobuhiko and Tempany, Clare M. and Fischer, Gregory S.},
  doi = {10.1142/s2424905x19500016},
  issn = {2424-905X},
  journal = {Journal of Medical Robotics Research},
  number = {02},
  pages = {1950001},
  publisher = {World Scientific},
  title = {{System Integration and Preliminary Clinical Evaluation of a Robotic System for MRI-Guided Transperineal Prostate Biopsy}},
  url = {https://doi.org/10.1142/S2424905X19500016},
  volume = {04},
  year = {2019}
}

Robotic Assisted MRI-guided interventional interstitial MR-guided focused ultrasound ablation in a swine model.
MacDonell, J.; Patel, N.; Fischer, G.; Burdette, E. C.; Qian, J.; Chumbalkar, V.; Ghoshal, G.; Heffter, T.; Williams, E.; Gounis, M.; King, R.; Thibodeau, J.; Bogdanov, G.; Brooks, O. W.; Langan, E.; Hwang, R.; and Pilitsis, J. G.
Neurosurgery, 84(5): 1138–1147. 2019.

Abstract: Background: Ablative lesions are current treatments for epilepsy and brain tumors. Interstitial magnetic resonance (MR) guided focused ultrasound (iMRgFUS) may be an alternate ablation technique which limits thermal tissue charring as compared to laser therapy (LITT) and can produce larger ablation patterns nearer the surface than transcranial MR guided focused ultrasound (tcMRgFUS). Objective: To describe our experience with interstitial focused ultrasound (iFUS) ablations in swine, using MR-guided robotically assisted (MRgRA) delivery. Methods: In an initial 3 animals, we optimized the workflow of the robot in the MR suite and made modifications to the robotic arm to allow range of motion. Then, 6 farm pigs (4 acute, 2 survival) underwent 7 iMRgFUS ablations using MRgRA. We altered dosing to explore differences between thermal dosing in brain as compared to other tissues. Imaging was compared to gross examination. Results: Our work culminated in adjustments to the MRgRA, iMRgFUS probes, and dosing, culminating in 2 survival surgeries; swine had ablations with no neurological sequelae at 2 wk postprocedure. Immediately following iMRgFUS therapy, diffusion-weighted imaging and T1-weighted MR were accurate reflections of the ablation volume. T2 and fluid-attenuated inversion-recovery (FLAIR) images were accurate reflections of ablation volume 1 wk postprocedure. Conclusion: We successfully performed MRgRA iFUS ablation in swine and found intraoperative and postoperative imaging to correlate with histological examination. These data are useful to validate our system and to guide imaging follow-up for thermal ablation lesions in brain tissue from our therapy, tcMRgFUS, and LITT.

BibTeX:
@article{macdonell2018robotic,
  author = {MacDonell, Jacquelyn and Patel, Niravkumar and Fischer, Gregory and Burdette, E. Clif and Qian, Jiang and Chumbalkar, Vaibhav and Ghoshal, Goutam and Heffter, Tamas and Williams, Emery and Gounis, Matthew and King, Robert and Thibodeau, Juliette and Bogdanov, Gene and Brooks, Olivia W. and Langan, Erin and Hwang, Roy and Pilitsis, Julie G.},
  doi = {10.1093/neuros/nyy266},
  issn = {15244040},
  journal = {Neurosurgery},
  keywords = {Brain tumor,High intensity focused ultrasound,Interstitial focused ultrasound,MRI-Guided,Neural ablation,Robot assisted surgery},
  number = {5},
  pages = {1138--1147},
  pmid = {29905844},
  title = {{Robotic Assisted MRI-guided interventional interstitial MR-guided focused ultrasound ablation in a swine model}},
  url = {https://doi.org/10.1093/neuros/nyy266},
  volume = {84},
  year = {2019}
}

Preliminary study of an RNN-based active interventional robotic system (AIRS) in retinal microsurgery.
He, C.; Patel, N.; Ebrahimi, A.; Kobilarov, M.; and Iordachita, I.
International Journal of Computer Assisted Radiology and Surgery, 1–10. 2019.

Abstract: Purpose: Retinal microsurgery requires highly dexterous and precise maneuvering of instruments inserted into the eyeball through the sclerotomy port. During such procedures, the sclera can potentially be injured from extreme tool-to-sclera contact force caused by the surgeon's unintentional misoperations. Methods: We present an active interventional robotic system to prevent such iatrogenic accidents by enabling the robotic system to actively counteract the surgeon's possible unsafe operations in advance of their occurrence. Relying on a novel force sensing tool to measure and collect scleral forces, we construct a recurrent neural network with long short-term memory units to oversee the surgeon's operation and predict possible unsafe scleral forces up to the next 200 ms. We then apply a linear admittance control to actuate the robot to reduce the undesired scleral force. The system is implemented using an existing "steady hand" eye robot platform. The proposed method is evaluated on an artificial eye phantom by performing a "vessel following" mock retinal surgery operation. Results: Empirical validation over multiple trials indicates that the proposed active interventional robotic system could help to reduce the number of unsafe manipulation events. Conclusions: We develop an active interventional robotic system to actively prevent the surgeon's unsafe operations in retinal surgery. The result of the evaluation experiments shows that the proposed system can improve the surgeon's performance.

BibTeX:
@article{he2018NNControl,
  author = {He, Changyan and Patel, Niravkumar and Ebrahimi, Ali and Kobilarov, Marin and Iordachita, Iulian},
  doi = {10.1007/s11548-019-01947-9},
  issn = {18616429},
  journal = {International Journal of Computer Assisted Radiology and Surgery},
  keywords = {Interventional system,Medical robot,Recurrent neural network,Retinal surgery},
  pages = {1--10},
  pmid = {30887423},
  publisher = {Springer},
  title = {{Preliminary study of an RNN-based active interventional robotic system (AIRS) in retinal microsurgery}},
  url = {https://doi.org/10.1007/s11548-019-01947-9},
  year = {2019}
}

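The AIRS predictor above maps a short history of scleral force samples to a forecast up to 200 ms ahead. A minimal PyTorch sketch of such an LSTM regressor follows; the window length, horizon, layer sizes, and names are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class ForcePredictor(nn.Module):
    # Maps a window of past force samples to a short future horizon.
    def __init__(self, n_channels=2, hidden=64, horizon=20):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * n_channels)
        self.horizon, self.n_channels = horizon, n_channels

    def forward(self, x):            # x: (batch, window, n_channels)
        out, _ = self.lstm(x)
        last = out[:, -1, :]         # hidden state at the latest sample
        y = self.head(last)
        return y.view(-1, self.horizon, self.n_channels)

# Hypothetical use: at 100 Hz, a 0.5 s history predicts the next 200 ms
# (20 samples) of (Fx, Fy) scleral force.
model = ForcePredictor()
window = torch.randn(1, 50, 2)       # placeholder force history
pred = model(window)                 # (1, 20, 2) predicted forces
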
Robotic assistance affects manipulation skills in bimanual retinal surgery simulation: a tool-to-sclera force study.
He, C.; Roizenblatt, M.; Patel, N.; Ebrahimi, A.; Gehlbach, P. L.; and Iordachita, I.
Investigative Ophthalmology & Visual Science, 60(9): 5798. 2019.

BibTeX:
@article{he2019robotic,
  author = {He, Changyan and Roizenblatt, Marina and Patel, Niravkumar and Ebrahimi, Ali and Gehlbach, Peter L and Iordachita, Iulian},
  journal = {Investigative Ophthalmology & Visual Science},
  number = {9},
  pages = {5798},
  publisher = {The Association for Research in Vision and Ophthalmology},
  title = {{Robotic assistance affects manipulation skills in bimanual retinal surgery simulation: a tool-to-sclera force study}},
  volume = {60},
  year = {2019}
}

\n \n\n \n \n \n \n \n Artificial intelligence, robotics and eye surgery: Are we overfitted?.\n \n \n \n\n\n \n Urias, M. G.; Patel, N.; He, C.; Ebrahimi, A.; Kim, J. W.; Iordachita, I.; and Gehlbach, P. L.\n\n\n \n\n\n\n International Journal of Retina and Vitreous, 5(1): 1–4. 2019.\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{urias2019artificial,\nabstract = {Eye surgery, specifically retinal micro-surgery involves sensory and motor skill that approaches human boundaries and physiological limits for steadiness, accuracy, and the ability to detect the small forces involved. Despite assumptions as to the benefit of robots in surgery and also despite great development effort, numerous challenges to the full development and adoption of robotic assistance in surgical ophthalmology, remain. Historically, the first in-human-robot-Assisted retinal surgery occurred nearly 30 years after the first experimental papers on the subject. Similarly, artificial intelligence emerged decades ago and it is only now being more fully realized in ophthalmology. The delay between conception and application has in part been due to the necessary technological advances required to implement new processing strategies. Chief among these has been the better matched processing power of specialty graphics processing units for machine learning. Transcending the classic concept of robots performing repetitive tasks, artificial intelligence and machine learning are related concepts that has proven their abilities to design concepts and solve problems. The implication of such abilities being that future machines may further intrude on the domain of heretofore "human-reserved" tasks. Although the potential of artificial intelligence/machine learning is profound, present marketing promises and hype exceeds its stage of development, analogous to the seventieth century mathematical "boom" with algebra. Nevertheless robotic systems augmented by machine learning may eventually improve robot-Assisted retinal surgery and could potentially transform the discipline. This commentary analyzes advances in retinal robotic surgery, its current drawbacks and limitations, and the potential role of artificial intelligence in robotic retinal surgery.},\nauthor = {Urias, M{\\"{u}}ller G. and Patel, Niravkumar and He, Changyan and Ebrahimi, Ali and Kim, Ji Woong and Iordachita, Iulian and Gehlbach, Peter L.},\ndoi = {10.1186/s40942-019-0202-y},\nissn = {20569920},\njournal = {International Journal of Retina and Vitreous},\nkeywords = {Artificial intelligence,Ophthalmology,Retina,Robotic surgical procedures,Robotics},\nnumber = {1},\npages = {1--4},\npublisher = {BioMed Central},\ntitle = {{Artificial intelligence, robotics and eye surgery: Are we overfitted?}},\nvolume = {5},\nyear = {2019}\n}\n
Toward Safe Retinal Microsurgery: Development and Evaluation of an RNN-Based Active Interventional Control Framework. He, C.; Patel, N.; Shahbazi, M.; Yang, Y.; Gehlbach, P.; Kobilarov, M.; and Iordachita, I. IEEE Transactions on Biomedical Engineering, 67(4): 966–977. 2020.
@article{he2019toward,\nabstract = {Objective: Robotics-assisted retinal microsurgery provides several benefits including improvement of manipulation precision. The assistance provided to the surgeons by current robotic frameworks is, however, a 'passive' support, e.g., by damping hand tremor. Intelligent assistance and active guidance are, however, lacking in the existing robotic frameworks. In this paper, an active interventional control framework (AICF) has been presented to increase operation safety by actively intervening the operation to avoid exertion of excessive forces to the sclera. Methods: AICF consists of the following four components: first, the steady-hand eye robot as the robotic module; second, a sensorized tool to measure tool-to-sclera forces; third, a recurrent neural network to predict occurrence of undesired events based on a short history of time series of sensor measurements; and finally, a variable admittance controller to command the robot away from the undesired instances. Results: A set of user studies were conducted involving 14 participants (with four surgeons). The users were asked to perform a vessel-following task on an eyeball phantom with the assistance of AICF as well as other two benchmark approaches, i.e., auditory feedback (AF) and real-time force feedback (RF). Statistical analysis shows that AICF results in a significant reduction of proportion of undesired instances to about 2.5%, compared with 38.4% and 26.2% using AF and RF, respectively. Conclusion: AICF can effectively predict excessive-force instances and augment performance of the user to avoid undesired events during robot-assisted microsurgical tasks. Significance: The proposed system may be extended to other fields of microsurgery and may potentially reduce tissue injury.},\nauthor = {He, Changyan and Patel, Niravkumar and Shahbazi, Mahya and Yang, Yang and Gehlbach, Peter and Kobilarov, Marin and Iordachita, Iulian},\ndoi = {10.1109/TBME.2019.2926060},\nissn = {15582531},\njournal = {IEEE Transactions on Biomedical Engineering},\nkeywords = {Medical robotics,recurrent neural network,retinal surgery,safety in microsurgery},\nnumber = {4},\npages = {966--977},\npmid = {31265381},\npublisher = {IEEE},\ntitle = {{Toward Safe Retinal Microsurgery: Development and Evaluation of an RNN-Based Active Interventional Control Framework}},\nvolume = {67},\nyear = {2020}\n}\n
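The AICF entry above pairs an RNN event predictor with a variable admittance controller. Below is a minimal, hypothetical sketch of the variable-admittance idea only, under assumed gains and interfaces; it is not the authors' controller, and the names K_NOMINAL, P_TRIGGER and admittance_velocity are illustrative.

# Illustrative sketch (not the paper's implementation): scale the
# robot's admittance gain toward zero as a learned predictor's
# probability of an imminent excessive-force event rises.
import numpy as np

K_NOMINAL = 0.005   # m/s per N, assumed nominal admittance gain
P_TRIGGER = 0.5     # predicted probability above which we intervene

def admittance_velocity(handle_force_n: np.ndarray, p_unsafe: float) -> np.ndarray:
    """Map the user's handle force to a commanded Cartesian velocity.

    The gain decays smoothly as p_unsafe approaches 1, so the robot
    'stiffens' before the predicted unsafe instant.
    """
    scale = 1.0 if p_unsafe < P_TRIGGER else (1.0 - p_unsafe) / (1.0 - P_TRIGGER)
    return K_NOMINAL * max(scale, 0.0) * handle_force_n

# Example: the user pushes with 2 N in x while the predictor reports p = 0.8
v = admittance_velocity(np.array([2.0, 0.0, 0.0]), p_unsafe=0.8)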
Preclinical evaluation of an integrated robotic system for magnetic resonance imaging guided shoulder arthrography. Patel, N.; Yan, J.; Monfaredi, R.; Sharma, K.; Cleary, K.; and Iordachita, I. Journal of Medical Imaging, 6(2): 1. 2019.
@article{patel2019preclinical,\nabstract = {{\\textcopyright} 2019 Society of Photo-Optical Instrumentation Engineers (SPIE). Shoulder arthrography is a diagnostic procedure which involves injecting a contrast agent into the joint space for enhanced visualization of anatomical structures. Typically, a contrast agent is injected under fluoroscopy or computed tomography (CT) guidance, resulting in exposure to ionizing radiation, which should be avoided especially in pediatric patients. The patient then waits for the next available magnetic resonance imaging (MRI) slot for obtaining high-resolution anatomical images for diagnosis, which can result in long procedure times. Performing the contrast agent injection under MRI guidance could overcome both these issues. However, it comes with the challenges of the MRI environment including high magnetic field strength, limited ergonomic patient access, and lack of real-time needle guidance. We present the development of an integrated robotic system to perform shoulder arthrography procedures under intraoperative MRI guidance, eliminating fluoroscopy/CT guidance and patient transportation from the fluoroscopy/CT room to the MRI suite. The average accuracy of the robotic manipulator in benchtop experiments is 0.90 mm and 1.04 deg, whereas the average accuracy of the integrated system in MRI phantom experiments is 1.92 mm and 1.28 deg at the needle tip. Based on the American Society for Testing and Materials (ASTM) tests performed, the system is classified as MR conditional.},\nauthor = {Patel, Niravkumar and Yan, Jiawen and Monfaredi, Reza and Sharma, Karun and Cleary, Kevin and Iordachita, Iulian},\ndoi = {10.1117/1.jmi.6.2.025006},\nissn = {2329-4302},\njournal = {Journal of Medical Imaging},\nnumber = {02},\npages = {1},\npublisher = {International Society for Optics and Photonics},\ntitle = {{Preclinical evaluation of an integrated robotic system for magnetic resonance imaging guided shoulder arthrography}},\nvolume = {6},\nyear = {2019}\n}\n
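How are pose-accuracy figures such as "1.92 mm and 1.28 deg at the needle tip" typically derived? The hypothetical sketch below computes translational error as the distance between planned and measured tip positions, and angular error as the angle between the two needle axes; the function and sample inputs are illustrative, not taken from the paper.

import numpy as np

def tip_errors(p_plan, p_meas, axis_plan, axis_meas):
    """Translational (mm) and angular (deg) error between two needle poses."""
    trans_err_mm = np.linalg.norm(np.asarray(p_meas) - np.asarray(p_plan))
    a = np.asarray(axis_plan, dtype=float); a /= np.linalg.norm(a)
    b = np.asarray(axis_meas, dtype=float); b /= np.linalg.norm(b)
    ang_err_deg = np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
    return trans_err_mm, ang_err_deg

print(tip_errors([0, 0, 0], [1.5, 1.0, 0.5], [0, 0, 1], [0.02, 0.0, 1.0]))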
Magnetic resonance-guided interstitial high-intensity focused ultrasound for brain tumor ablation. MacDonell, J.; Patel, N.; Rubino, S.; Ghoshal, G.; Fischer, G.; Clif Burdette, E.; Hwang, R.; and Pilitsis, J. G. Neurosurgical Focus, 44(2): E11. 2018.
@article{macdonell2018magnetic,\nabstract = {Currently, treatment of brain tumors is limited to resection, chemotherapy, and radiotherapy. Thermal ablation has been recently explored. High-intensity focused ultrasound (HIFU) is being explored as an alternative. Specifically, the authors propose delivering HIFU internally to the tumor with an MRI-guided robotic assistant (MRgRA). The advantage of the authors' interstitial device over external MRI-guided HIFU (MRgHIFU) is that it allows for conformal, precise ablation and concurrent tissue sampling. The authors describe their workflow for MRgRA HIFU delivery.},\nauthor = {MacDonell, Jacquelyn and Patel, Niravkumar and Rubino, Sebastian and Ghoshal, Goutam and Fischer, Gregory and {Clif Burdette}, E. and Hwang, Roy and Pilitsis, Julie G.},\ndoi = {10.3171/2017.11.FOCUS17613},\nissn = {10920684},\njournal = {Neurosurgical Focus},\nkeywords = {Brain tumor,High-intensity focused ultrasound,MRI guided,Neural ablation},\nnumber = {2},\npages = {E11},\npmid = {29385926},\npublisher = {American Association of Neurological Surgeons},\ntitle = {{Magnetic resonance-guided interstitial high-intensity focused ultrasound for brain tumor ablation}},\nurl = {https://doi.org/10.3171/2017.11.FOCUS17613},\nvolume = {44},\nyear = {2018}\n}\n
Closed-Loop Active Compensation for Needle Deflection and Target Shift During Cooperatively Controlled Robotic Needle Insertion. Wartenberg, M.; Schornak, J.; Gandomi, K.; Carvalho, P.; Nycz, C.; Patel, N.; Iordachita, I.; Tempany, C.; Hata, N.; Tokuda, J.; and Fischer, G. S. Annals of Biomedical Engineering, 46(10): 1582–1594. 2018.
@article{wartenberg2018closed,\nabstract = {Intra-operative imaging is sometimes available to assist needle biopsy, but typical open-loop insertion does not account for unmodeled needle deflection or target shift. Closed-loop image-guided compensation for deviation from an initial straight-line trajectory through rotational control of an asymmetric tip can reduce targeting error. Incorporating robotic closed-loop control often reduces physician interaction with the patient, but by pairing closed-loop trajectory compensation with hands-on cooperatively controlled insertion, a physician's control of the procedure can be maintained while incorporating benefits of robotic accuracy. A series of needle insertions were performed with a typical 18G needle using closed-loop active compensation under both fully autonomous and user-directed cooperative control. We demonstrated equivalent improvement in accuracy while maintaining physician-in-the-loop control with no statistically significant difference (p > 0.05) in the targeting accuracy between any pair of autonomous or individual cooperative sets, with average targeting accuracy of 3.56 mmrms. With cooperatively controlled insertions and target shift between 1 and 10 mm introduced upon needle contact, the system was able to effectively compensate up to the point where error approached a maximum curvature governed by bending mechanics. These results show closed-loop active compensation can enhance targeting accuracy, and that the improvement can be maintained under user directed cooperative insertion.},\nauthor = {Wartenberg, Marek and Schornak, Joseph and Gandomi, Katie and Carvalho, Paulo and Nycz, Chris and Patel, Niravkumar and Iordachita, Iulian and Tempany, Clare and Hata, Nobuhiko and Tokuda, Junichi and Fischer, Gregory S.},\ndoi = {10.1007/s10439-018-2070-2},\nissn = {15739686},\njournal = {Annals of Biomedical Engineering},\nkeywords = {Image-guided therapy,Medical robotics,Needle steering,Teleoperation},\nnumber = {10},\npages = {1582--1594},\npmid = {29926303},\npublisher = {Springer},\ntitle = {{Closed-Loop Active Compensation for Needle Deflection and Target Shift During Cooperatively Controlled Robotic Needle Insertion}},\nurl = {https://doi.org/10.1007/s10439-018-2070-2},\nvolume = {46},\nyear = {2018}\n}\n
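The closed-loop compensation above exploits the fact that an asymmetric (bevel) tip makes a needle travel along a roughly constant-curvature arc whose side is selected by rotating the shaft. A toy planar sketch of that mechanism, with an assumed curvature and a simple bang-bang rotation rule rather than the paper's controller:

import math

KAPPA = 1.0 / 200.0   # assumed arc curvature, 1/mm

def advance(y_mm, heading_rad, bevel_sign, step_mm=1.0):
    """Advance the tip one step; bevel_sign in {+1, -1} picks the arc side."""
    heading_rad += bevel_sign * KAPPA * step_mm
    y_mm += math.sin(heading_rad) * step_mm
    return y_mm, heading_rad

y, heading = 0.0, 0.0
for _ in range(120):                    # 120 mm insertion
    bevel = -1 if y > 0 else +1         # rotate 180 deg whenever we drift off the line
    y, heading = advance(y, heading, bevel)
print(f"lateral deviation after 120 mm: {y:.3f} mm")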
Quantitative Evaluation of Tool-to-Sclera Forces, in a Model of Retinal Microsurgery. Roizenblatt, M.; Ebrahimi, A.; He, C.; Patel, N.; Iordachita, I.; and Gehlbach, P. L. Investigative Ophthalmology & Visual Science, 59(9): 5926. 2018.
@article{roizenblatt2018quantitative,\nauthor = {Roizenblatt, Marina and Ebrahimi, Ali and He, Changyan and Patel, Niravkumar and Iordachita, Iulian and Gehlbach, Peter L},\njournal = {Investigative Ophthalmology & Visual Science},\nnumber = {9},\npages = {5926},\npublisher = {The Association for Research in Vision and Ophthalmology},\ntitle = {{Quantitative Evaluation of Tool-to-Sclera Forces, in a Model of Retinal Microsurgery}},\nvolume = {59},\nyear = {2018}\n}\n
Evaluation of robot-assisted MRI-guided prostate biopsy: Needle path analysis during clinical trials. Moreira, P.; Patel, N.; Wartenberg, M.; Li, G.; Tuncali, K.; Heffter, T.; Burdette, E. C.; Iordachita, I.; Fischer, G. S.; Hata, N.; Tempany, C. M.; and Tokuda, J. Physics in Medicine and Biology, 63(20). 2018.
@article{moreira2018evaluation,\nabstract = {While the interaction between a needle and the surrounding tissue is known to cause a significant targeting error in prostate biopsy leading to false-negative results, few studies have demonstrated how it impacts in the actual procedure. We performed a pilot study on robot-assisted MRI-guided prostate biopsy with an emphasis on the in-depth analysis of the needle-tissue interaction in vivo. The data were acquired during in-bore transperineal prostate biopsies in patients using a 4 degrees-of-freedom (DoF) MRI-compatible robot. The anatomical structures in the pelvic area and the needle path were reconstructed from MR images, and quantitatively analyzed. We analyzed each structure individually and also proposed a mathematical model to investigate the influence of those structures in the targeting error using the mixed-model regression. The median targeting error in 188 insertions (27 patients) was 6.3 mm. Both the individual anatomical structure analysis and the mixed-model analysis showed that the deviation resulted from the contact between the needle and the skin as the main source of error. On contrary, needle bending inside the tissue (expressed as needle curvature) did not vary among insertions with targeting errors above and below the average. The analysis indicated that insertions crossing the bulbospongiosus presented a targeting error lower than the average. The mixed-model analysis demonstrated that the distance between the needle guide and the patient skin, the deviation at the entry point, and the path length inside the pelvic diaphragm had a statistically significant contribution to the targeting error (p < 0.05). Our results indicate that the errors associated with the elastic contact between the needle and the skin were more prominent than the needle bending along the insertion. Our findings will help to improve the preoperative planning of transperineal prostate biopsies.},\nauthor = {Moreira, Pedro and Patel, Niravkumar and Wartenberg, Marek and Li, Gang and Tuncali, Kemal and Heffter, Tamas and Burdette, Everette C. and Iordachita, Iulian and Fischer, Gregory S. and Hata, Nobuhiko and Tempany, Clare M. and Tokuda, Junichi},\ndoi = {10.1088/1361-6560/aae214},\nissn = {13616560},\njournal = {Physics in Medicine and Biology},\nkeywords = {in-bore prostate biopsy,needle deflection,needle path analysis,robot-assisted biopsy},\nnumber = {20},\npmid = {30226214},\npublisher = {IOP Publishing},\ntitle = {{Evaluation of robot-assisted MRI-guided prostate biopsy: Needle path analysis during clinical trials}},\nvolume = {63},\nyear = {2018}\n}\n
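The mixed-model regression described in the abstract can be reproduced in outline with statsmodels: a random intercept per patient plus fixed effects for the geometric factors. The data below are synthetic stand-ins, and the column names merely paraphrase the abstract's covariates.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 188                                              # insertions, as in the study
df = pd.DataFrame({
    "patient_id": rng.integers(0, 27, n),            # 27 patients
    "guide_to_skin_mm": rng.uniform(5, 40, n),
    "entry_deviation_mm": rng.uniform(0, 5, n),
    "pelvic_diaphragm_path_mm": rng.uniform(0, 30, n),
})
df["targeting_error_mm"] = (                         # fabricated response
    0.05 * df.guide_to_skin_mm + 0.8 * df.entry_deviation_mm
    + 0.04 * df.pelvic_diaphragm_path_mm + rng.normal(0, 1, n)
)

model = smf.mixedlm(
    "targeting_error_mm ~ guide_to_skin_mm + entry_deviation_mm + pelvic_diaphragm_path_mm",
    data=df,
    groups=df["patient_id"],                         # random intercept per patient
)
print(model.fit().summary())                         # terms with p < 0.05 are significant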
RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication. He, C.; Ebrahimi, A.; Roizenblatt, M.; Patel, N.; Yang, Y.; Gehlbach, P. L.; and Iordachita, I. 2018.
@article{he20182018,\nauthor = {He, C and Ebrahimi, A and Roizenblatt, M and Patel, N and Yang, Y and Gehlbach, P L and Iordachita, I},\nisbn = {9781538679807},\njournal = {RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication},\ntitle = {{RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication}},\nyear = {2018}\n}\n
An MRI-Guided Telesurgery System Using a Fabry-Perot Interferometry Force Sensor and a Pneumatic Haptic Device. Su, H.; Shang, W.; Li, G.; Patel, N.; and Fischer, G. S. Annals of Biomedical Engineering, 45(8): 1917–1928. 2017.
@article{su2017mri,\nabstract = {This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The slave robot consists of a piezoelectrically actuated 6-degree-of-freedom (DOF) robot for needle placement with an integrated fiber optic force sensor (1-DOF axial force measurement) using the Fabry-Perot interferometry (FPI) sensing principle; it is configured to operate inside the bore of the MRI scanner during imaging. By leveraging the advantages of pneumatic and piezoelectric actuation in force and position control respectively, we have designed a pneumatically actuated master robot (haptic device) with strain gauge based force sensing that is configured to operate the slave from within the scanner room during imaging. The slave robot follows the insertion motion of the haptic device while the haptic device displays the needle insertion force as measured by the FPI sensor. Image interference evaluation demonstrates that the telesurgery system presents a signal to noise ratio reduction of less than 17% and less than 1% geometric distortion during simultaneous robot motion and imaging. Teleoperated needle insertion and rotation experiments were performed to reach 10 targets in a soft tissue-mimicking phantom with 0.70 ± 0.35 mm Cartesian space error.},\nauthor = {Su, Hao and Shang, Weijian and Li, Gang and Patel, Niravkumar and Fischer, Gregory S.},\ndoi = {10.1007/s10439-017-1839-z},\nissn = {15739686},\njournal = {Annals of Biomedical Engineering},\nkeywords = {Haptics,Image-guided surgery,MR-conditional,MRI-compatible robot,Percutaneous interventions,Teleoperation},\nnumber = {8},\npages = {1917--1928},\npmid = {28447178},\npublisher = {Springer},\ntitle = {{An MRI-Guided Telesurgery System Using a Fabry-Perot Interferometry Force Sensor and a Pneumatic Haptic Device}},\nurl = {https://doi.org/10.1007/s10439-017-1839-z},\nvolume = {45},\nyear = {2017}\n}\n
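At its core, the architecture above forwards the master's insertion motion to the slave while reflecting the FPI-measured needle force back to the pneumatic haptic device. A schematic single update with assumed scaling and gain parameters, not the authors' control law:

def teleop_step(master_pos_mm, fpi_force_n, motion_scale=1.0, force_gain=1.0):
    """One position-forward / force-back update of a bilateral loop."""
    slave_cmd_mm = motion_scale * master_pos_mm   # slave tracks master insertion
    haptic_force_n = force_gain * fpi_force_n     # master displays measured force
    return slave_cmd_mm, haptic_force_n

# Example: master at 12.5 mm insertion, FPI sensor reading 0.8 N
print(teleop_step(12.5, 0.8))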
ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment. Frank, T.; Krieger, A.; Leonard, S.; Patel, N. A.; and Tokuda, J. International Journal of Computer Assisted Radiology and Surgery, 12(8): 1451–1460. 2017.
@article{frank2017ros,\nabstract = {Purpose: With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in the research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as robot operating system (ROS) in robotics and 3D Slicer in medical image computing could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. Methods: A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Results: Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D slicer, and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. Conclusion: The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by the medical image computing software such as 3D Slicer into ROS-based surgical robot systems.},\nauthor = {Frank, Tobias and Krieger, Axel and Leonard, Simon and Patel, Niravkumar A. and Tokuda, Junichi},\ndoi = {10.1007/s11548-017-1618-1},\nissn = {18616429},\njournal = {International Journal of Computer Assisted Radiology and Surgery},\nkeywords = {Image-guided therapy,Interface,OpenIGTLink,ROS,Surgical robot},\nnumber = {8},\npages = {1451--1460},\npmid = {28567563},\npublisher = {Springer},\ntitle = {{ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment}},\nurl = {https://doi.org/10.1007/s11548-017-1618-1},\nvolume = {12},\nyear = {2017}\n}\n
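The bridge speaks the OpenIGTLink protocol, which frames every message with a fixed 58-byte big-endian header (version, type name, device name, timestamp, body size, CRC). Below is a minimal sketch of parsing that header from a TCP stream; it illustrates the wire format and is not the ROS-IGTL-Bridge source.

import socket
import struct

HEADER_FMT = ">H12s20sQQQ"                 # version, type, device, timestamp, body size, CRC
HEADER_LEN = struct.calcsize(HEADER_FMT)   # 58 bytes

def read_igtl_header(sock: socket.socket) -> dict:
    raw = b""
    while len(raw) < HEADER_LEN:           # TCP may deliver the header in pieces
        chunk = sock.recv(HEADER_LEN - len(raw))
        if not chunk:
            raise ConnectionError("socket closed mid-header")
        raw += chunk
    version, msg_type, device, _ts, body_size, _crc = struct.unpack(HEADER_FMT, raw)
    return {
        "version": version,
        "type": msg_type.rstrip(b"\x00").decode(),     # e.g. "TRANSFORM"
        "device": device.rstrip(b"\x00").decode(),
        "body_size": body_size,
    }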
Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance. Patel, N. A. Worcester Polytechnic Institute. 2017.
@article{patel2017towards,\nauthor = {Patel, N A},\npublisher = {Worcester Polytechnic Institute},\ntitle = {{Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance}},\nurl = {https://digitalcommons.wpi.edu/etd-dissertations/130/},\nyear = {2017}\n}\n
In-bore prostate transperineal interventions with an MRI-guided parallel manipulator: system development and preliminary evaluation. Eslami, S.; Shang, W.; Li, G.; Patel, N.; Fischer, G. S.; Tokuda, J.; Hata, N.; Tempany, C. M.; and Iordachita, I. International Journal of Medical Robotics and Computer Assisted Surgery, 12(2): 199–213. 2016.
@article{eslami2016bore,\nabstract = {Background: Robot-assisted minimally-invasive surgery is well recognized as a feasible solution for diagnosis and treatment of prostate cancer in humans. Methods: This paper discusses the kinematics of a parallel 4 Degrees-of-Freedom (DOF) surgical manipulator designed for minimally invasive in-bore prostate percutaneous interventions through the patient's perineum. The proposed manipulator takes advantage of four sliders actuated by MRI-compatible piezoelectric motors and incremental rotary encoders. Errors, mostly originating from the design and manufacturing process, need to be identified and reduced before the robot is deployed in clinical trials. Results: The manipulator has undergone several experiments to evaluate the repeatability and accuracy (about 1 mm in air (in x or y direction) at the needle's reference point) of needle placement, which is an essential concern in percutaneous prostate interventions. Conclusion: The acquired results endorse the sustainability, precision and reliability of the manipulator. Copyright {\\textcopyright} 2015 John Wiley & Sons, Ltd.},\nauthor = {Eslami, Sohrab and Shang, Weijian and Li, Gang and Patel, Nirav and Fischer, Gregory S. and Tokuda, Junichi and Hata, Nobuhiko and Tempany, Clare M. and Iordachita, Iulian},\ndoi = {10.1002/rcs.1671},\nissn = {1478596X},\njournal = {International Journal of Medical Robotics and Computer Assisted Surgery},\nkeywords = {MRI compatible,biopsy,calibration assessment,parallel manipulator,prostate transperineal intervention},\nnumber = {2},\npages = {199--213},\npmid = {26111458},\npublisher = {Wiley Online Library},\ntitle = {{In-bore prostate transperineal interventions with an MRI-guided parallel manipulator: system development and preliminary evaluation}},\nurl = {https://doi.org/10.1002/rcs.1671},\nvolume = {12},\nyear = {2016}\n}\n
inproceedings (33)
In-Bore Experimental Validation of Active Compensation and Membrane Puncture Detection for Targeted MRI-Guided Robotic Prostate Biopsy. Wartenberg, M.; Gandomi, K.; Carvalho, P.; Schornak, J.; Patel, N.; Iordachita, I.; Tempany, C.; Hata, N.; Tokuda, J.; and Fischer, G. S. In Proceedings of the 2018 International Symposium on Experimental Robotics, volume 11, pages 34–44, 2020. Springer Nature.
@inproceedings{patel2020bore,\nabstract = {It is estimated that in the United States there will be 164,690 new cases and 29,430 deaths from prostate cancer in 2018 [1]. Trans-Rectal Ultrasound (TRUS) has typically been used to facilitate sampling of up to twenty biopsy cores, but due to variable prostate size this technique often still misses clinically significant cancers [2]. Instead, MRI provides higher image quality and multiparametric imaging, allowing for procedures with fewer needle insertions via direct targeting of suspicious lesions.},\nauthor = {Wartenberg, Marek and Gandomi, Katie and Carvalho, Paulo and Schornak, Joseph and Patel, Niravkumar and Iordachita, Iulian and Tempany, Clare and Hata, Nobuhiko and Tokuda, Junichi and Fischer, Gregory S.},\nbooktitle = {Proceedings of the 2018 International Symposium on Experimental Robotics},\ndoi = {10.1007/978-3-030-33950-0_4},\nissn = {25111264},\norganization = {Springer Nature},\npages = {34--44},\ntitle = {{In-Bore Experimental Validation of Active Compensation and Membrane Puncture Detection for Targeted MRI-Guided Robotic Prostate Biopsy}},\nvolume = {11},\nyear = {2020}\n}\n
Spotlight-based 3D Instrument Guidance for Retinal Surgery. Zhou, M.; Wu, J.; Ebrahimi, A.; Patel, N.; He, C.; Gehlbach, P.; Taylor, R. H.; Knoll, A.; Nasseri, M. A.; and Iordachita, I. In 2020 International Symposium on Medical Robotics, ISMR 2020, pages 69–75, 2020.
@inproceedings{zhou2020spotlight,\nabstract = {Retinal surgery is a complex activity that can be challenging for a surgeon to perform effectively and safely. Image guided robot-Assisted surgery is one of the promising solutions that bring significant surgical enhancement in treatment outcome and reduce the physical limitations of human surgeons. In this paper, we demonstrate a novel method for 3D guidance of the instrument based on the projection of spotlight in the single microscope images. The spotlight projection mechanism is firstly analyzed and modeled with a projection on both a plane and a sphere surface. To test the feasibility of the proposed method, a light fiber is integrated into the instrument which is driven by the Steady-Hand Eye Robot (SHER). The spot of light is segmented and tracked on a phantom retina using the proposed algorithm. The static calibration and dynamic test results both show that the proposed method can easily archive 0.5 mm of tip-To-surface distance which is within the clinically acceptable accuracy for intraocular visual guidance.},\nauthor = {Zhou, Mingchuan and Wu, Jiahao and Ebrahimi, Ali and Patel, Niravkumar and He, Changyan and Gehlbach, Peter and Taylor, Russell H. and Knoll, Alois and Nasseri, M. Ali and Iordachita, Iulian},\nbooktitle = {2020 International Symposium on Medical Robotics, ISMR 2020},\ndoi = {10.1109/ISMR48331.2020.9312952},\nisbn = {9781728154886},\npages = {69--75},\ntitle = {{Spotlight-based 3D Instrument Guidance for Retinal Surgery}},\nyear = {2020}\n}\n
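An idealized version of the geometry behind spotlight-based distance estimation: a fiber projecting a light cone of half-angle alpha produces a spot whose radius grows linearly with tip-to-surface distance on a surface perpendicular to the tool axis. A simplified model for intuition, not the paper's full plane and sphere derivation:

import math

def tip_to_surface_mm(spot_radius_mm: float, cone_half_angle_deg: float) -> float:
    """Distance d such that spot_radius = d * tan(alpha), assumed cone geometry."""
    return spot_radius_mm / math.tan(math.radians(cone_half_angle_deg))

# Example with assumed numbers: a 0.35 mm spot from a 10 deg cone -> ~2 mm
print(tip_to_surface_mm(spot_radius_mm=0.35, cone_half_angle_deg=10.0))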
Real Time Prediction of Sclera Force with LSTM Neural Networks in Robot-Assisted Retinal Surgery. He, C. Y.; Patel, N.; Kobilarov, M.; and Iordachita, I. In Applied Mechanics and Materials, volume 896, pages 183–194, 2020. Trans Tech Publications Ltd.
@inproceedings{he2020real,\nabstract = {Retinal microsurgery is one of the most technically demanding surgeries, during which the surgical tool needs to be inserted into the eyeball and is constantly constrained by the sclerotomy port. During the surgery, any unexpected manipulation could cause extreme tool-sclera contact force leading to sclera damage. Although, a robot assistant could reduce hand tremor and improve the tool positioning accuracy, it cannot prevent or alarm the surgeon about the upcoming danger caused by surgeon's misoperations, i.e., applying excessive force on the sclera. In this paper, we present a new method based on a Long Short Term Memory recurrent neural network for predicting the user behavior, i.e., the contact force between the tool and sclera (sclera force) and the insertion depth of the tool from sclera contact point (insertion depth) in real time (40Hz). The predicted force information is provided to the user through auditory feedback to alarm any unexpected sclera force. The user behavior data is collected in a mock retinal surgical operation on a dry eye phantom with Steady Hand Eye Robot and a novel multi-function sensing tool. The Long Short Term Memory recurrent neural network is trained on the collected time series of sclera force and insertion depth. The network can predict the sclera force and insertion depth 100 milliseconds in the future with 95.29% and 96.57% accuracy, respectively, and can help reduce the fraction of unsafe sclera forces from 40.19% to 15.43%.},\nauthor = {He, Chang Yan and Patel, Niravkumar and Kobilarov, Marin and Iordachita, Iulian},\nbooktitle = {Applied Mechanics and Materials},\ndoi = {10.4028/www.scientific.net/amm.896.183},\norganization = {Trans Tech Publications Ltd},\npages = {183--194},\ntitle = {{Real Time Prediction of Sclera Force with LSTM Neural Networks in Robot-Assisted Retinal Surgery}},\nvolume = {896},\nyear = {2020}\n}\n
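A hedged PyTorch sketch of the forecasting task described above: an LSTM consumes a short window of (sclera force, insertion depth) samples at 40 Hz and regresses the values expected 100 ms, i.e. four samples, ahead. Layer sizes and window length are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class ScleraForecaster(nn.Module):
    def __init__(self, n_features: int = 2, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)   # predicts (force, depth)

    def forward(self, x):                  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # forecast from the last hidden state

model = ScleraForecaster()
window = torch.randn(8, 40, 2)             # eight 1-second histories at 40 Hz
pred = model(window)                       # (8, 2): force and depth 100 ms ahead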
Scleral Force Evaluation during Vitreoretinal Surgery: In an in Vivo Rabbit Eye Model. Patel, N.; Urias, M.; Ebrahimi, A.; Gehlbach, P.; and Iordachita, I. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, volume 2020-July, pages 6049–6053, 2020. IEEE.
@inproceedings{patel2020scleral,\nabstract = {During vitreoretinal surgery, the surgeon is required to precisely manipulate multiple tools in a confined intraocular environment, while the tool tip to retina contact forces are at the limit of human sensation limits. During typical vitrectomy procedures, the surgeon inserts various tools through small incisions performed on the sclera of the eye (sclerotomies), and manipulates them to perform surgical tasks. During intraocular procedures, tool-tissue interactions occur at the sclerotomy ports and at the tool-tip when it contacts retina. Measuring such interactions may be valuable for providing force feedback necessary for robotic guidance. In this paper, we measure and analyze force measurements at the sclerotomy ports. To the best of our knowledge, this is the first time that the scleral forces are measured in an in vivo eye model. A force sensing instrument utilizing Fiber Bragg Grating (FBG) strain sensors was used to measure the scleral forces while two retinal surgeons performed intraocular tool manipulation (ITM) task in rabbit eyes as well as a dry phantom. The mean of the measured sclera forces were 129.11 mN and 80.45 mN in in vivo and dry phantom experiments, respectively.},\nauthor = {Patel, Niravkumar and Urias, Muller and Ebrahimi, Ali and Gehlbach, Peter and Iordachita, Iulian},\nbooktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},\ndoi = {10.1109/EMBC44109.2020.9176402},\nisbn = {9781728119908},\nissn = {1557170X},\norganization = {IEEE},\npages = {6049--6053},\ntitle = {{Scleral Force Evaluation during Vitreoretinal Surgery: In an in Vivo Rabbit Eye Model}},\nvolume = {2020-July},\nyear = {2020}\n}\n
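FBG force-sensing tools like the one above are commonly calibrated with a linear least-squares fit from Bragg wavelength shifts to forces. The sketch below shows that standard procedure on synthetic data; the shapes and magnitudes are illustrative, not the instrument's actual calibration.

import numpy as np

# 200 calibration samples: three FBG wavelength shifts (nm) against
# ground-truth transverse forces (N) from a reference sensor. Synthetic.
d_lambda = np.random.randn(200, 3) * 1e-2
f_known = np.random.randn(200, 2) * 50e-3

C, *_ = np.linalg.lstsq(d_lambda, f_known, rcond=None)   # (3, 2) calibration matrix

def sclera_force_n(shift_nm: np.ndarray) -> np.ndarray:
    """Estimated (Fx, Fy) in newtons from a wavelength-shift sample."""
    return shift_nm @ C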
A Comparison of Manual and Robot Assisted Retinal Vein Cannulation in Chicken Chorioallantoic Membrane. Patel, N.; Urias, M.; He, C.; Gehlbach, P. L.; and Iordachita, I. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, volume 2020-July, pages 5101–5105, 2020. IEEE.
@inproceedings{patel2020comparison,\nabstract = {Retinal vein occlusion (RVO) is a vision threatening condition occurring in the central or the branch retinal veins. Risk factors include but are not limited to hypercoagulability, thrombus or other cause of low blood flow. Current clinically proven treatment options limit complications of vein occlusion without treating the causative occlusion. In recent years, a more direct approach called Retinal Vein Cannulation (RVC) has been explored both in animal and human eye models. Though RVC has demonstrated potential efficacy, it remains a challenging and risky procedure that demands precise needle manipulation to achieve safely. During RVC, a thin cannula (diameter 70-110 $\\mu$m) is delicately inserted into a retinal vein. Its intraluminal position is maintained for up to 2 minutes while infusion of a therapeutic drug occurs. Because the tool-tissue interaction forces at the needle tip are well below human tactile perception, a robotic assistant combined with a force sensing microneedle could alleviate the challenges of RVC. In this paper we present a comparative study of manual and robot assisted retinal vein cannulation in chicken chorioallantoic membrane (CAM) using a force sensing microneedle tool. The results indicate that the average puncture force and average force during the infusion period are larger in manual mode than in robot assisted mode. Moreover, retinal vein cannulation was more stable during infusion, in robot assisted mode.},\nauthor = {Patel, Niravkumar and Urias, Muller and He, Changyan and Gehlbach, Peter L. and Iordachita, Iulian},\nbooktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},\ndoi = {10.1109/EMBC44109.2020.9176853},\nisbn = {9781728119908},\nissn = {1557170X},\norganization = {IEEE},\npages = {5101--5105},\ntitle = {{A Comparison of Manual and Robot Assisted Retinal Vein Cannulation in Chicken Chorioallantoic Membrane}},\nvolume = {2020-July},\nyear = {2020}\n}\n
\n
\n\n\n
\n Retinal vein occlusion (RVO) is a vision threatening condition occurring in the central or the branch retinal veins. Risk factors include but are not limited to hypercoagulability, thrombus or other cause of low blood flow. Current clinically proven treatment options limit complications of vein occlusion without treating the causative occlusion. In recent years, a more direct approach called Retinal Vein Cannulation (RVC) has been explored both in animal and human eye models. Though RVC has demonstrated potential efficacy, it remains a challenging and risky procedure that demands precise needle manipulation to achieve safely. During RVC, a thin cannula (diameter 70-110 $μ$m) is delicately inserted into a retinal vein. Its intraluminal position is maintained for up to 2 minutes while infusion of a therapeutic drug occurs. Because the tool-tissue interaction forces at the needle tip are well below human tactile perception, a robotic assistant combined with a force sensing microneedle could alleviate the challenges of RVC. In this paper we present a comparative study of manual and robot assisted retinal vein cannulation in chicken chorioallantoic membrane (CAM) using a force sensing microneedle tool. The results indicate that the average puncture force and average force during the infusion period are larger in manual mode than in robot assisted mode. Moreover, retinal vein cannulation was more stable during infusion, in robot assisted mode.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Stochastic Force-based Insertion Depth and Tip Position Estimations of Flexible FBG-Equipped Instruments in Robotic Retinal Surgery.\n \n \n \n\n\n \n Ebrahimi, A.; Alambeigi, F.; Sefati, S.; Patel, N.; He, C.; Gehlbach, P. L.; and Iordachita, I.\n\n\n \n\n\n\n In IEEE/ASME Transactions on Mechatronics, pages 1–1, 2020. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{ebrahimi2020kalman,\nannote = {(Under Review)},\nauthor = {Ebrahimi, Ali and Alambeigi, Farshid and Sefati, Shahriar and Patel, Niravkumar and He, Changyan and Gehlbach, Peter Louis and Iordachita, Iulian},\nbooktitle = {IEEE/ASME Transactions on Mechatronics},\ndoi = {10.1109/tmech.2020.3022830},\nissn = {1083-4435},\npages = {1--1},\npublisher = {IEEE},\ntitle = {{Stochastic Force-based Insertion Depth and Tip Position Estimations of Flexible FBG-Equipped Instruments in Robotic Retinal Surgery}},\nyear = {2020}\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n An optimized tilt mechanism for a new steady-hand eye robot.\n \n \n \n\n\n \n Wu, J.; Li, G.; Urias, M.; Patel, N. A.; Liu, Y. H.; Gehlbach, P.; Taylor, R. H.; and Iordachita, I.\n\n\n \n\n\n\n In IEEE International Conference on Intelligent Robots and Systems, pages 3105–3111, 2020. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{wu2020optimized,\nabstract = {Robot-assisted vitreoretinal surgery can filter surgeons' hand tremors and provide safe, accurate tool manipulation. In this paper, we report the design, optimization, and evaluation of a novel tilt mechanism for a new Steady-Hand Eye Robot (SHER). The new tilt mechanism features a four-bar linkage design and has a compact structure. Its kinematic configuration is optimized to minimize the required linear range of motion (LRM) for implementing a virtual remote center-of-motion (V-RCM) while tilting a surgical tool. Due to the different optimization constraints for the robots at the left and right sides of the human head, two configurations of this tilt mechanism are proposed. Experimental results show that the optimized tilt mechanism requires a significantly smaller LRM (e.g. 5.08 mm along Z direction and 8.77 mm along Y direction for left side robot) as compared to the slider-crank tilt mechanism used in the previous SHER (32.39 mm along Z direction and 21.10 mm along Y direction). The feasibility of the proposed tilt mechanism is verified in a mock bilateral robot-assisted vitreoretinal surgery. The ergonomically acceptable robot postures needed to access the surgical field is also determined.},\nauthor = {Wu, Jiahao and Li, Gang and Urias, Muller and Patel, Niravkumar A. and Liu, Yun Hui and Gehlbach, Peter and Taylor, Russell H. and Iordachita, Iulian},\nbooktitle = {IEEE International Conference on Intelligent Robots and Systems},\ndoi = {10.1109/IROS45743.2020.9340741},\nisbn = {9781728162126},\nissn = {21530866},\norganization = {IEEE},\npages = {3105--3111},\ntitle = {{An optimized tilt mechanism for a new steady-hand eye robot}},\nyear = {2020}\n}\n
\n
\n\n\n
\n Robot-assisted vitreoretinal surgery can filter surgeons' hand tremors and provide safe, accurate tool manipulation. In this paper, we report the design, optimization, and evaluation of a novel tilt mechanism for a new Steady-Hand Eye Robot (SHER). The new tilt mechanism features a four-bar linkage design and has a compact structure. Its kinematic configuration is optimized to minimize the required linear range of motion (LRM) for implementing a virtual remote center-of-motion (V-RCM) while tilting a surgical tool. Due to the different optimization constraints for the robots at the left and right sides of the human head, two configurations of this tilt mechanism are proposed. Experimental results show that the optimized tilt mechanism requires a significantly smaller LRM (e.g. 5.08 mm along Z direction and 8.77 mm along Y direction for left side robot) as compared to the slider-crank tilt mechanism used in the previous SHER (32.39 mm along Z direction and 21.10 mm along Y direction). The feasibility of the proposed tilt mechanism is verified in a mock bilateral robot-assisted vitreoretinal surgery. The ergonomically acceptable robot postures needed to access the surgical field is also determined.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Auditory feedback effectiveness for enabling safe sclera force in robot-assisted vitreoretinal surgery: A multi-user study.\n \n \n \n\n\n \n Ebrahimi, A.; Roizenblatt, M.; Patel, N.; Gehlbach, P.; and Iordachita, I.\n\n\n \n\n\n\n In IEEE International Conference on Intelligent Robots and Systems, pages 3274–3280, 2020. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{ebrahimi2020auditory,\nabstract = {Robot-assisted retinal surgery has become increasingly prevalent in recent years in part due to the potential for robots to help surgeons improve the safety of an immensely delicate and difficult set of tasks. The integration of robots into retinal surgery has resulted in diminished surgeon perception of tool-to-tissue interaction forces due to robot's stiffness. The tactile perception of these interaction forces (sclera force) has long been a crucial source of feedback for surgeons who rely on them to guide surgical maneuvers and to prevent damaging forces from being applied to the eye. This problem is exacerbated when there are unfavorable sclera forces originating from patient movements (dynamic eyeball manipulation) during surgery which may cause the sclera forces to increase even drastically. In this study we aim at evaluating the efficacy of providing warning auditory feedback based on the level of sclera force measured by force sensing instruments. The intent is to enhance safety during dynamic eye manipulations in robot-assisted retinal surgery. The disturbances caused by lateral movement of patient's head are simulated using a piezo-actuated linear stage. The Johns Hopkins Steady-Hand Eye Robot (SHER), is then used in a multi-user experiment. Twelve participants are asked to perform a mock retinal surgery by following painted vessels inside an eye phantom using a force sensing instrument while auditory feedback is provided. The results indicate that the users are able to handle the eye motion disturbances while maintaining the sclera forces within safe boundaries when audio feedback is provided.},\nauthor = {Ebrahimi, Ali and Roizenblatt, Marina and Patel, Niravkumar and Gehlbach, Peter and Iordachita, Iulian},\nbooktitle = {IEEE International Conference on Intelligent Robots and Systems},\ndoi = {10.1109/IROS45743.2020.9341350},\nisbn = {9781728162126},\nissn = {21530866},\norganization = {IEEE},\npages = {3274--3280},\ntitle = {{Auditory feedback effectiveness for enabling safe sclera force in robot-assisted vitreoretinal surgery: A multi-user study}},\nyear = {2020}\n}\n
\n
\n\n\n
\n Robot-assisted retinal surgery has become increasingly prevalent in recent years in part due to the potential for robots to help surgeons improve the safety of an immensely delicate and difficult set of tasks. The integration of robots into retinal surgery has resulted in diminished surgeon perception of tool-to-tissue interaction forces due to robot's stiffness. The tactile perception of these interaction forces (sclera force) has long been a crucial source of feedback for surgeons who rely on them to guide surgical maneuvers and to prevent damaging forces from being applied to the eye. This problem is exacerbated when there are unfavorable sclera forces originating from patient movements (dynamic eyeball manipulation) during surgery which may cause the sclera forces to increase even drastically. In this study we aim at evaluating the efficacy of providing warning auditory feedback based on the level of sclera force measured by force sensing instruments. The intent is to enhance safety during dynamic eye manipulations in robot-assisted retinal surgery. The disturbances caused by lateral movement of patient's head are simulated using a piezo-actuated linear stage. The Johns Hopkins Steady-Hand Eye Robot (SHER), is then used in a multi-user experiment. Twelve participants are asked to perform a mock retinal surgery by following painted vessels inside an eye phantom using a force sensing instrument while auditory feedback is provided. The results indicate that the users are able to handle the eye motion disturbances while maintaining the sclera forces within safe boundaries when audio feedback is provided.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Fully Actuated Body-Mounted Robotic Assistant for MRI-Guided Low Back Pain Injection.\n \n \n \n\n\n \n Li, G.; Patel, N. A.; Liu, W.; Wu, D.; Sharma, K.; Cleary, K.; Fritz, J.; and Iordachita, I.\n\n\n \n\n\n\n In Proceedings - IEEE International Conference on Robotics and Automation, pages 5495–5501, 2020. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Li2020,\nabstract = {This paper reports the development of a fully actuated body-mounted robotic assistant for MRI-guided low back pain injection. The robot is designed with a 4-DOF needle alignment module and a 2-DOF remotely actuated needle driver module. The 6-DOF fully actuated robot can operate inside the scanner bore during imaging; hence, minimizing the need of moving the patient in or out of the scanner during the procedure, and thus potentially reducing the procedure time and streamlining the workflow. The robot is built with a lightweight and compact structure that can be attached directly to the patient's lower back using straps; therefore, attenuating the effect of patient motion by moving with the patient. The novel remote actuation design of the needle driver module with beaded chain transmission can reduce the weight and profile on the patient, as well as minimize the imaging degradation caused by the actuation electronics. The free space positioning accuracy of the system was evaluated with an optical tracking system, demonstrating the mean absolute errors (MAE) of the tip position to be 0.99±0.46 mm and orientation to be 0.99±0.65°. Qualitative imaging quality evaluation was performed± on a human volunteer, revealing minimal visible image degradation that should not affect the procedure. The mounting stability of the system was assessed on a human volunteer, indicating the 3D position variation of target movement with respect to the robot frame to be less than 0.7 mm.},\nauthor = {Li, Gang and Patel, Niravkumar A. and Liu, Weiqiang and Wu, Di and Sharma, Karun and Cleary, Kevin and Fritz, Jan and Iordachita, Iulian},\nbooktitle = {Proceedings - IEEE International Conference on Robotics and Automation},\ndoi = {10.1109/ICRA40945.2020.9197534},\nisbn = {9781728173955},\nissn = {10504729},\npages = {5495--5501},\ntitle = {{A Fully Actuated Body-Mounted Robotic Assistant for MRI-Guided Low Back Pain Injection}},\nyear = {2020}\n}\n
\n
\n\n\n
\n This paper reports the development of a fully actuated body-mounted robotic assistant for MRI-guided low back pain injection. The robot is designed with a 4-DOF needle alignment module and a 2-DOF remotely actuated needle driver module. The 6-DOF fully actuated robot can operate inside the scanner bore during imaging; hence, minimizing the need of moving the patient in or out of the scanner during the procedure, and thus potentially reducing the procedure time and streamlining the workflow. The robot is built with a lightweight and compact structure that can be attached directly to the patient's lower back using straps; therefore, attenuating the effect of patient motion by moving with the patient. The novel remote actuation design of the needle driver module with beaded chain transmission can reduce the weight and profile on the patient, as well as minimize the imaging degradation caused by the actuation electronics. The free space positioning accuracy of the system was evaluated with an optical tracking system, demonstrating the mean absolute errors (MAE) of the tip position to be 0.99±0.46 mm and orientation to be 0.99±0.65°. Qualitative imaging quality evaluation was performed± on a human volunteer, revealing minimal visible image degradation that should not affect the procedure. The mounting stability of the system was assessed on a human volunteer, indicating the 3D position variation of target movement with respect to the robot frame to be less than 0.7 mm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Force-based Safe Vein Cannulation in Robot-Assisted Retinal Surgery: A Preliminary Study.\n \n \n \n\n\n \n Wu, J.; He, C.; Zhou, M.; Ebrahimi, A.; Urias, M.; Patel, N. A.; Liu, Y. H.; Gehlbach, P.; and Iordachita, I.\n\n\n \n\n\n\n In 2020 International Symposium on Medical Robotics, ISMR 2020, pages 8–14, 2020. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Wu2020,\nabstract = {Retinal vein cannulation (RVC) is a potential treatment for retinal vein occlusion (RVO). Manual surgery has limitations in RVC due to extremely small vessels and instruments involved, as well as the presence of physiological hand tremor. Robot-Assisted retinal surgery may be a better approach to smooth and accurate instrument manipulation during this procedure. Motion of the retina and cornea related to heartbeat may be associated with unexpected forces between the tool and eyeball. In this paper, we propose a force-based control strategy to automatically compensate for the movement of the retina maintaining the tip force and sclera force in a predetermined small range. A dual force-sensing tool is used to monitor the tip force, sclera force and tool insertion depth, which will be used to derive a desired joint velocity for the robot via a modified admittance controller. Then the tool is manipulated to compensate for the movement of the retina as well as reduce the tip force and sclera force. Quantitative experiments are conducted to verify the efficacy of the control strategy and a user study is also conducted by a retinal surgeon to demonstrate the advantages of our automatic compensation approach.},\nauthor = {Wu, Jiahao and He, Changyan and Zhou, Mingchuan and Ebrahimi, Ali and Urias, Muller and Patel, Niravkumar A. and Liu, Yun Hui and Gehlbach, Peter and Iordachita, Iulian},\nbooktitle = {2020 International Symposium on Medical Robotics, ISMR 2020},\ndoi = {10.1109/ISMR48331.2020.9312945},\nisbn = {9781728154886},\npages = {8--14},\ntitle = {{Force-based Safe Vein Cannulation in Robot-Assisted Retinal Surgery: A Preliminary Study}},\nyear = {2020}\n}\n
\n
\n\n\n
\n Retinal vein cannulation (RVC) is a potential treatment for retinal vein occlusion (RVO). Manual surgery has limitations in RVC due to extremely small vessels and instruments involved, as well as the presence of physiological hand tremor. Robot-Assisted retinal surgery may be a better approach to smooth and accurate instrument manipulation during this procedure. Motion of the retina and cornea related to heartbeat may be associated with unexpected forces between the tool and eyeball. In this paper, we propose a force-based control strategy to automatically compensate for the movement of the retina maintaining the tip force and sclera force in a predetermined small range. A dual force-sensing tool is used to monitor the tip force, sclera force and tool insertion depth, which will be used to derive a desired joint velocity for the robot via a modified admittance controller. Then the tool is manipulated to compensate for the movement of the retina as well as reduce the tip force and sclera force. Quantitative experiments are conducted to verify the efficacy of the control strategy and a user study is also conducted by a retinal surgeon to demonstrate the advantages of our automatic compensation approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n FBG-based Kalman Filtering and Control of Tool Insertion Depth for Safe Robot-Assisted Vitrectomy.\n \n \n \n\n\n \n Ebrahimi, A.; Urias, M.; Patel, N.; Gehlbach, P.; Alambeigi, F.; and Iordachita, I.\n\n\n \n\n\n\n In 2020 International Symposium on Medical Robotics, ISMR 2020, pages 146–151, 2020. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{Ebrahimi2020,\nabstract = {Vitrectomy is that portion of retinal surgery in which the vitreous gel is removed either as a definitive treatment or to provide direct tool access to the retina. This procedure should be conducted prior to several eye surgeries in order to provide better access to the eyeball posterior. It is a relatively repeatable and straight forward procedure that lends itself to robotic assistance or potentially autonomous performance if tool contact with critical structures can be avoided. One of the detrimental incidences that can occur during the robot-Assisted vitrectomy is when the robot penetrates the tool more than allowed boundaries into the eyeball toward retina. In this paper, we provide filtering and control to guide instrument insertion depth in order to avoid tool-To-retina contact. For this purpose, first the tool insertion depth measurement is improved using a Kalman filtering (KF) algorithm. This improved measurement is then used in an adaptive control strategy by which the robot reduces the tool insertion depth based on a predefined and safe trajectory for it, when safe boundaries are overstepped. The performance of the insertion depth safety control system is then compared to one in which the insertion depth is not passed through a Kalman filter prior to being fed to the control system. Our results indicate that applying KF in the adaptive control of the robot enhances procedure safety and enables the robot to always keep the tool insertion depth under the safe levels.},\nauthor = {Ebrahimi, Ali and Urias, Muller and Patel, Niravkumar and Gehlbach, Peter and Alambeigi, Farshid and Iordachita, Iulian},\nbooktitle = {2020 International Symposium on Medical Robotics, ISMR 2020},\ndoi = {10.1109/ISMR48331.2020.9312931},\nisbn = {9781728154886},\npages = {146--151},\ntitle = {{FBG-based Kalman Filtering and Control of Tool Insertion Depth for Safe Robot-Assisted Vitrectomy}},\nyear = {2020}\n}\n
\n
\n\n\n
\n Vitrectomy is that portion of retinal surgery in which the vitreous gel is removed either as a definitive treatment or to provide direct tool access to the retina. This procedure should be conducted prior to several eye surgeries in order to provide better access to the eyeball posterior. It is a relatively repeatable and straight forward procedure that lends itself to robotic assistance or potentially autonomous performance if tool contact with critical structures can be avoided. One of the detrimental incidences that can occur during the robot-Assisted vitrectomy is when the robot penetrates the tool more than allowed boundaries into the eyeball toward retina. In this paper, we provide filtering and control to guide instrument insertion depth in order to avoid tool-To-retina contact. For this purpose, first the tool insertion depth measurement is improved using a Kalman filtering (KF) algorithm. This improved measurement is then used in an adaptive control strategy by which the robot reduces the tool insertion depth based on a predefined and safe trajectory for it, when safe boundaries are overstepped. The performance of the insertion depth safety control system is then compared to one in which the insertion depth is not passed through a Kalman filter prior to being fed to the control system. Our results indicate that applying KF in the adaptive control of the robot enhances procedure safety and enables the robot to always keep the tool insertion depth under the safe levels.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sclera Force Control in Robot-assisted Eye Surgery: Adaptive Force Control vs. Auditory Feedback.\n \n \n \n\n\n \n Ebrahimi, A.; He, C.; Patel, N.; Kobilarov, M.; Gehlbach, P.; and Iordachita, I.\n\n\n \n\n\n\n In 2019 International Symposium on Medical Robotics, ISMR 2019, pages 1–7, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{ebrahimi2019sclera,\nabstract = {Surgeon hand tremor limits human capability during microsurgical procedures such as those that treat the eye. In contrast, elimination of hand tremor through the introduction of microsurgical robots diminishes the surgeons tactile perception of useful and familiar tool-to-sclera forces. While the large mass and inertia of eye surgical robot prevents surgeon microtremor, loss of perception of small scleral forces may put the sclera at risk of injury. In this paper, we have applied and compared two different methods to assure the safety of sclera tissue during robot-assisted eye surgery. In the active control method, an adaptive force control strategy is implemented on the Steady-Hand Eye Robot in order to control the magnitude of scleral forces when they exceed safe boundaries. This autonomous force compensation is then compared to a passive force control method in which the surgeon performs manual adjustments in response to the provided audio feedback proportional to the magnitude of sclera force. A pilot study with three users indicate that the active control method is potentially more efficient.},\narchivePrefix = {arXiv},\narxivId = {1901.03307},\nauthor = {Ebrahimi, Ali and He, Changyan and Patel, Niravkumar and Kobilarov, Marin and Gehlbach, Peter and Iordachita, Iulian},\nbooktitle = {2019 International Symposium on Medical Robotics, ISMR 2019},\ndoi = {10.1109/ISMR.2019.8710205},\neprint = {1901.03307},\nisbn = {9781538678251},\norganization = {IEEE},\npages = {1--7},\ntitle = {{Sclera Force Control in Robot-assisted Eye Surgery: Adaptive Force Control vs. Auditory Feedback}},\nyear = {2019}\n}\n
\n
\n\n\n
\n Surgeon hand tremor limits human capability during microsurgical procedures such as those that treat the eye. In contrast, elimination of hand tremor through the introduction of microsurgical robots diminishes the surgeons tactile perception of useful and familiar tool-to-sclera forces. While the large mass and inertia of eye surgical robot prevents surgeon microtremor, loss of perception of small scleral forces may put the sclera at risk of injury. In this paper, we have applied and compared two different methods to assure the safety of sclera tissue during robot-assisted eye surgery. In the active control method, an adaptive force control strategy is implemented on the Steady-Hand Eye Robot in order to control the magnitude of scleral forces when they exceed safe boundaries. This autonomous force compensation is then compared to a passive force control method in which the surgeon performs manual adjustments in response to the provided audio feedback proportional to the magnitude of sclera force. A pilot study with three users indicate that the active control method is potentially more efficient.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Adaptive control of sclera force and insertion depth for safe robot-assisted retinal surgery.\n \n \n \n\n\n \n Ebrahimi, A.; Patel, N.; He, C.; Gehlbach, P.; Kobilarov, M.; and Iordachita, I.\n\n\n \n\n\n\n In Proceedings - IEEE International Conference on Robotics and Automation, volume 2019-May, pages 9073–9079, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{ebrahimi2019adaptive,\nabstract = {One of the significant challenges of moving from manual to robot-assisted retinal surgery is the loss of perception of forces applied to the sclera (sclera forces) by the surgical tools. This damping of force feedback is primarily due to the stiffness and inertia of the robot. The diminished perception of tool-to-eye interactions might put the eye tissue at high risk of injury due to excessive sclera forces or extreme insertion of the tool into the eye. In the present study therefore a 1-dimensional adaptive control method is customized for 3-dimensional control of sclera force components and tool insertion depth and then implemented on the velocity-controlled Johns Hopkins Steady-Hand Eye Robot. The control method enables the robot to perform autonomous motions to make the sclera force and/or insertion depth of the tool tip to follow pre-defined desired and safe trajectories when they exceed safe bounds. A robotic light pipe holding application in retinal surgery is also investigated using the adaptive control method. The implementation results indicate that the adaptive control is able to achieve the imposed safety margins and prevent sclera forces and insertion depth from exceeding safe boundaries.},\nauthor = {Ebrahimi, Ali and Patel, Niravkumar and He, Changyan and Gehlbach, Peter and Kobilarov, Marin and Iordachita, Iulian},\nbooktitle = {Proceedings - IEEE International Conference on Robotics and Automation},\ndoi = {10.1109/ICRA.2019.8793658},\nisbn = {9781538660263},\nissn = {10504729},\norganization = {IEEE},\npages = {9073--9079},\ntitle = {{Adaptive control of sclera force and insertion depth for safe robot-assisted retinal surgery}},\nvolume = {2019-May},\nyear = {2019}\n}\n
\n
\n\n\n
\n One of the significant challenges of moving from manual to robot-assisted retinal surgery is the loss of perception of forces applied to the sclera (sclera forces) by the surgical tools. This damping of force feedback is primarily due to the stiffness and inertia of the robot. The diminished perception of tool-to-eye interactions might put the eye tissue at high risk of injury due to excessive sclera forces or extreme insertion of the tool into the eye. In the present study therefore a 1-dimensional adaptive control method is customized for 3-dimensional control of sclera force components and tool insertion depth and then implemented on the velocity-controlled Johns Hopkins Steady-Hand Eye Robot. The control method enables the robot to perform autonomous motions to make the sclera force and/or insertion depth of the tool tip to follow pre-defined desired and safe trajectories when they exceed safe bounds. A robotic light pipe holding application in retinal surgery is also investigated using the adaptive control method. The implementation results indicate that the adaptive control is able to achieve the imposed safety margins and prevent sclera forces and insertion depth from exceeding safe boundaries.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Enabling technology for safe robot-assisted retinal surgery: Early warning for unsafe scleral force.\n \n \n \n\n\n \n He, C.; Patel, N.; Iordachita, I.; and Kobilarov, M.\n\n\n \n\n\n\n In Proceedings - IEEE International Conference on Robotics and Automation, volume 2019-May, pages 3889–3894, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{he2019enabling,\nabstract = {Retinal microsurgery is technically demanding and requires high surgical skill with very little room for manipulation error. During surgery the tool needs to be inserted into the eyeball while maintaining constant contact with the sclera. Any unexpected manipulation could cause extreme tool-sclera contact force (scleral force) thus damage the sclera. The introduction of robotic assistance could enhance and expand the surgeon's manipulation capabilities during surgery. However, the potential intra-operative danger from surgeon's mis-operations remains difficult to detect and prevent by existing robotic systems. Therefore, we propose a method to predict imminent unsafe manipulation in robot-assisted retinal surgery and generate feedback to the surgeon via auditory substitution. The surgeon could then react to the possible unsafe events in advance. This work specifically focuses on minimizing sclera damage using a force-sensing tool calibrated to measure small scleral forces. A recurrent neural network is designed and trained to predict the force safety status up to 500 milliseconds in the future. The system is implemented using an existing 'steady hand' eye robot. A vessel following manipulation task is designed and performed on a dry eye phantom to emulate the retinal surgery and to analyze the proposed method. Finally, preliminary validation experiments are performed by five users, the results of which indicate that the proposed early warning system could help to reduce the number of unsafe manipulation events.},\nauthor = {He, Changyan and Patel, Niravkumar and Iordachita, Iulian and Kobilarov, Marin},\nbooktitle = {Proceedings - IEEE International Conference on Robotics and Automation},\ndoi = {10.1109/ICRA.2019.8794427},\nisbn = {9781538660263},\nissn = {10504729},\norganization = {IEEE},\npages = {3889--3894},\ntitle = {{Enabling technology for safe robot-assisted retinal surgery: Early warning for unsafe scleral force}},\nvolume = {2019-May},\nyear = {2019}\n}\n
\n
\n\n\n
\n Retinal microsurgery is technically demanding and requires high surgical skill with very little room for manipulation error. During surgery the tool needs to be inserted into the eyeball while maintaining constant contact with the sclera. Any unexpected manipulation could cause extreme tool-sclera contact force (scleral force) thus damage the sclera. The introduction of robotic assistance could enhance and expand the surgeon's manipulation capabilities during surgery. However, the potential intra-operative danger from surgeon's mis-operations remains difficult to detect and prevent by existing robotic systems. Therefore, we propose a method to predict imminent unsafe manipulation in robot-assisted retinal surgery and generate feedback to the surgeon via auditory substitution. The surgeon could then react to the possible unsafe events in advance. This work specifically focuses on minimizing sclera damage using a force-sensing tool calibrated to measure small scleral forces. A recurrent neural network is designed and trained to predict the force safety status up to 500 milliseconds in the future. The system is implemented using an existing 'steady hand' eye robot. A vessel following manipulation task is designed and performed on a dry eye phantom to emulate the retinal surgery and to analyze the proposed method. Finally, preliminary validation experiments are performed by five users, the results of which indicate that the proposed early warning system could help to reduce the number of unsafe manipulation events.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Remotely Actuated Needle Driving Device for MRI-Guided Percutaneous Interventions: Force and Accuracy Evaluation.\n \n \n \n\n\n \n Wu, D.; Li, G.; Patel, N.; Yan, J.; Kim, G. H.; Monfaredi, R.; Cleary, K.; and Iordachita, I.\n\n\n \n\n\n\n In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 1985–1989, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{wu2019remotely,\nabstract = {This paper presents a 2 degrees-of-freedom (DOF) remotely actuated needle driving device for Magnetic Resonance Imaging (MRI) guided pain injections. The device is evaluated in phantom studies under real-time MRI guidance. The force and torque asserted by the device on the 4-DOF base robot are measured. The needle driving device consists of a needle driver, a 1.2-meter long beaded chain transmission, an actuation box, a robot controller and a Graphical User Interface (GUI). The needle driver can fit within a typical MRI scanner bore and is remotely actuated at the end of the MRI table through a novel beaded chain transmission. The remote actuation mechanism significantly reduces the weight and size of the needle driver at the patient end as well as the artifacts introduced by the motors. The clinician can manually steer the needle by rotating the knobs on the actuation box or remotely through a software interface in the MRI console room. The force and torque resulting from the needle driver in various configurations both in static and dynamic status were measured and reported. An accuracy experiment in the MRI environment under real-time image feedback demonstrates a small mean targeting error (<1.5 mm) in a phantom study.},\nauthor = {Wu, Di and Li, Gang and Patel, Niravkumar and Yan, Jiawen and Kim, Gyeong Hu and Monfaredi, Reza and Cleary, Kevin and Iordachita, Iulian},\nbooktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},\ndoi = {10.1109/EMBC.2019.8857260},\nisbn = {9781538613115},\nissn = {1557170X},\norganization = {IEEE},\npages = {1985--1989},\npmid = {31946289},\ntitle = {{Remotely Actuated Needle Driving Device for MRI-Guided Percutaneous Interventions: Force and Accuracy Evaluation}},\nyear = {2019}\n}\n
\n
\n\n\n
\n This paper presents a 2 degrees-of-freedom (DOF) remotely actuated needle driving device for Magnetic Resonance Imaging (MRI) guided pain injections. The device is evaluated in phantom studies under real-time MRI guidance. The force and torque asserted by the device on the 4-DOF base robot are measured. The needle driving device consists of a needle driver, a 1.2-meter long beaded chain transmission, an actuation box, a robot controller and a Graphical User Interface (GUI). The needle driver can fit within a typical MRI scanner bore and is remotely actuated at the end of the MRI table through a novel beaded chain transmission. The remote actuation mechanism significantly reduces the weight and size of the needle driver at the patient end as well as the artifacts introduced by the motors. The clinician can manually steer the needle by rotating the knobs on the actuation box or remotely through a software interface in the MRI console room. The force and torque resulting from the needle driver in various configurations both in static and dynamic status were measured and reported. An accuracy experiment in the MRI environment under real-time image feedback demonstrates a small mean targeting error (<1.5 mm) in a phantom study.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Body-Mounted MRI-Conditional Parallel Robot for Percutaneous Interventions Structural Improvement, Calibration, and Accuracy Analysis.\n \n \n \n\n\n \n Yan, J.; Patel, N.; Di Wu, G. L.; Cleary, K.; and Iordachita, I.\n\n\n \n\n\n\n In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 1990–1993, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{yan2019body,\nabstract = {To assist in percutaneous interventions in the lower back under magnetic resonance imaging guidance, a 4 degree-of-freedom body-mounted parallel robot is developed. The robot structure is improved comparatively to a previously developed robot, to increase the stability, enhance accuracy, and streamline the assembly and calibration process. The optimized assembly and calibration workflows are carried out, and the system accuracy is evaluated. The results demonstrate that the system positioning and angular accuracy are 2.28±1.1 mm and 1.94±1.01 degrees respectively. The results show that the new system has a promising and consistent behavior.},\nauthor = {Yan, Jiawen and Patel, Niravkumar and {Di Wu}, Gang Li and Cleary, Kevin and Iordachita, Iulian},\nbooktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},\ndoi = {10.1109/EMBC.2019.8857667},\nisbn = {9781538613115},\nissn = {1557170X},\norganization = {IEEE},\npages = {1990--1993},\npmid = {31946290},\ntitle = {{Body-Mounted MRI-Conditional Parallel Robot for Percutaneous Interventions Structural Improvement, Calibration, and Accuracy Analysis}},\nyear = {2019}\n}\n
\n
\n\n\n
\n To assist in percutaneous interventions in the lower back under magnetic resonance imaging guidance, a 4 degree-of-freedom body-mounted parallel robot is developed. The robot structure is improved comparatively to a previously developed robot, to increase the stability, enhance accuracy, and streamline the assembly and calibration process. The optimized assembly and calibration workflows are carried out, and the system accuracy is evaluated. The results demonstrate that the system positioning and angular accuracy are 2.28±1.1 mm and 1.94±1.01 degrees respectively. The results show that the new system has a promising and consistent behavior.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Shoulder-mounted Robot for MRI-Guided Arthrography: Clinically Optimized System.\n \n \n \n\n\n \n Kim, G. H.; Patel, N.; Yan, J.; Wu, D.; Li, G.; Cleary, K.; and Iordachita, I.\n\n\n \n\n\n\n In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 1977–1980, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{kim2019shoulder,\nabstract = {This paper introduces our compact and lightweight patient-mounted MRI-compatible 4 degree-of-freedom (DOF) robot with an improved transmission system for MRI-guided arthrography procedures. This robot could make the traditional two-stage arthrography procedure (fluoroscopy-guided needle insertion followed by a diagnostic MRI scan) simpler by converting it to a one-stage procedure but more accurate with an optimized system. The new transmission system is proposed, using different mechanical components, to result in higher accuracy of needle insertion. The results of a recent accuracy study are reported. Experimental results show that the new system has an error of 1.7 mm in positioning the needle tip at a depth of 50 mm, which indicates high accuracy.},\nauthor = {Kim, Gyeong Hu and Patel, Niravkumar and Yan, Jiawen and Wu, Di and Li, Gang and Cleary, Kevin and Iordachita, Iulian},\nbooktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},\ndoi = {10.1109/EMBC.2019.8856630},\nisbn = {9781538613115},\nissn = {1557170X},\norganization = {IEEE},\npages = {1977--1980},\npmid = {31946287},\ntitle = {{Shoulder-mounted Robot for MRI-Guided Arthrography: Clinically Optimized System}},\nyear = {2019}\n}\n
\n
\n\n\n
\n This paper introduces our compact and lightweight patient-mounted MRI-compatible 4 degree-of-freedom (DOF) robot with an improved transmission system for MRI-guided arthrography procedures. This robot could make the traditional two-stage arthrography procedure (fluoroscopy-guided needle insertion followed by a diagnostic MRI scan) simpler by converting it to a one-stage procedure but more accurate with an optimized system. The new transmission system is proposed, using different mechanical components, to result in higher accuracy of needle insertion. The results of a recent accuracy study are reported. Experimental results show that the new system has an error of 1.7 mm in positioning the needle tip at a depth of 50 mm, which indicates high accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Towards securing the sclera against patient involuntary head movement in robotic retinal surgery.\n \n \n \n\n\n \n Ebrahimi, A.; Urias, M.; Patel, N.; He, C.; Taylor, R. H.; Gehlbach, P.; and Iordachita, I.\n\n\n \n\n\n\n In 2019 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{ebrahimi2019romanem,\nabstract = {Retinal surgery involves manipulating very delicate tissues within the confined area of eyeball. In such demanding practices, patient involuntary head movement might abruptly raise tool-to-eyeball interaction forces which would be detrimental to eye. This study is aimed at implementing different force control strategies and evaluating how they contribute to attaining sclera force safety while patient head drift is present. To simulate patient head movement, a piezoelectric-actuated linear stage is used to produce random motions in a single direction in random time intervals. Having an eye phantom attached to the linear stage then an experienced eye surgeon is asked to manipulate the eye and repeat a mock surgical task both with and without the assist of the Steady-Hand Eye Robot. For the freehand case, warning sounds were provided to the surgeon as auditory feedback to alert him about excessive slclra forces. For the robot-assisted experiments two variants of an adaptive sclera force control and a virtual fixture method were deployed to see how they can maintain eye safety under head drift circumstances. The results indicate that the developed robot control strategies are able to compensate for head drift and keep the sclera forces under safe levels as well as the free hand operation.},\nauthor = {Ebrahimi, Ali and Urias, Muller and Patel, Niravkumar and He, Changyan and Taylor, Russell H. and Gehlbach, Peter and Iordachita, Iulian},\nbooktitle = {2019 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019},\ndoi = {10.1109/RO-MAN46459.2019.8956341},\nisbn = {9781728126227},\npublisher = {IEEE},\ntitle = {{Towards securing the sclera against patient involuntary head movement in robotic retinal surgery}},\nyear = {2019}\n}\n
\n
\n\n\n
\n Retinal surgery involves manipulating very delicate tissues within the confined area of eyeball. In such demanding practices, patient involuntary head movement might abruptly raise tool-to-eyeball interaction forces which would be detrimental to eye. This study is aimed at implementing different force control strategies and evaluating how they contribute to attaining sclera force safety while patient head drift is present. To simulate patient head movement, a piezoelectric-actuated linear stage is used to produce random motions in a single direction in random time intervals. Having an eye phantom attached to the linear stage then an experienced eye surgeon is asked to manipulate the eye and repeat a mock surgical task both with and without the assist of the Steady-Hand Eye Robot. For the freehand case, warning sounds were provided to the surgeon as auditory feedback to alert him about excessive slclra forces. For the robot-assisted experiments two variants of an adaptive sclera force control and a virtual fixture method were deployed to see how they can maintain eye safety under head drift circumstances. The results indicate that the developed robot control strategies are able to compensate for head drift and keep the sclera forces under safe levels as well as the free hand operation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sclera Force Evaluation during Vitreoretinal Surgeries in Ex Vivo Porcine Eye Model.\n \n \n \n\n\n \n Patel, N.; Urias, M.; Ebrahimi, A.; He, C.; Gehlbach, P.; and Iordachita, I.\n\n\n \n\n\n\n In Proceedings of IEEE Sensors, volume 2019-Octob, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{patel2019sensorexvivo,\nabstract = {Vitreoretinal surgery is among the most challenging microsurgical procedures as it requires precise tool manipulation in a constrained environment, while the tool-tissue interaction forces are at the human perception limits. While tool tip forces are certainly important, the scleral forces at the tool insertion ports are also important. Clinicians often rely on these forces to manipulate the eyeball position during surgery. Measuring sclera forces could enable valuable sensory input to avoid tissue damage, especially for a cooperatively controlled robotic assistant that otherwise removes the sensation of these familiar intraoperative forces. Previously, our group has measured sclera forces in phantom experiments. However, to the best of our knowledge, there are no published data measuring scleral forces in biological (ex-vivo/in-vivo) eye models. In this paper, we measured sclera forces in ex-vivo porcine eye model. A Fiber Bragg Grating (FBG) based force sensing instrument with a diameter of $\\sim$900 $\\mu$m and a resolution of $\\sim$1 mN was used to measure the forces while the clinician-subject followed retinal vessels in manual and robot-assisted modes. Analysis of measured forces show that the average sclera force in manual mode was 133.74 mN while in robot-assisted mode was 146.03 mN.},\nauthor = {Patel, Niravkumar and Urias, Muller and Ebrahimi, Ali and He, Changyan and Gehlbach, Peter and Iordachita, Iulian},\nbooktitle = {Proceedings of IEEE Sensors},\ndoi = {10.1109/SENSORS43011.2019.8956820},\nisbn = {9781728116341},\nissn = {21689229},\npublisher = {IEEE},\ntitle = {{Sclera Force Evaluation during Vitreoretinal Surgeries in Ex Vivo Porcine Eye Model}},\nvolume = {2019-Octob},\nyear = {2019}\n}\n
\n
\n\n\n
\n Vitreoretinal surgery is among the most challenging microsurgical procedures as it requires precise tool manipulation in a constrained environment, while the tool-tissue interaction forces are at the human perception limits. While tool tip forces are certainly important, the scleral forces at the tool insertion ports are also important. Clinicians often rely on these forces to manipulate the eyeball position during surgery. Measuring sclera forces could enable valuable sensory input to avoid tissue damage, especially for a cooperatively controlled robotic assistant that otherwise removes the sensation of these familiar intraoperative forces. Previously, our group has measured sclera forces in phantom experiments. However, to the best of our knowledge, there are no published data measuring scleral forces in biological (ex-vivo/in-vivo) eye models. In this paper, we measured sclera forces in ex-vivo porcine eye model. A Fiber Bragg Grating (FBG) based force sensing instrument with a diameter of $∼$900 $μ$m and a resolution of $∼$1 mN was used to measure the forces while the clinician-subject followed retinal vessels in manual and robot-assisted modes. Analysis of measured forces show that the average sclera force in manual mode was 133.74 mN while in robot-assisted mode was 146.03 mN.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Novel Semi-Autonomous Control Framework for Retina Confocal Endomicroscopy Scanning.\n \n \n \n\n\n \n Li, Z.; Yang, G. Z.; Taylor, R. H.; Shahbazi, M.; Patel, N.; Sullivan, E. O.; Zhang, H.; Vyas, K.; Chalasani, P.; Gehlbach, P. L.; and Iordachita, I.\n\n\n \n\n\n\n In IEEE International Conference on Intelligent Robots and Systems, pages 7083–7090, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@inproceedings{li2019confocaleyerobot,\nabstract = {In this paper, a novel semi-autonomous control framework is presented for enabling probe-based confocal laser endomicroscopy (pCLE) scan of the retinal tissue. With pCLE, retinal layers such as nerve fiber layer (NFL) and retinal ganglion cell (RGC) can be scanned and characterized in real-time for an improved diagnosis and surgical outcome prediction. However, the limited field of view of the pCLE system and the micron-scale optimal focus distance of the probe, which are in the order of physiological hand tremor, act as barriers to successful manual scan of retinal tissue.Therefore, a novel sensorless framework is proposed for real-time semi-autonomous endomicroscopy scanning during retinal surgery. The framework consists of the Steady-Hand Eye Robot (SHER) integrated with a pCLE system, where the motion of the probe is controlled semi-autonomously. Through a hybrid motion control strategy, the system autonomously controls the confocal probe to optimize the sharpness and quality of the pCLE images, while providing the surgeon with the ability to scan the tissue in a tremor-free manner. Effectiveness of the proposed architecture is validated through experimental evaluations as well as a user study involving 9 participants. It is shown through statistical analyses that the proposed framework can reduce the work load experienced by the users in a statistically-significant manner, while also enhancing their performance in retaining pCLE images with optimized quality.},\nauthor = {Li, Zhaoshuo and Yang, Guang Zhong and Taylor, Russell H. and Shahbazi, Mahya and Patel, Niravkumar and Sullivan, Eimear O. and Zhang, Haojie and Vyas, Khushi and Chalasani, Preetham and Gehlbach, Peter L. and Iordachita, Iulian},\nbooktitle = {IEEE International Conference on Intelligent Robots and Systems},\ndoi = {10.1109/IROS40897.2019.8967751},\nisbn = {9781728140049},\nissn = {21530866},\norganization = {IEEE},\npages = {7083--7090},\ntitle = {{A Novel Semi-Autonomous Control Framework for Retina Confocal Endomicroscopy Scanning}},\nyear = {2019}\n}\n
\n
\n\n\n
\n In this paper, a novel semi-autonomous control framework is presented for enabling probe-based confocal laser endomicroscopy (pCLE) scan of the retinal tissue. With pCLE, retinal layers such as nerve fiber layer (NFL) and retinal ganglion cell (RGC) can be scanned and characterized in real-time for an improved diagnosis and surgical outcome prediction. However, the limited field of view of the pCLE system and the micron-scale optimal focus distance of the probe, which are in the order of physiological hand tremor, act as barriers to successful manual scan of retinal tissue.Therefore, a novel sensorless framework is proposed for real-time semi-autonomous endomicroscopy scanning during retinal surgery. The framework consists of the Steady-Hand Eye Robot (SHER) integrated with a pCLE system, where the motion of the probe is controlled semi-autonomously. Through a hybrid motion control strategy, the system autonomously controls the confocal probe to optimize the sharpness and quality of the pCLE images, while providing the surgeon with the ability to scan the tissue in a tremor-free manner. Effectiveness of the proposed architecture is validated through experimental evaluations as well as a user study involving 9 participants. It is shown through statistical analyses that the proposed framework can reduce the work load experienced by the users in a statistically-significant manner, while also enhancing their performance in retaining pCLE images with optimized quality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Remotely Actuated Needle Driving Device for MRI-Guided Percutaneous Interventions.\n \n \n \n\n\n \n Wu, D.; Li, G.; Patel, N.; Yan, J.; Monfaredi, R.; Cleary, K.; and Iordachita, I.\n\n\n \n\n\n\n In 2019 International Symposium on Medical Robotics, ISMR 2019, pages 1–7, 2019. IEEE\n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{wu2019remotely,\nabstract = {In this paper we introduce a remotely actuated MRI-compatible needle driving device for pain injections in the lower back. This device is able to manipulate the needle inside the closed-bore MRI scanner under the control of the interventional radiologist inside both the scanner room and the console room. The device consists of a 2 degrees of freedom (DOF) needle driver and an actuation box. The 2-DOF needle driver is placed inside the scanner bore and driven by the actuation box settled at the end of the table through a beaded chain transmission. This novel remote actuation design could reduce the weight and profile of the needle driver that is mounted on the patient, as well as minimize the potential imaging noise introduced by the actuation electronics. The actuation box is designed to perform needle intervention in both manual and motorized fashion by utilizing a mode switch mechanism. A mechanical hard stop is also incorporated to improve the device's safety. The bench-top accuracy evaluation of the device demonstrated a small mean needle placement error < 1 mm) in a phantom study.},\nauthor = {Wu, Di and Li, Gang and Patel, Niravkumar and Yan, Jiawen and Monfaredi, Reza and Cleary, Kevin and Iordachita, Iulian},\nbooktitle = {2019 International Symposium on Medical Robotics, ISMR 2019},\ndoi = {10.1109/ISMR.2019.8710176},\nisbn = {9781538678251},\nkeywords = {MRI-guided intervention,beaded chain transmission,needle driving device,pain management,remote actuation},\norganization = {IEEE},\npages = {1--7},\ntitle = {{Remotely Actuated Needle Driving Device for MRI-Guided Percutaneous Interventions}},\nyear = {2019}\n}\n
\n
\n\n\n
\n In this paper we introduce a remotely actuated MRI-compatible needle driving device for pain injections in the lower back. This device is able to manipulate the needle inside the closed-bore MRI scanner under the control of the interventional radiologist inside both the scanner room and the console room. The device consists of a 2 degrees of freedom (DOF) needle driver and an actuation box. The 2-DOF needle driver is placed inside the scanner bore and driven by the actuation box settled at the end of the table through a beaded chain transmission. This novel remote actuation design could reduce the weight and profile of the needle driver that is mounted on the patient, as well as minimize the potential imaging noise introduced by the actuation electronics. The actuation box is designed to perform needle intervention in both manual and motorized fashion by utilizing a mode switch mechanism. A mechanical hard stop is also incorporated to improve the device's safety. The bench-top accuracy evaluation of the device demonstrated a small mean needle placement error < 1 mm) in a phantom study.\n
\n\n\n
\n\n\n
Robotic system for MRI-guided shoulder arthrography: Accuracy evaluation.
Patel, N. A.; Azimi, E.; Monfaredi, R.; Sharma, K.; Cleary, K.; and Iordachita, I.
In 2018 International Symposium on Medical Robotics, ISMR 2018, pages 1–6, 2018. IEEE.

@inproceedings{patel2018robotic,
  abstract = {This paper introduces a body-mounted robotic system for MRI-guided shoulder arthrography in pediatric patients. The robotic manipulator is optimized to be accurate yet light enough to perform both the contrast agent injection and the joint examination imaging inside the MRI bore. It has 4 degrees of freedom (DOF), providing an accurate insertion trajectory for the injection needle. In conventional shoulder arthrography, contrast agent is injected under fluoroscopic guidance, resulting in radiation exposure that should be avoided in pediatric patients; MRI images are then typically acquired for joint examination, making it a two-stage procedure. The presented system allows clinicians to perform both the contrast agent injection and the joint examination under MRI guidance, eliminating both the radiation exposure of fluoroscopic guidance and the patient transfer from the X-ray/CT room to the MRI suite. The system contains no ferrous components and is considered MR-Conditional. A bench-top accuracy evaluation of the robotic manipulator shows an average pose error of 1.22 mm in position and 1 degree in orientation at the needle tip.},
  author = {Patel, Niravkumar A. and Azimi, Ehsan and Monfaredi, Reza and Sharma, Karun and Cleary, Kevin and Iordachita, Iulian},
  booktitle = {2018 International Symposium on Medical Robotics, ISMR 2018},
  doi = {10.1109/ISMR.2018.8333299},
  isbn = {9781538625125},
  organization = {IEEE},
  pages = {1--6},
  title = {{Robotic system for MRI-guided shoulder arthrography: Accuracy evaluation}},
  url = {https://doi.org/10.1109/ISMR.2018.8333299},
  volume = {2018-Janua},
  year = {2018}
}

This paper introduces a body-mounted robotic system for MRI-guided shoulder arthrography in pediatric patients. The robotic manipulator is optimized to be accurate yet light enough to perform both the contrast agent injection and the joint examination imaging inside the MRI bore. It has 4 degrees of freedom (DOF), providing an accurate insertion trajectory for the injection needle. In conventional shoulder arthrography, contrast agent is injected under fluoroscopic guidance, resulting in radiation exposure that should be avoided in pediatric patients; MRI images are then typically acquired for joint examination, making it a two-stage procedure. The presented system allows clinicians to perform both the contrast agent injection and the joint examination under MRI guidance, eliminating both the radiation exposure of fluoroscopic guidance and the patient transfer from the X-ray/CT room to the MRI suite. The system contains no ferrous components and is considered MR-Conditional. A bench-top accuracy evaluation of the robotic manipulator shows an average pose error of 1.22 mm in position and 1 degree in orientation at the needle tip.
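The pose error figures quoted above (1.22 mm, 1 degree) are the kind of numbers produced by comparing commanded and measured needle-tip poses. As a minimal illustration of that computation, assuming both poses are available as 4x4 homogeneous transforms (the variable names and example values below are hypothetical, not from the paper):

    import numpy as np

    def pose_error(T_target, T_measured):
        """Translational (mm) and rotational (deg) error between two 4x4 poses."""
        dt = np.linalg.norm(T_target[:3, 3] - T_measured[:3, 3])
        R_err = T_target[:3, :3].T @ T_measured[:3, :3]
        # Angle of the residual rotation, clipped for numerical safety.
        angle = np.degrees(np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0)))
        return dt, angle

    # Example: a measured tip pose 1 mm off along x and rotated 1 degree about z.
    T_cmd = np.eye(4)
    c, s = np.cos(np.radians(1.0)), np.sin(np.radians(1.0))
    T_meas = np.array([[c, -s, 0, 1.0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    print(pose_error(T_cmd, T_meas))  # -> (1.0, ~1.0)

The angular term collapses the residual rotation between the two poses into a single angle, which is how a combined orientation error like "1 degree at the needle tip" is usually reported.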
Real-Time Sclera Force Feedback for Enabling Safe Robot-Assisted Vitreoretinal Surgery.
Ebrahimi, A.; He, C.; Roizenblatt, M.; Patel, N.; Sefati, S.; Gehlbach, P.; and Iordachita, I.
In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 3650–3655, 2018. IEEE.

@inproceedings{ebrahimi2018real,
  abstract = {One of the major yet little recognized challenges in robotic vitreoretinal surgery is the matter of tool forces applied to the sclera. Tissue safety, coordinated tool use, and the interactions between tool tip and shaft forces are little studied. The introduction of robotic assistance has further diminished the surgeon's ability to perceive scleral forces. Microsurgical tools capable of measuring such small forces, integrated with robot manipulators, may therefore improve functionality and safety by providing sclera force feedback to the surgeon. In this paper, using a force-sensing tool, we conducted robot-assisted eye manipulation experiments to evaluate the utility of providing scleral force feedback. The work assesses 1) passive audio feedback and 2) active haptic feedback, and evaluates the impact of these feedback modes on scleral forces in excess of a boundary. The results show that in the presence of passive or active feedback, the duration of the experiment increases, while the duration for which scleral forces exceed a safe threshold decreases.},
  author = {Ebrahimi, Ali and He, Changyan and Roizenblatt, Marina and Patel, Niravkumar and Sefati, Shahriar and Gehlbach, Peter and Iordachita, Iulian},
  booktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},
  doi = {10.1109/EMBC.2018.8513255},
  isbn = {9781538636466},
  issn = {1557170X},
  organization = {IEEE},
  pages = {3650--3655},
  pmid = {30441165},
  title = {{Real-Time Sclera Force Feedback for Enabling Safe Robot-Assisted Vitreoretinal Surgery}},
  url = {https://doi.org/10.1109/EMBC.2018.8513255},
  volume = {2018-July},
  year = {2018}
}

One of the major yet little recognized challenges in robotic vitreoretinal surgery is the matter of tool forces applied to the sclera. Tissue safety, coordinated tool use, and the interactions between tool tip and shaft forces are little studied. The introduction of robotic assistance has further diminished the surgeon's ability to perceive scleral forces. Microsurgical tools capable of measuring such small forces, integrated with robot manipulators, may therefore improve functionality and safety by providing sclera force feedback to the surgeon. In this paper, using a force-sensing tool, we conducted robot-assisted eye manipulation experiments to evaluate the utility of providing scleral force feedback. The work assesses 1) passive audio feedback and 2) active haptic feedback, and evaluates the impact of these feedback modes on scleral forces in excess of a boundary. The results show that in the presence of passive or active feedback, the duration of the experiment increases, while the duration for which scleral forces exceed a safe threshold decreases.
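At its core, the feedback scheme the abstract describes compares a continuously sensed scleral force magnitude against a predefined safety boundary and raises an audio or haptic cue whenever the boundary is exceeded. A minimal sketch of that logic, with an illustrative threshold and a simulated force trace (neither taken from the paper):

    import numpy as np

    SAFE_LIMIT_MN = 120.0  # illustrative safety boundary in milli-Newtons

    def feedback_events(force_trace_mn, limit=SAFE_LIMIT_MN):
        """Yield (sample_index, force) whenever scleral force exceeds the limit."""
        for i, f in enumerate(force_trace_mn):
            if f > limit:
                yield i, f  # a real system would trigger audio/haptic feedback here

    # Simulated 2 s of scleral force samples at 100 Hz with one brief excursion.
    t = np.linspace(0, 2, 200)
    trace = 80 + 60 * np.exp(-((t - 1.2) ** 2) / 0.005)
    over = list(feedback_events(trace))
    print(f"{len(over)} samples above limit; unsafe fraction: {len(over)/len(trace):.1%}")

The "duration for which scleral forces exceed a safe threshold" reported in the paper is essentially this unsafe fraction, accumulated over a whole trial.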
Evaluation of a Force-Sensing Handheld Robot for Assisted Retinal Vein Cannulation.
Gonenc, B.; Patel, N.; and Iordachita, I.
In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 1–5, 2018. IEEE.

@inproceedings{gonenc2018evaluation,
  abstract = {Approximately 16.4 million people are affected by retinal vein occlusion (RVO) resulting from hypercoagulability, low blood flow, or thrombosis in the central or branched retinal veins. Most current treatments for RVO aim to limit the damage. In recent years, an experimental procedure, retinal vein cannulation (RVC), has been studied in animal models as well as human eye models. RVC is a procedure for targeted delivery of a therapeutic agent into the occluded retinal vein to dissolve the thrombi. Although effective treatment has been demonstrated via RVC, performing the procedure manually remains at the limits of human skill. RVC requires precisely inserting a thin cannula into a delicate, thin retinal vein and maintaining it inside the vein throughout the infusion. The needle-vein interaction forces are too small for even an expert surgeon to sense. In this work, we present an evaluation study of a handheld robotic assistant with a force-sensing microneedle for RVC. The system actively cancels hand tremor, detects venous puncture based on the measured tool-tissue forces, and stabilizes the needle after venous puncture for reduced trauma and prolonged infusion. Experiments were performed by cannulating the vasculature in fertilized chicken eggs. Results show 100% success in venous puncture detection and significantly reduced cannula position drift via the stabilization aid of the robotic system.},
  author = {Gonenc, Berk and Patel, Niravkumar and Iordachita, Iulian},
  booktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},
  doi = {10.1109/EMBC.2018.8513304},
  isbn = {9781538636466},
  issn = {1557170X},
  organization = {IEEE},
  pages = {1--5},
  pmid = {30440317},
  title = {{Evaluation of a Force-Sensing Handheld Robot for Assisted Retinal Vein Cannulation}},
  url = {https://doi.org/10.1109/EMBC.2018.8513304},
  volume = {2018-July},
  year = {2018}
}

Approximately 16.4 million people are affected by retinal vein occlusion (RVO) resulting from hypercoagulability, low blood flow, or thrombosis in the central or branched retinal veins. Most current treatments for RVO aim to limit the damage. In recent years, an experimental procedure, retinal vein cannulation (RVC), has been studied in animal models as well as human eye models. RVC is a procedure for targeted delivery of a therapeutic agent into the occluded retinal vein to dissolve the thrombi. Although effective treatment has been demonstrated via RVC, performing the procedure manually remains at the limits of human skill. RVC requires precisely inserting a thin cannula into a delicate, thin retinal vein and maintaining it inside the vein throughout the infusion. The needle-vein interaction forces are too small for even an expert surgeon to sense. In this work, we present an evaluation study of a handheld robotic assistant with a force-sensing microneedle for RVC. The system actively cancels hand tremor, detects venous puncture based on the measured tool-tissue forces, and stabilizes the needle after venous puncture for reduced trauma and prolonged infusion. Experiments were performed by cannulating the vasculature in fertilized chicken eggs. Results show 100% success in venous puncture detection and significantly reduced cannula position drift via the stabilization aid of the robotic system.
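Force-based venous puncture detection is commonly framed as spotting the sudden force relaxation that follows the peak at vessel-wall penetration. The sketch below illustrates that pattern on a synthetic trace; the thresholds are illustrative assumptions, and the paper's actual detector may differ:

    import numpy as np

    def detect_puncture(forces_mn, drop_fraction=0.3, min_peak_mn=5.0):
        """Return the sample index where force falls by `drop_fraction` of the
        running peak (after the peak exceeded `min_peak_mn`), else None."""
        peak = 0.0
        for i, f in enumerate(forces_mn):
            peak = max(peak, f)
            if peak > min_peak_mn and f < (1.0 - drop_fraction) * peak:
                return i  # sudden relaxation -> cannula likely entered the vein
        return None

    # Synthetic trace: force ramps up against the vein wall, then drops at puncture.
    ramp = np.linspace(0, 8, 80)   # loading phase, mN
    post = np.full(20, 3.0)        # relaxed force after puncture
    print(detect_puncture(np.concatenate([ramp, post])))  # -> 80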
User Behavior Evaluation in Robot-Assisted Retinal Surgery.
He, C.; Ebrahimi, A.; Roizenblatt, M.; Patel, N.; Yang, Y.; Gehlbach, P. L.; and Iordachita, I.
In RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication, pages 174–179, 2018. IEEE.

@inproceedings{he2018user,
  abstract = {Retinal microsurgery is technically demanding and requires high surgical skill with very little room for manipulation error. The introduction of robotic assistance has the potential to enhance and expand a surgeon's manipulation capabilities during retinal surgery, i.e., improve precision, cancel physiological hand tremor, and provide sensing information. However, surgeon performance may also be negatively impacted by robotic assistance due to robot structural stiffness and nonintuitive controls. In complying with robotic constraints, the surgeon loses the dexterity of the human hand. In this paper, we present a preliminary experimental study to evaluate user behavior when affected by robotic assistance during mock retinal surgery. In these experiments, user behavior is characterized by measuring the forces applied by the user to the sclera, the tool insertion/retraction speed, the tool insertion depth relative to the scleral entry point, and the duration of surgery. The user behavior data are collected during three mock retinal surgery tasks with four users. Each task is conducted using both freehand and robot-assisted techniques. The univariate user behavior and the correlations among multiple parameters of user behavior are analyzed. The results show that robot assistance prolongs the duration of the surgery and increases the manipulation forces applied to the sclera, but refines the insertion velocity and eliminates hand tremor.},
  author = {He, Changyan and Ebrahimi, Ali and Roizenblatt, Marina and Patel, Niravkumar and Yang, Yang and Gehlbach, Peter L. and Iordachita, Iulian},
  booktitle = {RO-MAN 2018 - 27th IEEE International Symposium on Robot and Human Interactive Communication},
  doi = {10.1109/ROMAN.2018.8525638},
  isbn = {9781538679807},
  organization = {IEEE},
  pages = {174--179},
  title = {{User Behavior Evaluation in Robot-Assisted Retinal Surgery}},
  year = {2018}
}

Retinal microsurgery is technically demanding and requires high surgical skill with very little room for manipulation error. The introduction of robotic assistance has the potential to enhance and expand a surgeon's manipulation capabilities during retinal surgery, i.e., improve precision, cancel physiological hand tremor, and provide sensing information. However, surgeon performance may also be negatively impacted by robotic assistance due to robot structural stiffness and nonintuitive controls. In complying with robotic constraints, the surgeon loses the dexterity of the human hand. In this paper, we present a preliminary experimental study to evaluate user behavior when affected by robotic assistance during mock retinal surgery. In these experiments, user behavior is characterized by measuring the forces applied by the user to the sclera, the tool insertion/retraction speed, the tool insertion depth relative to the scleral entry point, and the duration of surgery. The user behavior data are collected during three mock retinal surgery tasks with four users. Each task is conducted using both freehand and robot-assisted techniques. The univariate user behavior and the correlations among multiple parameters of user behavior are analyzed. The results show that robot assistance prolongs the duration of the surgery and increases the manipulation forces applied to the sclera, but refines the insertion velocity and eliminates hand tremor.
Body-Mounted Robot for Image-Guided Percutaneous Interventions: Mechanical Design and Preliminary Accuracy Evaluation.
Patel, N. A.; Yan, J.; Levi, D.; Monfaredi, R.; Cleary, K.; and Iordachita, I.
In IEEE International Conference on Intelligent Robots and Systems, pages 1443–1448, 2018. IEEE.

@inproceedings{patel2018body,
  abstract = {This paper presents a body-mounted, four-degree-of-freedom (4-DOF) parallel mechanism robot for image-guided percutaneous interventions. The design of the robot is optimized to be lightweight and compact so that it can be mounted on the patient's body. It has a modular design that can be adapted to assist various image-guided, needle-based percutaneous interventions such as arthrography, biopsy, and brachytherapy seed placement. The robot mechanism and the control system are designed and manufactured with components compatible with imaging modalities including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). The version of the robot presented in this paper is optimized for shoulder arthrography under MRI guidance; a Z-shaped fiducial frame is attached to the robot, providing accurate and repeatable robot registration with the MR scanner coordinate system. Here we present the mechanical design of the manipulator, the robot kinematics, the robot calibration procedure, and a preliminary bench-top accuracy assessment. The bench-top evaluation of the robotic manipulator shows an average translational error of 1.01 mm and 0.96 mm in the X and Z axes, respectively, and an average rotational error of 3.06 degrees and 2.07 degrees about the X and Z axes, respectively.},
  author = {Patel, Niravkumar A. and Yan, Jiawen and Levi, David and Monfaredi, Reza and Cleary, Kevin and Iordachita, Iulian},
  booktitle = {IEEE International Conference on Intelligent Robots and Systems},
  doi = {10.1109/IROS.2018.8593807},
  isbn = {9781538680940},
  issn = {21530866},
  organization = {IEEE},
  pages = {1443--1448},
  title = {{Body-Mounted Robot for Image-Guided Percutaneous Interventions: Mechanical Design and Preliminary Accuracy Evaluation}},
  year = {2018}
}

This paper presents a body-mounted, four-degree-of-freedom (4-DOF) parallel mechanism robot for image-guided percutaneous interventions. The design of the robot is optimized to be lightweight and compact so that it can be mounted on the patient's body. It has a modular design that can be adapted to assist various image-guided, needle-based percutaneous interventions such as arthrography, biopsy, and brachytherapy seed placement. The robot mechanism and the control system are designed and manufactured with components compatible with imaging modalities including Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). The version of the robot presented in this paper is optimized for shoulder arthrography under MRI guidance; a Z-shaped fiducial frame is attached to the robot, providing accurate and repeatable robot registration with the MR scanner coordinate system. Here we present the mechanical design of the manipulator, the robot kinematics, the robot calibration procedure, and a preliminary bench-top accuracy assessment. The bench-top evaluation of the robotic manipulator shows an average translational error of 1.01 mm and 0.96 mm in the X and Z axes, respectively, and an average rotational error of 3.06 degrees and 2.07 degrees about the X and Z axes, respectively.
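The fiducial-based registration step mentioned above is, at its simplest, a rigid alignment between fiducial coordinates known in the robot frame and the same fiducials localized in the MR images. Below is a minimal Kabsch/SVD sketch of such an alignment (the example coordinates are made up; a Z-frame additionally recovers pose from the rod cross-sections in a single image, which is more involved):

    import numpy as np

    def rigid_register(P_robot, P_scanner):
        """Least-squares rigid transform (R, t) with P_scanner ≈ R @ P_robot + t."""
        cp, cq = P_robot.mean(axis=0), P_scanner.mean(axis=0)
        H = (P_robot - cp).T @ (P_scanner - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    # Toy fiducial points in the robot frame and their MR-localized counterparts.
    P = np.array([[0, 0, 0], [60, 0, 0], [60, 40, 0], [0, 40, 20]], float)
    a = np.radians(10)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([5.0, -3.0, 12.0])
    R, t = rigid_register(P, Q)
    print(np.allclose(R, R_true), np.round(t, 3))  # -> True [  5.  -3.  12.]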
Towards Bimanual Robot-Assisted Retinal Surgery: Tool-to-Sclera Force Evaluation.
He, C.; Roizenblatt, M.; Patel, N.; Ebrahimi, A.; Yang, Y.; Gehlbach, P. L.; and Iordachita, I.
In Proceedings of IEEE Sensors, pages 1–4, 2018. IEEE.

@inproceedings{he2018towards,
  abstract = {The performance of retinal microsurgery often requires the coordinated use of both hands. During bimanual retinal surgery, dominant-hand performance may be negatively impacted by poor non-dominant-hand assistance. Therefore, understanding the latent determinants of bimanual performance and establishing safety criteria for bimanual manipulation are relevant to robotic development and to eventual patient care. In this paper, we present a preliminary study to quantitatively evaluate one aspect of bimanual tool use in retinal surgery. Two force-sensing tools were designed and fabricated using fiber Bragg grating sensors. Tool-to-sclera contact force was measured with the developed tools and analyzed. Tool forces were recorded during five basic surgical maneuvers typical of retinal surgery. Two subjects were involved in the experiments: one clinician and one engineer. For comparison, all manipulations were replicated under robot-assisted conditions. The results indicate that the average tool-to-sclera force recorded from the dominant-hand tool is significantly higher than that from the non-dominant-hand tool (p = 0.004). Moreover, the average forces under robot-assisted conditions with the present steady-hand robot are notably higher than under freehand conditions (p = 0.01). The forces obtained from the dominant- and non-dominant-hand instruments show only a weak correlation.},
  author = {He, Changyan and Roizenblatt, Marina and Patel, Niravkumar and Ebrahimi, Ali and Yang, Yang and Gehlbach, Peter L. and Iordachita, Iulian},
  booktitle = {Proceedings of IEEE Sensors},
  doi = {10.1109/ICSENS.2018.8589810},
  isbn = {9781538647073},
  issn = {21689229},
  keywords = {bimanual manipulation, robot-assisted retinal surgery, tool-to-sclera force},
  organization = {IEEE},
  pages = {1--4},
  title = {{Towards Bimanual Robot-Assisted Retinal Surgery: Tool-to-Sclera Force Evaluation}},
  volume = {2018-Octob},
  year = {2018}
}

The performance of retinal microsurgery often requires the coordinated use of both hands. During bimanual retinal surgery, dominant-hand performance may be negatively impacted by poor non-dominant-hand assistance. Therefore, understanding the latent determinants of bimanual performance and establishing safety criteria for bimanual manipulation are relevant to robotic development and to eventual patient care. In this paper, we present a preliminary study to quantitatively evaluate one aspect of bimanual tool use in retinal surgery. Two force-sensing tools were designed and fabricated using fiber Bragg grating sensors. Tool-to-sclera contact force was measured with the developed tools and analyzed. Tool forces were recorded during five basic surgical maneuvers typical of retinal surgery. Two subjects were involved in the experiments: one clinician and one engineer. For comparison, all manipulations were replicated under robot-assisted conditions. The results indicate that the average tool-to-sclera force recorded from the dominant-hand tool is significantly higher than that from the non-dominant-hand tool (p = 0.004). Moreover, the average forces under robot-assisted conditions with the present steady-hand robot are notably higher than under freehand conditions (p = 0.01). The forces obtained from the dominant- and non-dominant-hand instruments show only a weak correlation.
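Tools instrumented with fiber Bragg gratings, as used here, recover force from wavelength shifts. One common recipe is a linear least-squares calibration against known loads, with the common mode across gratings subtracted to reject temperature drift. The following is a hedged sketch of that recipe on synthetic data, under assumed matrix shapes; it is not the paper's published calibration model:

    import numpy as np

    def fit_calibration(dlambda, forces):
        """Least-squares K such that forces ≈ dlambda_centered @ K.
        dlambda: (n_samples, n_fbg) wavelength shifts in nm
        forces:  (n_samples, n_axes) reference forces in mN."""
        d = dlambda - dlambda.mean(axis=1, keepdims=True)  # common-mode (thermal) rejection
        K, *_ = np.linalg.lstsq(d, forces, rcond=None)
        return K

    def force_from_fbg(K, dlambda_sample):
        d = dlambda_sample - dlambda_sample.mean()
        return d @ K

    # Toy calibration: 3 FBGs, planar (x, y) force, synthetic linear response.
    rng = np.random.default_rng(0)
    F = rng.uniform(-100, 100, size=(50, 2))                           # mN
    G = np.array([[0.002, 0.0], [-0.001, 0.0017], [-0.001, -0.0017]])  # nm per mN
    dl = F @ G.T + 0.01 * rng.standard_normal((50, 1))                 # shared thermal drift
    K = fit_calibration(dl, F)
    print(np.round(force_from_fbg(K, dl[0]), 1), np.round(F[0], 1))    # should match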
In-Bore Experimental Validation of Active Compensation and Membrane Puncture Detection for Targeted MRI-Guided Robotic Prostate Biopsy.
Wartenberg, M.; Gandomi, K.; Carvalho, P.; Schornak, J.; Patel, N.; Iordachita, I.; Tempany, C.; Hata, N.; Tokuda, J.; and Fischer, G. S.
In International Symposium on Experimental Robotics, pages 34–44, 2020. Springer, Cham.

@inproceedings{wartenberg2018bore,
  abstract = {It is estimated that in the United States there will be 164,690 new cases and 29,430 deaths from prostate cancer in 2018 [1]. Trans-Rectal Ultrasound (TRUS) has typically been used to facilitate sampling of up to twenty biopsy cores, but due to variable prostate size this technique often still misses clinically significant cancers [2]. Instead, MRI provides higher image quality and multiparametric imaging, allowing for procedures with fewer needle insertions via direct targeting of suspicious lesions.},
  author = {Wartenberg, Marek and Gandomi, Katie and Carvalho, Paulo and Schornak, Joseph and Patel, Niravkumar and Iordachita, Iulian and Tempany, Clare and Hata, Nobuhiko and Tokuda, Junichi and Fischer, Gregory S.},
  booktitle = {International Symposium on Experimental Robotics},
  doi = {10.1007/978-3-030-33950-0_4},
  issn = {25111264},
  organization = {Springer, Cham},
  pages = {34--44},
  title = {{In-Bore Experimental Validation of Active Compensation and Membrane Puncture Detection for Targeted MRI-Guided Robotic Prostate Biopsy}},
  year = {2020}
}

It is estimated that in the United States there will be 164,690 new cases and 29,430 deaths from prostate cancer in 2018 [1]. Trans-Rectal Ultrasound (TRUS) has typically been used to facilitate sampling of up to twenty biopsy cores, but due to variable prostate size this technique often still misses clinically significant cancers [2]. Instead, MRI provides higher image quality and multiparametric imaging, allowing for procedures with fewer needle insertions via direct targeting of suspicious lesions.
Mechanical validation of an MRI compatible stereotactic neurosurgery robot in preparation for pre-clinical trials.
Nycz, C. J.; Gondokaryono, R.; Carvalho, P.; Patel, N.; Wartenberg, M.; Pilitsis, J. G.; and Fischer, G. S.
In IEEE International Conference on Intelligent Robots and Systems, pages 1677–1684, 2017. IEEE.

@inproceedings{nycz2017mechanical,
  abstract = {The use of magnetic resonance imaging (MRI) for guiding robotic surgical devices has shown great potential for performing precisely targeted and controlled interventions. To fully realize these benefits, devices must work safely within the tight confines of the MRI bore without negatively impacting image quality. Here we expand on previous work exploring MRI-guided robots for neural interventions by presenting the mechanical design and assessment of a device for positioning, orienting, and inserting an interstitial ultrasound-based ablation probe. Building on our previous work, we have added a 2-degree-of-freedom (DOF) needle driver for use with the aforementioned probe, revised the mechanical design to improve strength and function, and evaluated the mechanism's accuracy and its effect on MR image quality. The result is a 7-DOF MRI robot capable of positioning a needle tip and orienting its axis with an accuracy of 1.37 ± 0.06 mm and 0.79° ± 0.41°, inserting the needle along its axis with an accuracy of 0.06 ± 0.07 mm, and rotating it about its axis to an accuracy of 0.77° ± 1.31°. This was accomplished with no significant reduction in SNR caused by the robot's presence in the MRI bore, a < 10.3% reduction in SNR from running the robot's motors during a scan, and no visible paramagnetic artifacts.},
  author = {Nycz, Christopher J. and Gondokaryono, Radian and Carvalho, Paulo and Patel, Nirav and Wartenberg, Marek and Pilitsis, Julie G. and Fischer, Gregory S.},
  booktitle = {IEEE International Conference on Intelligent Robots and Systems},
  doi = {10.1109/IROS.2017.8205979},
  isbn = {9781538626825},
  issn = {21530866},
  organization = {IEEE},
  pages = {1677--1684},
  title = {{Mechanical validation of an MRI compatible stereotactic neurosurgery robot in preparation for pre-clinical trials}},
  url = {https://doi.org/10.1109/IROS.2017.8205979},
  volume = {2017-Septe},
  year = {2017}
}

The use of magnetic resonance imaging (MRI) for guiding robotic surgical devices has shown great potential for performing precisely targeted and controlled interventions. To fully realize these benefits, devices must work safely within the tight confines of the MRI bore without negatively impacting image quality. Here we expand on previous work exploring MRI-guided robots for neural interventions by presenting the mechanical design and assessment of a device for positioning, orienting, and inserting an interstitial ultrasound-based ablation probe. Building on our previous work, we have added a 2-degree-of-freedom (DOF) needle driver for use with the aforementioned probe, revised the mechanical design to improve strength and function, and evaluated the mechanism's accuracy and its effect on MR image quality. The result is a 7-DOF MRI robot capable of positioning a needle tip and orienting its axis with an accuracy of 1.37 ± 0.06 mm and 0.79° ± 0.41°, inserting the needle along its axis with an accuracy of 0.06 ± 0.07 mm, and rotating it about its axis to an accuracy of 0.77° ± 1.31°. This was accomplished with no significant reduction in SNR caused by the robot's presence in the MRI bore, a < 10.3% reduction in SNR from running the robot's motors during a scan, and no visible paramagnetic artifacts.
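The SNR comparison reported above is typically made by scanning a phantom with and without the robot present (or with its motors running) and computing SNR from a signal region and a background-noise region of each image. A minimal sketch of that measurement on synthetic images (the ROI choices and noise levels are illustrative):

    import numpy as np

    def snr(image, signal_roi, noise_roi):
        """Mean signal over background-noise std, a common MR-compatibility metric."""
        return image[signal_roi].mean() / image[noise_roi].std()

    rng = np.random.default_rng(1)
    base = np.zeros((128, 128)); base[32:96, 32:96] = 1000.0  # phantom "signal"
    baseline  = base + rng.normal(0, 10.0, base.shape)        # robot absent
    motors_on = base + rng.normal(0, 11.0, base.shape)        # slightly noisier scan

    roi_sig = (slice(48, 80), slice(48, 80))
    roi_noise = (slice(0, 16), slice(0, 16))
    s0 = snr(baseline, roi_sig, roi_noise)
    s1 = snr(motors_on, roi_sig, roi_noise)
    print(f"SNR drop: {100 * (s0 - s1) / s0:.1f}%")  # ~9% for this synthetic case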
Closed-loop Autonomous Needle Steering during Cooperatively Controlled Needle Insertions for MRI-guided Pelvic Interventions.
Wartenberg, M.; Schornak, J.; Carvalho, P.; Patel, N.; Iordachita, I.; Tempany, C.; Hata, N.; Tokuda, J.; and Fischer, G. S.
In The 10th Hamlyn Symposium on Medical Robotics, pages 33–34, 2017.

@inproceedings{wartenberg2017closed,
  author = {Wartenberg, M. and Schornak, J. and Carvalho, P. and Patel, N. and Iordachita, I. and Tempany, C. and Hata, N. and Tokuda, J. and Fischer, G. S.},
  booktitle = {The 10th Hamlyn Symposium on Medical Robotics},
  doi = {10.31256/hsmr2017.17},
  pages = {33--34},
  title = {{Closed-loop Autonomous Needle Steering during Cooperatively Controlled Needle Insertions for MRI-guided Pelvic Interventions}},
  year = {2017}
}
Towards synergistic control of hands-on needle insertion with automated needle steering for MRI-guided prostate interventions.
Wartenberg, M.; Patel, N.; Li, G.; and Fischer, G. S.
In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 5116–5119, 2016. IEEE.

@inproceedings{wartenberg2016towards,
  abstract = {A significant hurdle to accurate needle tip placement in percutaneous needle-based prostate interventions is unmodeled needle deflection and tissue deformation during insertion. This paper introduces a robotic platform for developing synergistic, cooperatively controlled needle insertion algorithms decoupled from closed-loop image-guided needle steering. Shared control of the surgical workspace through human-robot synergy balances the accuracy of robotic autonomy with ultimate physician control of the procedure. Validation tests were performed using camera-based, image-guided feedback control of needle steering with cooperative hands-on needle insertion. Locations were targeted inside a transparent gelatin phantom with an average total error of 2.68 ± 0.34 mm and an in-plane error of 2.59 ± 0.30 mm.},
  author = {Wartenberg, Marek and Patel, Niravkumar and Li, Gang and Fischer, Gregory S.},
  booktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},
  doi = {10.1109/EMBC.2016.7591878},
  isbn = {9781457702204},
  issn = {1557170X},
  organization = {IEEE},
  pages = {5116--5119},
  pmid = {28269418},
  title = {{Towards synergistic control of hands-on needle insertion with automated needle steering for MRI-guided prostate interventions}},
  url = {https://doi.org/10.1109/EMBC.2016.7591878},
  volume = {2016-Octob},
  year = {2016}
}

A significant hurdle to accurate needle tip placement in percutaneous needle-based prostate interventions is unmodeled needle deflection and tissue deformation during insertion. This paper introduces a robotic platform for developing synergistic, cooperatively controlled needle insertion algorithms decoupled from closed-loop image-guided needle steering. Shared control of the surgical workspace through human-robot synergy balances the accuracy of robotic autonomy with ultimate physician control of the procedure. Validation tests were performed using camera-based, image-guided feedback control of needle steering with cooperative hands-on needle insertion. Locations were targeted inside a transparent gelatin phantom with an average total error of 2.68 ± 0.34 mm and an in-plane error of 2.59 ± 0.30 mm.
Closed-loop asymmetric-tip needle steering under continuous intraoperative MRI guidance.
Patel, N. A.; Van Katwijk, T.; Li, G.; Moreira, P.; Shang, W.; Misra, S.; and Fischer, G. S.
In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, pages 4869–4874, 2015. IEEE.

@inproceedings{patel2015closed,
  abstract = {Magnetic resonance imaging (MRI) provides excellent image contrast for various types of tissues, making it a suitable choice over other imaging modalities for various image-guided needle interventions. Furthermore, robot assistance is maturing for surgical procedures such as percutaneous prostate and brain interventions. Although MRI-guided, robot-assisted needle interventions are approaching clinical usage, they are still typically open-loop in nature due to the lack of continuous intraoperative needle tracking. Closed-loop needle-based procedures can improve the accuracy of needle tip placement by correcting the needle trajectory during insertion. This paper proposes a system for robot-assisted, flexible, asymmetric-tipped needle interventions under continuous intraoperative MRI guidance. A flexible needle's insertion depth and rotation angle are manipulated by an MRI-compatible robot in the bore of the MRI scanner during continuous multi-planar image acquisition to reach a desired target location. Experiments were performed on gelatin phantoms to assess the accuracy of needle placement at the target location. The system successfully utilized live MR imaging to guide the path of the needle, and results show an average total targeting error of 2.5 ± 0.47 mm, with an average in-plane error of 2.09 ± 0.33 mm.},
  author = {Patel, Niravkumar A. and {Van Katwijk}, Tim and Li, Gang and Moreira, Pedro and Shang, Weijian and Misra, Sarthak and Fischer, Gregory S.},
  booktitle = {Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS},
  doi = {10.1109/EMBC.2015.7319484},
  isbn = {9781424492718},
  issn = {1557170X},
  organization = {IEEE},
  pages = {4869--4874},
  pmid = {26737384},
  title = {{Closed-loop asymmetric-tip needle steering under continuous intraoperative MRI guidance}},
  url = {https://doi.org/10.1109/EMBC.2015.7319484},
  volume = {2015-Novem},
  year = {2015}
}

Magnetic resonance imaging (MRI) provides excellent image contrast for various types of tissues, making it a suitable choice over other imaging modalities for various image-guided needle interventions. Furthermore, robot assistance is maturing for surgical procedures such as percutaneous prostate and brain interventions. Although MRI-guided, robot-assisted needle interventions are approaching clinical usage, they are still typically open-loop in nature due to the lack of continuous intraoperative needle tracking. Closed-loop needle-based procedures can improve the accuracy of needle tip placement by correcting the needle trajectory during insertion. This paper proposes a system for robot-assisted, flexible, asymmetric-tipped needle interventions under continuous intraoperative MRI guidance. A flexible needle's insertion depth and rotation angle are manipulated by an MRI-compatible robot in the bore of the MRI scanner during continuous multi-planar image acquisition to reach a desired target location. Experiments were performed on gelatin phantoms to assess the accuracy of needle placement at the target location. The system successfully utilized live MR imaging to guide the path of the needle, and results show an average total targeting error of 2.5 ± 0.47 mm, with an average in-plane error of 2.09 ± 0.33 mm.
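An asymmetric (bevel) tip makes a flexible needle curve toward the bevel face during insertion, so a closed-loop steering controller chiefly decides how to rotate the needle about its axis so that the bevel faces the remaining tip-to-target error seen in the latest images. A simplified sketch of that one decision (a plain bevel-orientation rule; the paper's controller and any duty-cycling logic may differ):

    import numpy as np

    def bevel_command(tip_xyz, target_xyz):
        """Rotation angle (rad) about the needle axis so the bevel-induced
        curvature steers toward the target; insertion assumed along +z."""
        err = np.asarray(target_xyz, float) - np.asarray(tip_xyz, float)
        # Project the error into the plane normal to the insertion axis.
        return np.arctan2(err[1], err[0]), np.linalg.norm(err[:2])

    # One control step: tip at origin, target 3 mm off-axis and 20 mm deeper.
    angle, lateral_err = bevel_command([0, 0, 0], [-2.0, 2.2, 20.0])
    print(f"rotate needle to {np.degrees(angle):.1f} deg; lateral error {lateral_err:.2f} mm")

In a continuous-imaging loop, this command would be recomputed as each new multi-planar slice updates the tip and target estimates.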
Two legged robot design, simulation and realization.
Patel, N. A.; Pradhan, S. N.; and Shah, K. D.
In ICARA 2009 - Proceedings of the 4th International Conference on Autonomous Robots and Agents, pages 426–429, 2009. IEEE.

@inproceedings{patel2009two,
  abstract = {This paper introduces a systematic approach to designing and realizing a two-legged robot. The design and system configuration of the robot are explained in detail. The development process is divided into three phases: (1) design, (2) verification, and (3) realization, while the robot's architecture consists of three subsystems: (1) a mechanical subsystem, (2) an electronics subsystem, and (3) a software subsystem. Each of these subsystems is explained in detail. During the design phase, the mechanical and electronics subsystems were designed. The mechanical design focuses on identifying the actuation mechanism and the material to be used for the links; stepper motors were chosen as actuators to reduce control complexity. The electronics design focuses on identifying an appropriate mechanism to control each actuator and on providing an interface to the software subsystem; the A3982 driver from Allegro MicroSystems was selected to control each stepper motor, and the Atmel AT89S52 microcontroller was selected to control the motor drivers. In the verification phase, each subsystem was verified against the desired requirements and performance using a simulator (part of the software subsystem). Simulation results confirm that the robot will be able to walk. Realization of the physical robot is in progress. {\textcopyright}2009 IEEE.},
  author = {Patel, Nirav A. and Pradhan, S. N. and Shah, K. D.},
  booktitle = {ICARA 2009 - Proceedings of the 4th International Conference on Autonomous Robots and Agents},
  doi = {10.1109/ICARA.2000.4803964},
  isbn = {9781424427130},
  organization = {IEEE},
  pages = {426--429},
  title = {{Two legged robot design, simulation and realization}},
  url = {https://doi.org/10.1109/ICARA.2000.4803964},
  year = {2009}
}

This paper introduces a systematic approach to designing and realizing a two-legged robot. The design and system configuration of the robot are explained in detail. The development process is divided into three phases: (1) design, (2) verification, and (3) realization, while the robot's architecture consists of three subsystems: (1) a mechanical subsystem, (2) an electronics subsystem, and (3) a software subsystem. Each of these subsystems is explained in detail. During the design phase, the mechanical and electronics subsystems were designed. The mechanical design focuses on identifying the actuation mechanism and the material to be used for the links; stepper motors were chosen as actuators to reduce control complexity. The electronics design focuses on identifying an appropriate mechanism to control each actuator and on providing an interface to the software subsystem; the A3982 driver from Allegro MicroSystems was selected to control each stepper motor, and the Atmel AT89S52 microcontroller was selected to control the motor drivers. In the verification phase, each subsystem was verified against the desired requirements and performance using a simulator (part of the software subsystem). Simulation results confirm that the robot will be able to walk. Realization of the physical robot is in progress. ©2009 IEEE.
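In software, driving a joint through a step/direction stepper driver such as the A3982 reduces to converting a desired rotation into a signed step count and emitting that many pulses. A sketch of that conversion (the step angle, gearing, and pulse hook are illustrative assumptions, not values from the paper):

    STEP_ANGLE_DEG = 1.8   # typical full-step angle of a hybrid stepper
    GEAR_RATIO = 5.0       # illustrative joint gearing

    def steps_for(delta_joint_deg):
        """Signed full-step count for a desired joint rotation."""
        motor_deg = delta_joint_deg * GEAR_RATIO
        return round(motor_deg / STEP_ANGLE_DEG)

    def move_joint(delta_joint_deg, pulse):
        """Emit STEP pulses via the user-supplied `pulse` hook, DIR held constant."""
        n = steps_for(delta_joint_deg)
        direction = n >= 0
        for _ in range(abs(n)):
            pulse(direction)  # one rising edge on STEP per motor step

    sent = []
    move_joint(10.0, lambda d: sent.append(d))
    print(len(sent), "steps,", "forward" if sent and sent[0] else "reverse")
    # 10 deg joint * 5 gearing / 1.8 deg per step -> 28 steps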