Deep-LfD: Deep robot learning from demonstrations. Esfahani, A. G., Sasikolomi, K. N., Hashempour, H., & Zhong, F. Software Impacts, 9:100087, Elsevier, August, 2021.
Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. Unlike conventional LfD, however, the deep-LfD model learns the relation between high-dimensional visual sensory information and the robot trajectory/path. This paper presents a dataset of successful needle insertions into deformable objects performed by a da Vinci Research Kit, on which several deep-LfD models are trained as a benchmark for models that learn a robot controller for the needle-insertion task.
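
To make the setup concrete, below is a minimal sketch of the kind of vision-to-trajectory model the abstract describes, not the paper's implementation: a CNN encoder maps a camera frame to features, and an MLP head regresses a fixed-horizon robot trajectory, trained by regression against demonstrated trajectories. The class name, layer sizes, horizon, and waypoint parameterization are all illustrative assumptions.

# A minimal deep-LfD-style sketch (illustrative, not the authors' code):
# CNN encoder on the image, MLP head regressing trajectory waypoints.
import torch
import torch.nn as nn

class VisionToTrajectory(nn.Module):  # hypothetical class name
    def __init__(self, horizon: int = 10, dof: int = 7):
        super().__init__()
        # Convolutional encoder for the high-dimensional visual input.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP head regresses `horizon` waypoints of `dof` joint values each.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, horizon * dof),
        )
        self.horizon, self.dof = horizon, dof

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(image)                    # (B, 64)
        traj = self.head(feats)                        # (B, horizon*dof)
        return traj.view(-1, self.horizon, self.dof)   # (B, horizon, dof)

# One training step: regress demonstrated trajectories from images.
model = VisionToTrajectory()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 3, 128, 128)   # placeholder camera frames
demos = torch.randn(8, 10, 7)          # placeholder demonstrated waypoints
loss = nn.functional.mse_loss(model(images), demos)
optimizer.zero_grad(); loss.backward(); optimizer.step()

The key design choice this illustrates is end-to-end regression from raw pixels to a trajectory, as opposed to conventional LfD pipelines that fit a trajectory model to low-dimensional state features.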
@article{lincoln45212,
          volume = {9},
           month = {August},
          author = {Amir Ghalamzan Esfahani and Kiyanoush Nazari Sasikolomi and Hamidreza Hashempour and Fangxun Zhong},
           title = {Deep-LfD: Deep robot learning from demonstrations},
       publisher = {Elsevier},
            year = {2021},
         journal = {Software Impacts},
             doi = {10.1016/j.simpa.2021.100087},
           pages = {100087},
             url = {https://eprints.lincoln.ac.uk/id/eprint/45212/},
         abstract = {Like other robot learning from demonstration (LfD) approaches, deep-LfD builds a task model from sample demonstrations. Unlike conventional LfD, however, the deep-LfD model learns the relation between high-dimensional visual sensory information and the robot trajectory/path. This paper presents a dataset of successful needle insertions into deformable objects performed by a da Vinci Research Kit, on which several deep-LfD models are trained as a benchmark for models that learn a robot controller for the needle-insertion task.}
}
