Integrated vision-based system for efficient, semi-automated control of a robotic manipulator. Jiang, H., Wachs, J. P., & Duerstock, B. S. International Journal of Intelligent Computing and Cybernetics, 7(3):253-266, 2014.
@article{Jiang2014b,
 title = {Integrated vision-based system for efficient, semi-automated control of a robotic manipulator},
 type = {article},
 year = {2014},
 keywords = {Gesture recognition,Object recognition,Spinal cord injuries,Wheelchair-mounted robotic arm},
 pages = {253-266},
 volume = {7},
 id = {913aaf2f-6fb4-35bc-afce-5252f1277abb},
 created = {2021-06-04T19:36:49.862Z},
 file_attached = {false},
 profile_id = {f6c02e5e-2d2f-3786-8fa8-871d32fc2b9b},
 last_modified = {2021-06-07T19:16:59.985Z},
 read = {false},
 starred = {false},
 authored = {true},
 confirmed = {true},
 hidden = {false},
 citation_key = {Jiang2014b},
 folder_uuids = {b43d1b86-b425-4322-b575-14547700e015},
 private_publication = {false},
 abstract = {Purpose – The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system was developed specifically for individuals with upper-level spinal cord injuries, incorporating object tracking and face recognition to function as an efficient, hands-free WMRM controller. Design/methodology/approach – Two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret the hand gestures and locate the operator’s face for object positioning, and then send those as commands to control the WMRM. The other sensor was used to automatically recognize different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut enabling the positioning of objects close to the subject’s face. Findings – The gesture recognition interface incorporated hand detection, tracking, and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects. Originality/value – The integration of three computer vision modules was used to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.},
 bibtype = {article},
 author = {Jiang, Hairong and Wachs, Juan P. and Duerstock, Bradley S.},
 doi = {10.1108/IJICC-09-2013-0042},
 journal = {International Journal of Intelligent Computing and Cybernetics},
 number = {3}
}
