Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images. Ran, L., Zhang, Y., Zhang, Q., & Yang, T. Sensors, 17(6):1341, June, 2017.
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce a spherical camera for scene capturing, which provides 360° fisheye panoramas as training samples and enables the generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be collected efficiently. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
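The "navigation via classification" idea in the abstract can be illustrated with a small CNN that maps a single spherical panorama to one of K discretized heading-direction classes and reports a confidence per class. The sketch below is a minimal, assumption-laden illustration in PyTorch: the input resolution, layer sizes, and the choice of 8 heading bins are hypothetical and are not the network or dataset configuration described in the paper.

```python
# Minimal sketch of heading prediction as classification (not the paper's architecture).
import torch
import torch.nn as nn

class HeadingClassifier(nn.Module):
    """Maps one spherical panorama to logits over K discretized heading directions."""
    def __init__(self, num_headings: int = 8):  # 8 heading bins is an illustrative assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),   # global pooling so input size is flexible
        )
        self.classifier = nn.Linear(128, num_headings)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of spherical images, shape (N, 3, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # logits over heading-direction classes

# Usage: pick the heading bin with the highest softmax confidence.
model = HeadingClassifier(num_headings=8)
image = torch.randn(1, 3, 224, 224)          # placeholder for one uncalibrated panorama
probs = torch.softmax(model(image), dim=1)   # confidence over heading directions
heading = probs.argmax(dim=1)
```

Framing the problem this way replaces continuous orientation regression with a cross-entropy classification objective, which is the simplification the abstract credits for the framework's low computational cost.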
@article{ran_convolutional_2017,
	title = {Convolutional {Neural} {Network}-{Based} {Robot} {Navigation} {Using} {Uncalibrated} {Spherical} {Images}},
	volume = {17},
	issn = {1424-8220},
	url = {http://www.mdpi.com/1424-8220/17/6/1341},
	doi = {10.3390/s17061341},
	abstract = {Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce a spherical camera for scene capturing, which provides 360° fisheye panoramas as training samples and enables the generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be collected efficiently. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.},
	language = {en},
	number = {6},
	urldate = {2018-09-25},
	journal = {Sensors},
	author = {Ran, Lingyan and Zhang, Yanning and Zhang, Qilin and Yang, Tao},
	month = jun,
	year = {2017},
	pages = {1341},
}
