Research and application of obstacle avoidance method for inspection robot based on road scene understanding
Abstract: To improve the navigation and obstacle avoidance capability of inspection robots, this paper applies deep learning to scene recognition and proposes an autonomous obstacle avoidance method for inspection robots based on road scene understanding (Road Scene Understanding Net, RSUNet). First, a high-precision road scene understanding network is built by combining multi-layer convolutions with the LeakyReLU activation function, organized around residual structures and a pyramid upsampling structure. Second, an adaptive control module is designed to compare the deep feature information of two consecutive frames and, according to the magnitude of the feature difference, automatically control the feature computation of the subsequent network layers, which avoids repeatedly extracting similar features and preserves network efficiency. Finally, the scene understanding results are converted into target information for the different regions in front of the inspection robot, and the drivable road area and the positions of obstacles ahead of the robot are analyzed to guide its navigation and obstacle avoidance. Experimental results show that the proposed method effectively balances the recognition accuracy and computational efficiency of the scene understanding network; on an actual substation inspection robot platform it also shows strong adaptability, accurately and efficiently providing scene information that assists the robot in real-time autonomous obstacle avoidance.
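The final step above, turning pixel-level scene labels into region-level guidance, is described only at a high level. Below is a minimal sketch of one plausible realization, assuming a single-channel segmentation mask that contains a drivable-road class; the three-way region split, the ROAD class ID, and the decide_motion rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical label ID for the drivable-road class; the paper's label map
# is not given in this excerpt.
ROAD = 0

def region_road_ratios(mask: np.ndarray, n_regions: int = 3):
    """Split the near-field (lower) half of a segmentation mask into
    vertical regions and return the drivable-road ratio of each."""
    h, w = mask.shape
    near = mask[h // 2:, :]                      # rows closest to the robot
    bounds = np.linspace(0, w, n_regions + 1, dtype=int)
    return [float(np.mean(near[:, bounds[i]:bounds[i + 1]] == ROAD))
            for i in range(n_regions)]           # [left, center, right]

def decide_motion(ratios, free_thresh: float = 0.8):
    """Toy decision rule: keep heading if the center region is mostly free
    road, otherwise turn toward the freer side, or stop if none is free."""
    left, center, right = ratios
    if center >= free_thresh:
        return "forward"
    if max(left, right) >= free_thresh:
        return "turn_left" if left > right else "turn_right"
    return "stop"
```

Per predicted frame, a caller would evaluate `decide_motion(region_road_ratios(pred_mask))` and forward the result to the motion controller.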
Key words:
- deep learning
- scene understanding
- inspection robot
- obstacle avoidance
Table 1. Scene understanding backbone network structure

Stage      Layer Structure                            Repetitions  Output Size
original   RGB, 3                                     1            448×448
init       Conv 3×3, 13 / Max pooling 2×2, 3          1            224×224
stage1     Conv 1×1, 16 / Conv 3×3, 32 / Residual     1            112×112
stage2     Conv 1×1, 32 / Conv 3×3, 64 / Residual     2            56×56
stage3     Conv 1×1, 64 / Conv 3×3, 128 / Residual    8            28×28
stage4     Conv 1×1, 128 / Conv 3×3, 256 / Residual   8            14×14
stage5     Conv 1×1, 256 / Conv 3×3, 512 / Residual   4            7×7
upsample1  Conv 1×1, 256 / upsample 2×2 / Eltwise     1            14×14
upsample2  Conv 1×1, 128 / upsample 2×2 / Eltwise     1            28×28
upsample3  Conv 1×1, 64 / upsample 2×2 / Eltwise      1            56×56
upsample4  upsample 8×8                               1            448×448
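As a reading aid for Table 1, the sketch below shows PyTorch-style building blocks consistent with its layout: a Conv + BatchNorm + LeakyReLU unit, a residual repetition (1×1 bottleneck followed by a 3×3 convolution), and an upsampling stage that reduces channels with a 1×1 convolution, upsamples 2×, and fuses with the matching encoder feature by element-wise addition (Eltwise). Stage-to-stage downsampling and exact hyperparameters are omitted; this is an assumed reconstruction, not the authors' released code.

```python
import torch
import torch.nn as nn

def conv_lrelu(c_in, c_out, k, stride=1):
    """Conv + BatchNorm + LeakyReLU, the basic unit used throughout Table 1."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, stride=stride, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

class Residual(nn.Module):
    """One Table 1 repetition: 1x1 bottleneck, 3x3 conv, identity skip.
    E.g. stage2 uses channels=64, giving Conv 1x1,32 / Conv 3x3,64."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            conv_lrelu(channels, channels // 2, 1),
            conv_lrelu(channels // 2, channels, 3),
        )

    def forward(self, x):
        return x + self.body(x)

class UpsampleStage(nn.Module):
    """1x1 channel reduction, 2x upsampling, then element-wise (Eltwise)
    fusion with the same-resolution encoder feature."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.reduce = conv_lrelu(c_in, c_out, 1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)

    def forward(self, x, skip):
        return self.up(self.reduce(x)) + skip
```

With these blocks, upsample1 would be `UpsampleStage(512, 256)` applied to the stage5 output and fused with the stage4 feature, matching the 14×14 row of Table 1.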
Table 2. Scene understanding backbone network test results comparison
Table 3. Comparison results of different threshold tests

Threshold   FPS   Test accuracy/(%)
                  G      C      mIoU
S4=0.01     56    95.5   77.9   65.8
S4=0.02     60    95.4   77.6   65.7
S4=0.03     65    95.4   77.5   65.7
S4=0.04     69    95.3   77.2   65.4
S2=0.09     78    95.0   76.7   65.1
S2=0.10     83    95.0   76.5   65.0
S2=0.11     87    94.6   76.0   64.7
S2=0.12     92    94.2   75.7   64.3
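Tables 3 and 4 suggest the adaptive control module gates the network at intermediate stages (the S2 and S4 thresholds plausibly sit after stage2 and stage4): when the deep features of the current frame differ little from those of the previous frame, the remaining layers are skipped and the cached result is reused. The sketch below illustrates this idea; the normalized mean-absolute-difference metric and the caching scheme are assumptions, since the paper's exact criterion is not given in this excerpt.

```python
import torch

class AdaptiveGate:
    """Caches the previous frame's stage feature and downstream result;
    when the feature difference between consecutive frames falls below
    a threshold S, the cached result is reused and the remaining
    network layers are skipped."""
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.prev_feat = None
        self.prev_result = None

    def __call__(self, feat: torch.Tensor, compute_rest):
        if self.prev_feat is not None:
            # Normalized mean absolute difference between consecutive frames.
            diff = (feat - self.prev_feat).abs().mean() / (feat.abs().mean() + 1e-8)
            if diff.item() < self.threshold:
                self.prev_feat = feat
                return self.prev_result, True    # reused, layers skipped
        result = compute_rest(feat)              # run the remaining layers
        self.prev_feat, self.prev_result = feat, result
        return result, False
```

A gate with S2=0.10 would then wrap everything after stage2, e.g. `result, skipped = gate(stage2_feat, run_remaining_layers)`; larger thresholds skip more frames, trading accuracy for FPS as Table 3 shows.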
Table 4. Test results after introducing the adaptive control module
Network                                             FPS   Test accuracy/(%)
                                                          G      C      mIoU
Before module introduction                          53    95.6   78.1   65.9
After introduction (large inter-frame difference)   60    95.1   77.6   65.7
After introduction (small inter-frame difference)   94    94.7   77.2   65.3
Table 5. Cityscapes dataset test results comparison
Table 6. Substation road scene understanding test results