• Peking University core journal (A Guide to the Core Journals of China, 2017 edition)
  • Chinese science and technology core journal (statistical source journal for Chinese scientific and technical papers)
  • Indexed by the JST (Japan Science and Technology Agency) database


Research and application of obstacle avoidance method for inspection robot based on road scene understanding

ZHAO Xiaoyong, CHEN Qinzhu, ZHENG Hongyan, CHEN Chuangang

Citation: ZHAO Xiaoyong, CHEN Qinzhu, ZHENG Hongyan, CHEN Chuangang. Research and application of obstacle avoidance method for inspection robot based on road scene understanding[J]. Microelectronics & Computer, 2022, 39(4): 118-127. doi: 10.19304/J.ISSN1000-7180.2021.1077


doi: 10.19304/J.ISSN1000-7180.2021.1077
Funds:

China Southern Power Grid Co., Ltd., production technology transformation project 073000GS62180079

China Southern Power Grid Co., Ltd., science and technology project 073000KK58190004

Detailed information
    Author information:

    ZHAO Xiaoyong, male, born 1978, bachelor's degree, senior engineer. His research interests include power equipment operation technology and robotics. E-mail: 1079050943@qq.com

    CHEN Qinzhu, male, born 1983, bachelor's degree, senior engineer. His research interests include power equipment operation technology and robotics

    ZHENG Hongyan, male, born 1983, master's degree, intermediate engineer. His research interests include power equipment operation technology and robotics

    CHEN Chuangang, male, born 1988, master's degree, senior engineer. His research interests include power information technology and network security

  • CLC number: TN391.41

Research and application of obstacle avoidance method for inspection robot based on road scene understanding

  • Abstract:

    To improve the navigation and obstacle-avoidance capability of inspection robots, this paper applies deep learning to scene recognition and proposes an autonomous obstacle avoidance method for inspection robots based on road scene understanding (Road Scene Understanding Net, RSUNet). First, the method combines multi-layer convolutions with the LeakyReLU activation function and builds a high-accuracy road scene understanding network from residual structures and a pyramid upsampling structure. Second, an adaptive control module is designed to compare the deep feature information of two consecutive frames and, according to the magnitude of the feature difference, automatically control the feature computation of subsequent network layers; this avoids repeated extraction of similar features and preserves network efficiency. Finally, the scene understanding results are converted into target information for the different regions in front of the robot; by analyzing the traversable road region and the positions of obstacles ahead, the method guides the inspection robot to navigate and avoid obstacles. Experimental results show that the proposed method effectively balances the recognition accuracy and computational efficiency of the scene understanding network. On a real substation inspection robot platform, the method also shows strong adaptability, providing scene information accurately and efficiently to support real-time autonomous obstacle avoidance.
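The frame-to-frame gating idea in the adaptive control module can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the helper names (`frame_feature_difference`, `adaptive_forward`), the normalization, and the caching policy are hypothetical; the paper compares deep features of consecutive frames against thresholds such as the S2/S4 values listed in Table 3.

```python
import numpy as np

def frame_feature_difference(feat_prev, feat_curr):
    """Mean absolute difference between two frames' feature maps,
    normalized by the previous frame's mean activation magnitude
    (normalization scheme is an assumption for illustration)."""
    diff = np.abs(feat_curr - feat_prev).mean()
    scale = np.abs(feat_prev).mean() + 1e-8
    return diff / scale

def adaptive_forward(feat_prev, feat_curr, cached_output, deep_layers, threshold):
    """If consecutive frames' features are similar enough (difference
    below the threshold), reuse the cached output of the deeper layers
    instead of recomputing them; otherwise run the deeper layers."""
    if frame_feature_difference(feat_prev, feat_curr) < threshold:
        return cached_output, True    # deep layers skipped, cache reused
    return deep_layers(feat_curr), False  # deep layers recomputed
```

Skipping the deep layers on near-identical frames is what raises the FPS figures reported for the module in Tables 3 and 4, at a small cost in accuracy.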

     

  • Figure 1.  The structure diagram of RSUNet

    Figure 2.  Scene understanding backbone network module

    Figure 3.  Adaptive control structure

    Figure 4.  The target area in front of the robot

    Figure 5.  Differences in feature information across feature layers

    Figure 6.  Test results on the standard datasets (first two rows: CamVid; last two rows: Cityscapes)

    Figure 7.  Experimental platform

    Figure 8.  Inspection robot test perspective

    Figure 9.  Scene understanding and obstacle avoidance results

    Table 1.  Scene understanding backbone network structure

    Stage     | Layer structure                        | Repetitions | Output size
    original  | RGB, 3                                 | 1           | 448×448
    init      | Conv 3×3, 13; Max pooling 2×2, 3       | 1           | 224×224
    stage1    | Conv 1×1, 16; Conv 3×3, 32; Residual   | 1           | 112×112
    stage2    | Conv 1×1, 32; Conv 3×3, 64; Residual   | 2           | 56×56
    stage3    | Conv 1×1, 64; Conv 3×3, 128; Residual  | 8           | 28×28
    stage4    | Conv 1×1, 128; Conv 3×3, 256; Residual | 8           | 14×14
    stage5    | Conv 1×1, 256; Conv 3×3, 512; Residual | 4           | 7×7
    upsample1 | Conv 1×1, 256; upsample 2×2; Eltwise   | 1           | 14×14
    upsample2 | Conv 1×1, 128; upsample 2×2; Eltwise   | 1           | 28×28
    upsample3 | Conv 1×1, 64; upsample 2×2; Eltwise    | 1           | 56×56
    upsample4 | upsample 8×8                           | 1           | 448×448
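The spatial resolutions in Table 1 follow a simple pattern: init and stages 1-5 each halve the feature map, upsample1-3 each double it, and upsample4 scales by 8 back to the input resolution. A small sketch (the helper name `backbone_sizes` is hypothetical) traces this:

```python
def backbone_sizes(input_size=448):
    """Trace the spatial resolution through the Table 1 backbone:
    init and stage1-5 each halve the map, upsample1-3 each double it,
    and upsample4 scales by 8 back to the input resolution."""
    sizes = {"original": input_size}
    size = input_size
    for name in ["init", "stage1", "stage2", "stage3", "stage4", "stage5"]:
        size //= 2            # stride-2 downsampling stage
        sizes[name] = size
    for name in ["upsample1", "upsample2", "upsample3"]:
        size *= 2             # 2×2 upsampling with Eltwise skip fusion
        sizes[name] = size
    sizes["upsample4"] = size * 8  # final 8×8 upsample to full resolution
    return sizes
```

Running it reproduces the Output size column of Table 1: 448 → 224 → 112 → 56 → 28 → 14 → 7, then back up through 14, 28, 56 to 448.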

    Table 2.  Scene understanding backbone network test results comparison

    Network        | FPS | G/%  | C/%  | mIoU/%
    ShuffleSeg[17] | 118 | 86.1 | 65.3 | 56.3
    BiSeNet[15]    | 45  | 96.2 | 78.5 | 66.2
    RSUNet         | 53  | 95.6 | 78.1 | 65.9

    Table 3.  Comparison results of different threshold tests

    Threshold | FPS | G/%  | C/%  | mIoU/%
    S4=0.01   | 56  | 95.5 | 77.9 | 65.8
    S4=0.02   | 60  | 95.4 | 77.6 | 65.7
    S4=0.03   | 65  | 95.4 | 77.5 | 65.7
    S4=0.04   | 69  | 95.3 | 77.2 | 65.4
    S2=0.09   | 78  | 95.0 | 76.7 | 65.1
    S2=0.1    | 83  | 95.0 | 76.5 | 65.0
    S2=0.11   | 87  | 94.6 | 76.0 | 64.7
    S2=0.12   | 92  | 94.2 | 75.7 | 64.3

    Table 4.  Test results after introduction of the adaptive control module

    Network                                       | FPS | G/%  | C/%  | mIoU/%
    Without module                                | 53  | 95.6 | 78.1 | 65.9
    With module (large inter-frame difference)    | 60  | 95.1 | 77.6 | 65.7
    With module (small inter-frame difference)    | 94  | 94.7 | 77.2 | 65.3

    Table 5.  Cityscapes dataset test results comparison

    Network        | FPS | G/%  | C/%  | mIoU/%
    ShuffleSeg[17] | 121 | 82.1 | 60.8 | 50.7
    BiSeNet[15]    | 48  | 92.7 | 71.1 | 61.2
    RSUNet         | 86  | 91.4 | 70.3 | 60.5

    Table 6.  Substation road scene understanding test results

    Network        | FPS | G/%  | C/%  | mIoU/%
    ShuffleSeg[17] | 21  | 88.1 | 78.3 | 67.2
    BiSeNet[15]    | 8   | 96.2 | 84.4 | 74.3
    RSUNet         | 20  | 95.6 | 84.1 | 74.0

    Table 7.  Comparison of robot obstacle avoidance methods

    Method                           | P/% (Normal) | P/% (Turn left) | P/% (Turn right) | P/% (Stop) | AP/% | FPS
    Classification[8]                | 98.1         | 91.8            | 92.1             | 87.6       | 92.4 | 12
    Classification + segmentation[9] | 98.6         | 94.1            | 94.0             | 89.7       | 94.1 | 7
    Fuzzy control[20]                | 98.0         | 90.3            | 90.1             | 85.5       | 91.0 | 23
    RSUNet                           | 98.4         | 93.2            | 93.4             | 89.1       | 93.5 | 19
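The last step of the method maps the segmented traversable-road and obstacle regions in front of the robot to one of the four commands compared in Table 7. A minimal sketch of this decision, assuming a left/center/right region split and a free-area threshold (the function name, region layout, and threshold are illustrative; the paper derives its regions from the scene-understanding mask as shown in Figure 4):

```python
def avoidance_command(free_left, free_center, free_right, min_free_ratio=0.6):
    """Pick a motion command from the traversable-road ratio of three
    regions ahead of the robot (0.0 = fully blocked, 1.0 = fully free).

    Region layout and the 0.6 threshold are assumptions for illustration."""
    if free_center >= min_free_ratio:
        return "normal"       # path straight ahead is clear
    if free_left >= min_free_ratio and free_left >= free_right:
        return "turn_left"    # left region is the best free option
    if free_right >= min_free_ratio:
        return "turn_right"   # right region is free enough
    return "stop"             # no region is traversable
```

A rule of this shape only needs the per-region free ratios from the segmentation mask, which is why the segmentation-driven methods in Table 7 can run at real-time rates.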
  • [1] WANG Z Q, HU X G, LI X X, et al. Overview of global path planning algorithms for mobile robots[J]. Computer Science, 2021, 48(10): 19-29. DOI: 10.11896/jsjkx.200700114.
    [2] YANG H J. The applicative research of Otsu algorithm in navigation-line extraction by ROS-system-based substation robot[J]. Microelectronics & Computer, 2018, 35(7): 122-126. DOI: 10.19304/j.cnki.issn1000-7180.2018.07.026.
    [3] YUAN M H, WANG S, LIN X R. Research on obstacle avoidance technology of sweeping robot based on ultrasonic sensor[J]. South Agricultural Machinery, 2021, 52(10): 100-101. DOI: 10.3969/j.issn.1672-3872.2021.10.039.
    [4] YUAN L F, KE D, XU C, et al. Research on autonomous obstacle avoidance system of electric patrol UAV based on orthogonal lidar[J]. Techniques of Automation and Applications, 2021, 40(7): 18-22. DOI: 10.3969/j.issn.1003-7241.2021.07.005.
    [5] LIANG F, XIONG L. UWB indoor positioning of mobile robot based on GA-BP neural network[J]. Microelectronics & Computer, 2019, 36(4): 33-37. http://www.journalmc.com/article/id/bbb54180-df0d-4efd-8eb2-01ea4b7a8882
    [6] LI K L, WEI W, GAO Y, et al. Application of robot visual localization and dense mapping based on LDSO[J]. Microelectronics & Computer, 2020, 37(2): 51-56. DOI: 10.19304/j.cnki.issn1000-7180.2020.02.009.
    [7] JIN Y L, ZHU R T. Research on robot end-to-end visual obstacle avoidance method[J]. Industrial Control Computer, 2019, 32(9): 77-79. DOI: 10.3969/j.issn.1001-182X.2019.09.032.
    [8] LIU C L. Research on the obstacle avoidance of mobile robots based on deep learning[D]. Changchun: Changchun University of Science and Technology, 2018.
    [9] XIAN K Y, PENG Z Y, GU X Y, et al. Research and application of obstacle avoidance method for substation inspection robot[J]. Science Technology and Engineering, 2021, 21(5): 1957-1962. DOI: 10.3969/j.issn.1671-1815.2021.05.041.
    [10] SINGHANI A. Real-time freespace segmentation on autonomous robots for detection of obstacles and drop-offs[J]. arXiv: 1902.00842, 2019.
    [11] LI J Y, SONG B W, LI J H. Obstacle avoidance prediction system of hospital distribution robot based on deep learning[C]//2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). Chongqing, China: IEEE, 2021: 939-943. DOI: 10.1109/IAEAC50856.2021.9390593.
    [12] SHELHAMER E, LONG J, DARRELL T. Fully convolutional networks for semantic segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(4): 640-651. DOI: 10.1109/TPAMI.2016.2572683.
    [13] SHEN F L, GAN R, YAN S C, et al. Semantic segmentation via structured patch prediction, context CRF and guidance CRF[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017: 5178-5186. DOI: 10.1109/CVPR.2017.550.
    [14] HE J J, DENG Z Y, ZHOU L, et al. Adaptive pyramid context network for semantic segmentation[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA: IEEE, 2019: 7519-7528. DOI: 10.1109/CVPR.2019.00770.
    [15] YU C Q, GAO C X, WANG J B, et al. BiSeNet V2: bilateral network with guided aggregation for real-time semantic segmentation[J]. International Journal of Computer Vision, 2021, 129(11): 3051-3068. DOI: 10.1007/s11263-021-01515-2.
    [16] PASZKE A, CHAURASIA A, KIM S, et al. ENet: a deep neural network architecture for real-time semantic segmentation[J]. arXiv: 1606.02147, 2016.
    [17] GAMAL M, SIAM M, ABDEL-RAZEK M. ShuffleSeg: real-time semantic segmentation network[J]. arXiv: 1803.03816, 2018.
    [18] ZHUANG J T, YANG J L. ShelfNet for real-time semantic segmentation[J]. arXiv: 1811.11254, 2018.
    [19] SHELHAMER E, RAKELLY K, HOFFMAN J, et al. Clockwork convnets for video semantic segmentation[M]//HUA G, JÉGOU H. European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016: 852-868. DOI: 10.1007/978-3-319-49409-8_69.
    [20] LI J, QIN W. Design of mobile robot obstacle-avoidance control system based on vision system[J]. Machine Tool & Hydraulics, 2021, 49(15): 24-28. DOI: 10.3969/j.issn.1001-3881.2021.15.005.
Publication history
  • Received: 2021-09-07
  • Revised: 2021-09-27
  • Published online: 2022-05-12
