Citation: HAN H, SUN X Q, CHEN Y J, et al. Research on traffic sign detection based on SA-YOLOv5[J]. Microelectronics & Computer, 2023, 40(2): 94-100. doi: 10.19304/J.ISSN1000-7180.2022.0369

Research on traffic sign detection based on SA-YOLOv5

Abstract: In object detection for autonomous driving, small targets such as distant traffic signs are prone to missed and false detections, which impairs the on-board system's judgment of road conditions; detection accuracy is therefore a strict requirement. To address the low detection accuracy and missed detections of small traffic signs, a traffic sign detection method called Shuffle-Attention-YOLOv5 (SA-YOLOv5) is proposed, which combines the Shuffle Attention (SA) module and the Convolutional Block Attention Module (CBAM) with the YOLOv5 detector. The SA module is integrated into the YOLOv5 Backbone to form a network that extracts pixel-level feature information and accurately captures all relevant input features, while CBAM is integrated into the Neck to make better use of the features extracted by the Backbone, locating attention regions for small objects during up-sampling and down-sampling and thereby strengthening the feature extraction and recognition of distant, small traffic signs. Training and model comparison experiments were conducted on the CSUST China Traffic Sign Detection Benchmark (CCTSDB), and the resulting model was deployed on an embedded development board to verify its real-world detection performance. After screening, the mAP on small targets reaches 98.10%, an improvement of 1.70% over other attention-based YOLOv5 variants, which verifies that SA-YOLOv5 can effectively focus on the regions of interest in an image and performs well in small-target scenes such as traffic sign detection.
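A minimal PyTorch sketch of a Shuffle Attention (SA) block of the kind integrated into the Backbone is given below, following the published SA-Net design; the group count, the parameter initialization, and the example feature-map size are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a Shuffle Attention (SA) block as it might be attached after a
# Backbone stage of YOLOv5. Group count and shapes are assumptions.
import torch
import torch.nn as nn


class ShuffleAttention(nn.Module):
    """Per-group channel + spatial attention, followed by a channel shuffle."""

    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.groups = groups
        c = channels // (2 * groups)  # channels per branch within each group
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # learnable scale/shift for the channel-attention branch
        self.cweight = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.cbias = nn.Parameter(torch.ones(1, c, 1, 1))
        # learnable scale/shift for the spatial-attention branch
        self.sweight = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.sbias = nn.Parameter(torch.ones(1, c, 1, 1))
        self.gn = nn.GroupNorm(c, c)
        self.sigmoid = nn.Sigmoid()

    @staticmethod
    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        b, c, h, w = x.shape
        return (x.view(b, groups, c // groups, h, w)
                 .transpose(1, 2)
                 .reshape(b, c, h, w))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # split the feature map into groups along the channel axis
        x = x.reshape(b * self.groups, c // self.groups, h, w)
        x_c, x_s = x.chunk(2, dim=1)  # channel branch / spatial branch

        # channel attention: global average pooling + scale/shift + sigmoid gate
        x_c = x_c * self.sigmoid(self.cweight * self.avg_pool(x_c) + self.cbias)

        # spatial attention: GroupNorm + scale/shift + sigmoid gate
        x_s = x_s * self.sigmoid(self.sweight * self.gn(x_s) + self.sbias)

        out = torch.cat([x_c, x_s], dim=1).reshape(b, c, h, w)
        return self.channel_shuffle(out, 2)  # mix information across groups


if __name__ == "__main__":
    # e.g. attach after a C3 stage of the Backbone (feature size chosen for illustration)
    feat = torch.randn(1, 256, 40, 40)
    print(ShuffleAttention(256)(feat).shape)  # torch.Size([1, 256, 40, 40])
```

Because each group is split into a channel branch and a spatial branch with only a few learnable scale/shift parameters, such a block adds almost no parameters, which is what makes it practical to attach throughout the Backbone.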

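The CBAM block placed in the Neck can be sketched in the same spirit; the reduction ratio of 16 and the 7×7 spatial kernel are the commonly used defaults and are assumptions here, not values taken from the paper.

```python
# Sketch of CBAM (channel attention followed by spatial attention), of the kind
# inserted around the Neck's up-sampling and down-sampling paths.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # shared MLP applied to both the average-pooled and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """Channel attention followed by spatial attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.ca(x)          # reweight feature channels
        return x * self.sa(x)       # highlight spatial regions of interest
```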

