Research on grasp detection method based on angle constraint and Gaussian quality map
Abstract: To address the unstable selection of the optimal grasping point and inaccurate grasping angles in dynamic grasping environments, a grasp detection method based on angle constraints and a Gaussian quality map is proposed. First, the grasping angles are divided into several classes by value, and the range of angle values within each class is constrained, which resolves the pixel-level annotation loss caused by dense annotation; a morphological opening operation then filters out the fragments produced in the angle map where multiple annotations overlap, yielding a grasping angle map with stronger annotation consistency. Second, a Gaussian function is used to reshape the grasping quality map, emphasizing the center of each graspable region and stabilizing the selection of the optimal grasping point. Finally, grasping-point and grasping-direction attention mechanisms are introduced on top of a fully convolutional network, yielding the Attentive Generative Grasping Detection Network (AGGDN). Experimental results on the Jacquard simulation dataset show that the proposed method reaches a detection accuracy of 94.4% with a single-image detection time of 11 ms, effectively improving grasp detection on complex objects while retaining good real-time performance. Grasping experiments on irregularly shaped targets placed in different poses in a real environment show a grasp success rate of 88.8%; the method generalizes well to novel targets never seen in the training set and can be applied to robotic grasping tasks.
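To make the first step concrete, below is a minimal sketch of the angle-classification and opening-based cleanup described in the abstract, assuming K equal-width angle classes over [-pi/2, pi/2), a per-pixel angle map in radians, and OpenCV for the morphological opening. The class count, kernel size, and function names are illustrative choices, not the paper's exact settings.

import numpy as np
import cv2

K = 6                                 # number of angle classes (assumed value)
BIN_WIDTH = np.pi / K                 # each class is constrained to a pi/K range

def angle_to_class(angle_map):
    """Quantize per-pixel grasp angles in [-pi/2, pi/2) into classes 0..K-1,
    constraining the range of angle values any one class represents."""
    shifted = angle_map + np.pi / 2                   # shift range to [0, pi)
    return np.clip((shifted // BIN_WIDTH).astype(np.int32), 0, K - 1)

def clean_class_map(class_map, valid_mask, kernel_size=3):
    """Apply a morphological opening to each class's binary mask, removing the
    small fragments left where several grasp annotations overlap."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    cleaned = np.full_like(class_map, -1)             # -1 marks non-graspable pixels
    for k in range(K):
        mask = ((class_map == k) & valid_mask).astype(np.uint8)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        cleaned[mask.astype(bool)] = k
    return cleaned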
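The Gaussian reshaping of the quality map can be sketched as follows. The specific choices here (a standard deviation proportional to each annotated rectangle's side lengths, overlapping annotations combined by a pixel-wise maximum) are one plausible reading of the abstract rather than the paper's exact formulation.

import numpy as np

def gaussian_quality_map(shape, rects, sigma_scale=0.25):
    """Build a quality map where each annotated grasp contributes a 2D Gaussian
    peaking at its rectangle center (cx, cy), so the global argmax lands on a
    region center instead of an arbitrary pixel of a flat binary mask."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    quality = np.zeros(shape, dtype=np.float32)
    for cx, cy, w, h in rects:                       # center and size per grasp
        sx, sy = sigma_scale * w, sigma_scale * h    # spread tied to rectangle size
        g = np.exp(-((xs - cx) ** 2 / (2 * sx ** 2) + (ys - cy) ** 2 / (2 * sy ** 2)))
        quality = np.maximum(quality, g.astype(np.float32))
    return quality

# The most stable grasp point is then simply the maximum of the map:
# y, x = np.unravel_index(quality.argmax(), quality.shape)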
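On the network side, the abstract names grasping-point and grasping-direction attention mechanisms but gives no architecture. The sketch below realizes both as sigmoid-gated 1x1-convolution spatial masks over a shared fully convolutional feature map, a common pattern in orientation-attentive grasp synthesis; it should not be read as the authors' exact AGGDN design, and the module and parameter names are hypothetical.

import torch
import torch.nn as nn

class AttentiveGraspHead(nn.Module):
    """Prediction head gated by two spatial attention masks: one focused on
    where to grasp (point), one on which way to grasp (direction)."""
    def __init__(self, channels, num_angle_classes=6):
        super().__init__()
        self.point_attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.dir_attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.quality = nn.Conv2d(channels, 1, 1)                    # quality map
        self.angle_cls = nn.Conv2d(channels, num_angle_classes, 1)  # angle classes
        self.width = nn.Conv2d(channels, 1, 1)                      # gripper width

    def forward(self, feat):
        q = self.quality(self.point_attn(feat) * feat)   # attended grasp-point map
        a = self.angle_cls(self.dir_attn(feat) * feat)   # attended angle-class map
        w = self.width(feat)
        return q, a, w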
Table 1. Quantitative comparison results on the Jacquard dataset
Table 2. Impact of different settings on detection accuracy
Model setting                          Accuracy/%                      Time/ms
                                       Threshold 0.25  Threshold 0.30
Baseline                               92.3            89.7            9
Baseline + Gaussian                    92.7            90.6            9
Baseline + Gaussian + classification   94.4            93.1            11