A robotic grasping method based on three-dimensional detection network
DOI:
Author:
Affiliation:
Author biography:
Corresponding author:
CLC number: TP242; TH86
Fund project: Supported by the National Key R&D Program of China, Ministry of Science and Technology (2018YFC0309104)



Abstract:

In grasping tasks, a robot faces objects of different shapes and sizes, and objects scattered in the scene may have various poses and positions, which makes estimating object poses for robotic grasping challenging. To address this, a robotic grasping method based on three-dimensional object detection is designed, which overcomes the drawback of 2D-image-based recognition methods that are sensitive to the camera viewpoint in robotic grasping tasks. First, a convolutional neural network is designed to detect objects in RGB images and to regress the eight corner points of each object's three-dimensional bounding box together with its center point. Second, a strategy is proposed to compute the optimal pose for the robot to grasp an object. Finally, the robot is controlled to perform the grasp. In real scenes, the designed three-dimensional detection network achieves a detection accuracy of 88%, and the grasping success rate reaches 94%. In summary, the designed system can effectively find a suitable grasping pose, improve the grasping success rate, and meet more demanding grasping requirements.
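The abstract does not spell out how the grasp pose is computed from the network output. Below is a minimal, hypothetical Python sketch of one way the second step could look: given the eight regressed corner points of an object's 3D bounding box, recover the box frame and derive a top-down grasp pose. The function name, the PCA-based axis recovery, and the top-down/thinnest-axis heuristic are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (not the authors' implementation): derive a heuristic grasp
# pose from the eight predicted corner points of a 3D bounding box.
# Assumes `corners` is an (8, 3) array in the robot/world frame; the corner
# ordering does not matter because the box frame is recovered via PCA.
import numpy as np


def grasp_pose_from_corners(corners: np.ndarray):
    """Return (position, rotation) of a heuristic top-down grasp.

    position : (3,) grasp point, taken as the box center.
    rotation : (3, 3) gripper orientation; column 2 is the approach axis,
               column 0 is the closing (finger) axis.
    """
    corners = np.asarray(corners, dtype=float)
    center = corners.mean(axis=0)

    # PCA of the corners recovers the box axes; the eigenvalues grow with
    # the squared half-extent of the box along each axis.
    cov = np.cov((corners - center).T)
    extents_sq, axes = np.linalg.eigh(cov)          # ascending eigenvalues

    # Approach from above: pick the box axis most aligned with world z,
    # then orient it downward.
    up = np.array([0.0, 0.0, 1.0])
    approach_idx = int(np.argmax(np.abs(axes.T @ up)))
    approach = -np.sign(axes[:, approach_idx] @ up) * axes[:, approach_idx]

    # Close the gripper across the thinner of the two remaining dimensions.
    remaining = [i for i in range(3) if i != approach_idx]
    close_idx = min(remaining, key=lambda i: extents_sq[i])
    close = axes[:, close_idx]

    # Right-handed gripper frame: x = closing axis, z = approach axis.
    y = np.cross(approach, close)
    rotation = np.column_stack([close, y, approach])
    return center, rotation


if __name__ == "__main__":
    # Unit cube centered at the origin as a stand-in for a detected object.
    cube = np.array([[x, y, z] for x in (-0.5, 0.5)
                     for y in (-0.5, 0.5)
                     for z in (-0.5, 0.5)])
    pos, rot = grasp_pose_from_corners(cube)
    print("grasp position:", pos)
    print("approach axis :", rot[:, 2])
```

The 88% detection accuracy and 94% grasp success rate reported above refer to the paper's own network and grasp strategy; the sketch only illustrates the geometric step from predicted corner points to a gripper pose.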

Cite this article:

葛俊彦, 史金龙, 周志强, 王直, 钱强. A robotic grasping method based on three-dimensional detection network [J]. 仪器仪表学报 (Chinese Journal of Scientific Instrument), 2021, (8): 146-153.

History
  • Published online: 2023-06-28