Abstract: Service robots face unstructured scenes in grasping tasks. Because objects are irregularly placed and have irregular shapes, it is difficult to accurately compute the robot's grasp pose. To address this problem, a robot optimal grasp pose detection algorithm with a dual-network architecture is proposed. First, the YOLO V3 object detection model is improved, which increases the model's detection speed and its recognition performance on small objects. Second, a convolutional neural network is used to design a multi-target grasp detection network, which generates candidate grasp regions for the robot in the image. To compute the robot's optimal grasp pose, an IoU region evaluation algorithm is established, which screens out the optimal grasp region of the target object. Experimental results show that the object detection accuracy of the improved YOLO V3 reaches 91%, the multi-target grasp detection accuracy reaches 86%, and the detection accuracy of the robot's optimal grasp pose exceeds 90%. In summary, the proposed method can efficiently and accurately compute the optimal grasp region of the target object and meets the requirements of the grasping task.
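The IoU-based screening step mentioned in the abstract could look roughly like the following minimal sketch, assuming the grasp detection network outputs axis-aligned candidate rectangles and the object detector outputs one bounding box per object; the box format, function names, and scoring rule (pick the candidate with the highest overlap) are illustrative assumptions, not the paper's actual implementation.

# Illustrative sketch only: select a grasp candidate by its IoU with the
# detected object box. The (x1, y1, x2, y2) box format and the "highest IoU
# wins" rule are assumptions, not taken from the paper.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned rectangles."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def best_grasp(object_box: Box, grasp_candidates: List[Box]) -> Box:
    """Return the candidate grasp region that best overlaps the object box."""
    return max(grasp_candidates, key=lambda g: iou(object_box, g))

# Example: one detected object and three candidate grasp regions.
obj = (50, 50, 150, 150)
candidates = [(40, 60, 120, 140), (140, 10, 200, 60), (70, 70, 160, 160)]
print(best_grasp(obj, candidates))  # prints the candidate with the highest IoU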