Abstract: Robotic precision peg-in-hole assembly in unstructured environments is a non-cooperative game problem. Positional uncertainty of the peg poses challenges for the subsequent hole search and insertion, so the robot must adjust the peg to eliminate the peg-in-hole posture deviation. In this article, the peg adjustment is divided into two stages: rough adjustment and fine adjustment. First, in the rough adjustment stage, force-angle samples are collected while the peg is not in contact with the hole and are used to train an MLP model, which then guides the robot arm in rough posture compensation. Next, in the fine adjustment stage, an RLVAC model is formulated to estimate the peg-in-hole contact state and adjust the position of the peg precisely. A peg-hole contact fuzzy inference model is established to estimate the contact state, and based on this state a reinforcement learning algorithm with a fuzzy reward mechanism searches for the optimal admittance control parameters, realizing a tight fit between the peg and hole surfaces. Finally, comprehensive experiments are conducted on pegs with unknown postures, and the proposed method is compared with conventional methods in terms of adjustment accuracy, running time, and success rate.
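To make the fine-adjustment idea concrete, the sketch below shows a minimal one-degree-of-freedom admittance control update of the standard form M·ẍ + B·ẋ + K·x = F_ext. This is an illustrative assumption based on conventional admittance control, not the paper's RLVAC implementation; the gains M, B, K (which an RL agent would vary online from the estimated contact state) and the function names are hypothetical.

```python
import numpy as np

def admittance_step(f_ext, x, x_dot, M=1.0, B=20.0, K=100.0, dt=0.01):
    """One semi-implicit Euler step of M*x_dd + B*x_dot + K*x = f_ext.

    f_ext : measured external contact force along this axis
    x, x_dot : current position offset and velocity of the compliant frame
    M, B, K : virtual mass, damping, stiffness (in RLVAC these would be
              tuned online by the reinforcement learning agent; here fixed)
    """
    x_dd = (f_ext - B * x_dot - K * x) / M
    x_dot_new = x_dot + x_dd * dt
    x_new = x + x_dot_new * dt
    return x_new, x_dot_new

# Simulate a constant 5 N contact force: the compliant frame should settle
# at the admittance equilibrium f_ext / K = 0.05.
x, xd = 0.0, 0.0
for _ in range(500):
    x, xd = admittance_step(f_ext=5.0, x=x, x_dot=xd)
```

Varying (M, B, K) per step, as the abstract's reinforcement learning scheme does, trades off compliance (small K admits larger corrections under contact force) against positioning stiffness during free motion.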