Manipulation and navigation are two key problems in robotics research, with broad applications in industrial and service robots. Regarding manipulation, traditional research mainly focuses on rigid objects (e.g., car parts, cups, and bottles), while less attention is paid to deformable objects (e.g., clothes, sponges, organs, and tissues). Deformable object manipulation is challenging because the configuration space of a deformable object has extremely high dimensionality, whereas a rigid object's configuration has only six dimensions. Though challenging, deformable object manipulation has many applications, such as cloth folding, string insertion, sheet tensioning, and robot-assisted surgery and suturing. We therefore focus on this challenging topic and hope our algorithms can be used in the applications mentioned above. Regarding navigation, we focus on the human-instructed navigation problem, in which the robot must follow instructions given by a human to accomplish a navigation task. This problem is challenging because it requires the robot to interpret the human instructions correctly.
To solve the aforementioned problems, we present learning-based algorithms. For the deformable object manipulation problem, we present controllers based on Gaussian Processes (GPs) and Deep Neural Networks (DNNs) to servo-control the position and shape of deformable objects with unknown deformation properties. In addition, some deformable objects, which we call living objects (e.g., fish and mice), can actively dodge and struggle, writhing or deforming to escape capture; we present a reinforcement learning algorithm to grasp and manipulate them. For the human-instructed navigation problem, we present an algorithm based on Long Short-Term Memory (LSTM) networks to process and understand human instructions, and combine it with traditional motion planning techniques to accomplish the navigation task.
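As a rough illustration of the GP-based approach (not the thesis' actual controller), the sketch below uses GP regression as a data-driven local deformation model: it learns the mapping from end-effector displacements to observed feature-point displacements on the object, without assuming known material properties. All class names, dimensions, and data here are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel between rows of A (M,D) and B (N,D).
    d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
          - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

class GPDeformationModel:
    """Toy GP regression: end-effector displacement -> feature displacement."""

    def __init__(self, noise=1e-4):
        self.noise = noise

    def fit(self, X, Y):
        # X: (N,D) commanded displacements, Y: (N,K) observed feature motions.
        self.X, self.Y = X, Y
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        self.K_inv = np.linalg.inv(K)
        return self

    def predict(self, Xs):
        # Posterior mean of the feature motion for query displacements Xs.
        Ks = rbf_kernel(Xs, self.X)
        return Ks @ self.K_inv @ self.Y

# Hypothetical usage: 1-D end-effector motion, 2-D feature response.
X = np.linspace(-1.0, 1.0, 20)[:, None]
Y = np.column_stack([np.sin(X[:, 0]), 0.5 * X[:, 0]])
model = GPDeformationModel().fit(X, Y)
pred = model.predict(np.array([[0.3]]))
```

A controller would invert this learned model online to choose end-effector motions that drive the object's feature points toward a desired configuration.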
The presented learning-based algorithms are tested in real environments and demonstrate good results: success rates of 70% to 90% for the deformable object manipulation tasks, 90% for the living object grasping task, and 96% for the phrase classification module in the navigation-with-instructions task.
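For concreteness, the phrase classification idea behind the instruction-understanding module can be sketched as a single-layer LSTM run over word embeddings, with a softmax classifier on the final hidden state. The implementation below is a minimal numpy sketch under assumed dimensions and randomly initialized weights, not the thesis' trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H,D), U: (4H,H), b: (4H,)."""
    z = W @ x + U @ h + b          # stacked gate pre-activations
    H = h.shape[0]
    i = sigmoid(z[:H])             # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:])         # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_phrase(embeddings, W, U, b, W_out):
    """Run the LSTM over a phrase's word embeddings, then softmax-classify."""
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in embeddings:
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h
    p = np.exp(logits - logits.max())
    return p / p.sum()

# Hypothetical usage: 8-dim embeddings, 16 hidden units, 3 phrase classes.
rng = np.random.default_rng(0)
D, H, C = 8, 16, 3
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
W_out = rng.standard_normal((C, H))
phrase = rng.standard_normal((5, D))   # five word vectors
probs = classify_phrase(phrase, W, U, b, W_out)
```

In the full system, the predicted phrase class would be handed to a motion planner, which turns the understood instruction into an executable path.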