Efficient trajectory adaptation is crucial for improving overall robot performance.
Reinforcement Learning (RL), despite its promise in robot motion planning, suffers from long training times and limited generalizability. Learning from Demonstrations (LfD) offers an alternative by transferring human-like skills to robots. However, human demonstrations may not align well with robot dynamics due to biomechanical differences between humans and robots.
To address these challenges, this thesis proposes novel frameworks that combine RL, LfD, and Dynamic Movement Primitives (DMP). DMP overcomes several limitations of LfD but requires tuning the parameters of its second-order dynamics. This work introduces a systematic approach to extracting dynamic features from human demonstrations, enabling automatic parameter tuning within the DMP framework. The extracted features also facilitate skill transfer to RL agents, leading to more efficient trajectory exploration and significantly improved robot compliance.
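For context, a minimal sketch of the standard DMP formulation (following Ijspeert et al.; the thesis's exact notation may differ): a demonstrated trajectory is encoded by a second-order transformation system driven by a first-order canonical system,

\[
\tau \ddot{y} = \alpha_y \bigl( \beta_y (g - y) - \dot{y} \bigr) + f(x), \qquad \tau \dot{x} = -\alpha_x x,
\]

where $y$ is the trajectory state, $g$ the goal, $x$ the phase variable, and $f(x)$ a learned forcing term shaped by the demonstration. The stiffness and damping gains $\alpha_y$ and $\beta_y$ are the second-order parameters that are conventionally set by hand (e.g., $\beta_y = \alpha_y / 4$ for critical damping); these are the parameters that the proposed feature-extraction approach tunes automatically from the demonstration.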
Additionally, the thesis presents a framework that integrates Implicit Behavior Cloning (IBC) with DMP, using human demonstrations to accelerate RL training. The framework achieves faster training, higher scores, and greater stability in both simulation and real-robot experiments. Comparative studies highlight the advantages of the proposed method over conventional RL agents.
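For context, IBC (Florence et al.) replaces an explicit policy with an energy-based model $E_\theta(\mathbf{o}, \mathbf{a})$ trained on demonstrated observation-action pairs; at inference the action is recovered by minimization,

\[
\hat{\mathbf{a}} = \arg\min_{\mathbf{a}} E_\theta(\mathbf{o}, \mathbf{a}),
\]

so the demonstration-derived policy can guide exploration rather than being learned from scratch. This is a sketch of the standard IBC formulation only; the specific coupling of IBC with DMP is the contribution of this thesis.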
The findings of this thesis have significant implications for enhancing the performance and adaptability of robots in practical applications. By incorporating human expertise from demonstrations into conventional RL methods, this research offers novel approaches to improving efficiency and generalizability in robot motion planning.