Reinforcement learning (RL) is a trial-and-error framework that enables intelligent systems to learn optimal behaviour from feedback provided by the environment. In recent years, RL has been successfully applied to the control of various embodied systems. However, training and deploying RL methods in the real world requires attention to limitations imposed by the robot and its surroundings. To address these limitations, safe RL algorithms define safety constraints based on the physics of the system and modify the training regime so that the constraints are satisfied during both training and inference. While safe RL offers a promising direction towards real-world deployability, challenges such as sample efficiency and hyperparameter tuning hinder its applicability in real-world scenarios.
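As background for the methods summarized next, the conventional constrained formulation of safe RL can be sketched as follows; the reward r, cost c, cost threshold d, and Lagrange multiplier \(\lambda\) follow standard safe RL notation and are given here for illustration rather than taken verbatim from the thesis:
\begin{equation}
  \max_{\pi}\; \mathbb{E}_{\tau \sim \pi}\!\Big[\textstyle\sum_{t}\gamma^{t}\, r(s_t,a_t)\Big]
  \quad \text{s.t.} \quad
  \mathbb{E}_{\tau \sim \pi}\!\Big[\textstyle\sum_{t}\gamma^{t}\, c(s_t,a_t)\Big] \le d,
\end{equation}
which the Lagrangian relaxation turns into the unconstrained saddle-point problem
\begin{equation}
  \max_{\pi}\,\min_{\lambda \ge 0}\;
  \mathbb{E}_{\tau \sim \pi}\!\Big[\textstyle\sum_{t}\gamma^{t}\big(r(s_t,a_t) - \lambda\, c(s_t,a_t)\big)\Big] + \lambda d .
\end{equation}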
To address these challenges, this thesis proposes several approaches. First, a metagradient-based training pipeline called Meta Soft Actor-Critic Lagrangian (Meta SAC-Lag) is proposed, which optimizes the safety-related hyperparameters within the conventional Lagrangian framework. The proposed method is evaluated in several safety-critical simulated environments. In addition, a real-world task is designed, and the algorithm is successfully deployed on a Kinova Gen3 robotic arm, demonstrating its real-world deployability with minimal hyperparameter tuning. Furthermore, a multi-objective policy optimization framework is proposed that specifies the trade-off between optimality and safety directly and optimizes both objectives simultaneously. Its competitive performance against state-of-the-art safe RL methods, achieved with fewer hyperparameters, showcases its potential as a powerful alternative framework for safe RL.
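As a minimal sketch of the Lagrangian machinery referred to above, the multiplier \(\lambda\) is conventionally updated by projected dual ascent on the constraint violation; the step size \(\eta_{\lambda}\) and threshold d shown here are illustrative of the safety-related hyperparameters such a pipeline must set, not necessarily the exact quantities tuned by Meta SAC-Lag:
\begin{equation}
  \lambda \leftarrow \max\!\big(0,\; \lambda + \eta_{\lambda}\,\big(J_{c}(\pi) - d\big)\big),
  \qquad
  J_{c}(\pi) = \mathbb{E}_{\tau \sim \pi}\!\Big[\textstyle\sum_{t}\gamma^{t}\, c(s_t,a_t)\Big].
\end{equation}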