Homayoun Honari

  • BSc (Sharif University of Technology, 2022)

Notice of the Final Oral Examination for the Degree of Master of Applied Science

Topic

Meta-Optimization in Safe Reinforcement Learning: Enhancing Safety at Training and Deployment with Fewer Hyperparameters

Department of Mechanical Engineering

Date & location

  • Tuesday, September 10, 2024

  • 12:00 P.M.

  • Virtual Defence

Reviewers

Supervisory Committee

  • Dr. Homayoun Najjaran, Department of Mechanical Engineering, University of Victoria (Supervisor)

  • Dr. Yang Shi, Department of Mechanical Engineering, University of Victoria (Member)

External Examiner

  • Dr. T. Aaron Gulliver, Department of Electrical and Computer Engineering, University of Victoria 

Chair of Oral Examination

  • Dr. Pauline van den Driessche, Department of Mathematics and Statistics, University of Victoria

Abstract

Reinforcement learning (RL) is a trial-and-error framework that enables intelligent systems to learn optimal behaviour from environmental feedback. In recent years, RL has been applied successfully to the control of various embodied systems. However, training and deploying RL methods in the real world requires attention to limitations imposed by the robot and its surroundings. To address these limitations, safe RL algorithms define safety constraints based on the physics of the system and modify the training regime of RL methods to satisfy those constraints during both training and inference. While safe RL offers a promising path toward real-world deployability, challenges such as sample efficiency and hyperparameter tuning hinder its applicability in real-world scenarios. To address these challenges, this thesis proposes several approaches. First, a meta-gradient-based training pipeline called Meta Soft Actor-Critic Lagrangian (Meta SAC-Lag) is proposed, which optimizes the aforementioned safety-related hyperparameters within the conventional Lagrangian framework. To study its performance, the proposed method is evaluated in several safety-critical simulated environments. In addition, a real-world task is designed, and the algorithm is successfully deployed on a Kinova Gen3 robotic arm to showcase its real-world deployability with minimal hyperparameter tuning requirements. Furthermore, a multi-objective policy optimization framework is proposed that specifies the trade-off between optimality and safety directly and optimizes both simultaneously. The competitive performance of the proposed algorithm against state-of-the-art safe RL methods, achieved with fewer hyperparameters, showcases its potential as a powerful alternative framework for safe RL.
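For context on the Lagrangian framework mentioned in the abstract, the sketch below shows the standard dual-ascent update for the safety Lagrange multiplier used in generic Lagrangian safe RL. It is a minimal illustration under stated assumptions, not the thesis's Meta SAC-Lag pipeline: the names cost_limit, lambda_lr, and dual_update are illustrative, and Meta SAC-Lag's contribution, per the abstract, is to tune such safety-related hyperparameters via meta-gradients rather than fixing them by hand as done here.

import numpy as np

# Minimal sketch of the dual (Lagrange multiplier) update in Lagrangian
# safe RL, assuming a per-episode cost signal and a fixed safety budget.
# cost_limit and lambda_lr are illustrative assumptions, not values from
# the thesis.

cost_limit = 25.0   # safety budget d: expected episode cost should stay below this
lambda_lr = 5e-3    # dual step size (a hand-set hyperparameter in plain SAC-Lag)
log_lambda = 0.0    # optimize log(lambda) so the multiplier stays non-negative

def dual_update(episode_costs):
    """One gradient-ascent step on lambda for the dual objective lambda * (J_c - d).

    When the measured cost J_c exceeds the budget d, lambda grows, so a
    policy loss of the form (reward term - lambda * cost term) penalizes
    unsafe behaviour more strongly; when J_c is under budget, lambda decays.
    """
    global log_lambda
    j_c = float(np.mean(episode_costs))                  # Monte Carlo estimate of J_c
    lam = float(np.exp(log_lambda))
    log_lambda += lambda_lr * lam * (j_c - cost_limit)   # d/dlog(lambda) = lambda * (J_c - d)
    return float(np.exp(log_lambda))

# Example: measured costs above the budget push lambda upward.
print(dual_update(np.array([30.0, 28.0, 35.0])))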