About
Reinforcement learning (RL) has achieved remarkable results in applications ranging from autonomous driving and object manipulation to beating the best players in complex board games. Different communities, including RL, human-robot interaction (HRI), control, and formal methods (FM), have proposed multiple techniques to increase the safety, transparency, and robustness of RL. However, fundamental problems in RL remain open: exploratory and learned policies may cause unsafe situations, lack task-robustness, or be unstable. By satisfactorily addressing these problems, RL research will have a long-lasting impact and see breakthroughs on real physical systems and in human-centered environments. As an example, a collaborative mobile manipulator needs to be robust and verifiably safe around humans. This requires an integrated approach: RL to learn optimal policies for complex manipulation tasks, control techniques to ensure stability of the system, FM techniques to provide formal safety guarantees, and HRI techniques to learn from and interact with humans. The aim of this multidisciplinary workshop is to bring these communities together to:
© RL-CONFORM. All Rights Reserved.