When: October 1st, 2023
Where: Detroit, USA
Full-day program: 8:30 a.m. - 5:30 p.m.
Submission deadline: September 1st, 2023 (AoE)
Reinforcement learning (RL) has achieved remarkable results in applications ranging from autonomous driving and object manipulation to beating the best players in complex board games. However, fundamental problems in RL remain open: exploratory and learned policies may cause unsafe situations, lack task-robustness, be unstable, or require many samples during learning. Satisfactorily addressing these problems will give RL research long-lasting impact and enable breakthroughs on real physical systems and in human-centered environments. Different communities have proposed multiple techniques to increase the safety, transparency, and robustness of RL. The aim of this workshop is to provide a multidisciplinary platform to (1) jointly identify and clearly define major challenges in RL, (2) propose and debate existing approaches to ensure desired properties of learned policies from various perspectives, and (3) discuss opportunities to accelerate RL research. The themes of the workshop include (but are not limited to) RL and control theory, RL and human-robot interaction, RL and formal methods, and benchmarking of RL. In the tradition of our previous RL-CONFORM workshops, we encourage a fruitful and lively discussion among researchers that is open to anyone.