
2nd RL-CONFORM Workshop, co-located with IROS'22

Reinforcement Learning meets HRI, Control, and Formal Methods

Reinforcement learning (RL) has shown remarkable achievements in applications ranging from autonomous driving and object manipulation to beating the best players in complex board games. Different communities, including RL, human-robot interaction (HRI), control, and formal methods (FM), have proposed techniques to increase the safety, transparency, and robustness of RL. However, elementary problems of RL remain open: exploratory and learned policies may cause unsafe situations, lack task-robustness, or be unstable. By satisfactorily addressing these problems, RL research will have a long-lasting impact and see breakthroughs on real physical systems and in human-centered environments. As an example, a collaborative mobile manipulator needs to be robust and verifiably safe around humans. This requires an integrated approach: RL to learn optimal policies for complex manipulation tasks, control techniques to ensure stability of the system, FM techniques to provide formal safety guarantees, and HRI techniques to learn from and interact with humans. The aim of this multidisciplinary workshop is to bring these communities together to:

  1. Identify key challenges and opportunities related to safe and robust exploration, formal safety and stability guarantees of control systems, and safety in physical human-robot collaborative systems;
  2. Provide unique insights into how these challenges depend on the application, desired system properties, and complexity of the environment;
  3. Propose new and debate existing approaches to ensure desired properties of learned policies in a wide range of domains;
  4. Discuss existing and new benchmarks to accelerate safe and robust RL research;
  5. Disseminate the outcomes of the workshop and publish the results as a perspectives article in one of the major robotics journals.
The themes of the workshop include, but are not limited to, RL and control theory, RL and human-robot interaction, RL and formal methods, and benchmarking of RL.

Tentative Program

October 23, 2022.

Invited Speakers

Long-Horizon Reasoning for Manipulation

Jeannette Bohg is an Assistant Professor at Stanford University, US.

Making Capable Human-Robot Teams through Reinforcement Learning and Multimodal Communication

Bradley Hayes is an Assistant Professor at the University of Colorado, US.

Talk Title TBD

Georgia Chalvatzaki is an Assistant Professor at TU Darmstadt, DE.

Talk Title TBD

Nils Jansen is an Associate Professor at Radboud University Nijmegen, NL.

Talk Title TBD

Hadas Kress-Gazit is a Professor at Cornell University, US.

Talk Title TBD

Fabio Ramos is a Principal Research Scientist at NVIDIA and Professor at the University of Sydney, AU.

Talk Title TBD

Takayuki Osa is an Associate Professor at the University of Tokyo, JP.

Talk Title TBD

Scott Niekum is an Associate Professor at the University of Massachusetts Amherst, US.


Invited Panelists

There will be two interactive panel sessions: one on "Principles and understanding of RL algorithms and models" and one on "Benchmarks, implementation, and accelerating RL research".


Tesca Fitzgerald is an incoming Assistant Professor at Yale University, US.


Jens Kober is an Associate Professor at TU Delft, NL.

Call for Papers

We invite 2-page extended abstract submissions of recent work related to the themes of the workshop; preliminary work with open questions is very welcome. All accepted abstracts will be part of a short-paper presentation session held during the workshop, where authors will have the opportunity to present their line of work in a 5-minute presentation, followed by a 3-minute live Q&A session. This is a non-archival venue: there will be no formal proceedings, but we encourage authors to publish their extended abstracts on arXiv (links will be posted on the workshop's website). Abstracts may be submitted to other venues in the future.

Based on the target areas and the discussions during our RL-CONFORM workshop at last year’s IROS, topics of interest include but are not limited to:

  • Data-efficiency, sim-to-real gap, and guided exploration in RL;
  • Safety guarantees, shielding, invariant sets, and online verification;
  • Query sample-efficiency, human-robot interaction, learning from demonstration, and human feedback;
  • Existing and new benchmarks to accelerate safe and robust RL research.

Important details

  • When: October 23, 2022.
  • Where: Hybrid event co-located with IROS 2022 in Kyoto, Japan, and over Zoom.
  • Submission deadline: September 1, 2022 (AoE)
  • Notification of acceptance: September 12, 2022
  • Submission format: 2-page abstracts (plus references) of original, possibly ongoing research. Papers should follow the IROS 2022 style guidelines; more information can be found in the IROS Call for Papers.
  • To submit your work, visit the EasyChair submission website.
  • Contact: rlconform2022@easychair.org

Previous Editions of RL-CONFORM

For information about the workshop in 2021, visit: RL-CONFORM 2021

Connect with us and join the conversation!