Inductive biases encode prior knowledge about the world and play a crucial role in shaping the learning process of reinforcement learning (RL) agents. In particular, they allow practitioners to incorporate assumptions and steer the learning algorithm toward the most plausible solutions in the policy search. Inductive biases can also affect how the agent interacts with the environment to gather observations and experiences. Embedding proper inductive biases into the learning system can dramatically improve sample efficiency and generalization and, as a result, enable applications to real-world problems. The versatility of such biases makes them a relevant tool for tackling a wide range of tasks, as they can be embedded both in the problem and solution spaces. For example, identifying structural similarities among sub-tasks can promote knowledge transfer in problems such as multi-task RL. At the same time, prior knowledge about the structure of the problem, such as the agent's morphological information, can be incorporated into the processing by, e.g., using relational representations and message-passing architectures, improving interpretability and sample efficiency. Nevertheless, despite their advantages, inappropriate or poorly understood learning biases can hinder performance and limit adaptability to novel scenarios.
In the Inductive Biases in Reinforcement Learning (IBRL) workshop, we will investigate the role of inductive biases in modern RL methods, analyzing their impact on the learning procedure from various perspectives and contexts. We will assess the limitations of current methods and explore novel designs that address these gaps, working toward sample-efficient RL algorithms and more robust, general, and adaptable RL agents. We believe that diverse perspectives are essential to addressing these challenges; hence, the IBRL workshop aims to facilitate the exchange of ideas by fostering collaboration across different sub-fields of RL. To achieve this, the workshop features targeted sessions that cover a wide range of topics and promote fruitful discussion on different inductive biases and their impact on the RL sub-domains; topics include but are not limited to:
• Abstractions and structured policies: task decomposition, hierarchical RL, symmetries, etc.
• Generalization: multi-task RL, continual RL, meta RL, etc.
• Relational biases and representations: graph-based methodologies, message-passing policies, communication in MARL, composability, etc.
• Learning biases for robotics: physics priors, task-specific knowledge, geometric constraints, etc.
• Future directions: real-world applications, learning biases in RLHF, etc.
28/05/25: We have extended the deadline for submissions to 6 June AoE.
01/04/25: We are currently looking for reviewers. If you are interested, please check this form.
01/04/25: The Call for Papers is now open! Check the guidelines for more information.
• Call for Papers: 1 April 2025
• Workshop Paper Submission: 12 June 2025 AoE (extended from 30 May 2025)
• Accept/Reject Notification Date: 22 June 2025 (extended from 15 June 2025)
• Camera Ready Submission: 29 June 2025 (extended from 22 June 2025)
• Workshop Day: 5 August 2025
University of Alberta
Google DeepMind
TU Darmstadt
TU Darmstadt/ SAIROL
UT Austin
University of Würzburg
Utrecht University
SNSF Postdoc Fellow
Università della Svizzera italiana
TU Darmstadt