Decision Making in Multi-Agent Systems

2022 IROS Full-Day Workshop

Oct 27, 2022

[This workshop will be held in hybrid form]

[IROS22 registration is required]



Contribution submission deadline extended to October 3, 2022.

All authors and participants need to register through the IROS-22 registration website. Please also fill out this form.

Important Registration Dates:
  • Workshop-only option available: September 1, 2022
  • Early registration deadline: August 22, 2022
  • Regular registration for conference: August 23 - October 27, 2022
Workshop-only registration fees:
  • 10,000 JPY (~80 USD) for IEEE members
  • 1,000 JPY (~8 USD) for IEEE student members

About this workshop

Multi-agent systems are widely applicable to real-world problems, ranging from warehouse automation to environmental monitoring, autonomous driving, and even game playing. While multi-agent systems can tackle time-sensitive, complex, and large-scale problems that are intractable for single agents, it is challenging to coordinate such systems efficiently for cooperative or non-cooperative tasks. Meanwhile, decision making and reinforcement learning in single-agent settings have seen tremendous achievements in recent years; yet translating many single-agent techniques to the multi-agent domain is not straightforward. The main challenges lie in the intrinsic nature of multi-agent systems: complex interaction dynamics, constrained inter-agent communication, complex notions of optimality, heterogeneity within the system, and the potential presence of adversaries.

In this workshop, we aim to bring together multi-agent systems experts and researchers from disciplines spanning robotics, machine learning, and game theory to share recent advances, open problems, and challenges in their respective fields, and to brainstorm research thrusts that will lead toward revolutionary advances in multi-agent systems. The workshop will consist of invited talks, presentations of original research papers, and poster sessions. We hope that through our multi-faceted program and the talks of our expert speakers, we can attract the interest of the robotics community to research challenges specific to multi-agent decision making.

Call for Contributions

Efficient decision-making, whether in robotics, game playing, or social agents, is an essential and challenging problem in multi-agent systems, and contributions here have the potential to enable breakthroughs in high-performing collaborative and coordinated multi-agent teams. For successful multi-agent collaboration in a complex (and potentially adversarial) environment, agents must be able to rationalize their actions by reasoning about their teammates' decisions, capabilities, and characteristics, as well as about the environment state under partial observability. Such enhanced reasoning can also be achieved through efficient inter-agent communication, which opens a new horizon in multi-agent decision-making. These problems present an interesting opportunity to explore multi-agent systems in collaborative and mixed collaborative-competitive settings and to improve team coordination for high-quality cooperation.

The objective of this workshop is to raise awareness of the overall challenges in the multi-agent decision-making area within the IEEE Robotics and Automation Society community. The workshop will serve as a catalyst for exchanging ideas and recent methodologies in this unique and important area, which requires innovations across core topics in robotics, AI, and ML.

Topics of Interest

  • Multi-agent reinforcement learning
  • Multi-agent planning and scheduling
  • Multi-agent systems in adversarial environments
  • Bounded rationality
  • Heterogeneous team coordination
  • Emergent communication learning in multi-agent systems
  • Multi-agent systems and challenges: scalability, credit assignment, non-stationarity, etc.
  • Multi-agent decision-making and Transformers: decision-making as a sequence modeling problem
  • Multi-agent decisions and social dilemmas
  • Explainable multi-agent decision-making

Paper Submission:

We invite short papers (4-6 pages, including references) related to the topics above and the theme of multi-agent systems. Position papers, work in progress, and novel but not yet thoroughly worked-out ideas are encouraged. Submissions will be reviewed by the program committee, and we will accept papers for oral (live) presentation or a video highlight. Each paper will undergo a thorough review process and receive two high-quality reviews. Accepted papers will be made available on the workshop website after the IROS conference.

The paper should be in PDF format and use the standard IEEE IROS template: LaTeX or MS Word.

Please use the following EasyChair link for paper submissions: Submission Link.

Important Dates

Invited Speakers


Nora Ayanian

Brown University


Peter Stone

University of Texas at Austin


Amanda Prorok

University of Cambridge


Chi Jin

Princeton University


Sven Koenig

University of Southern California


Bo An

Nanyang Technological University


Huy T. Tran

University of Illinois Urbana-Champaign


Jakob Foerster

University of Oxford

Workshop schedule

All times in JST (UTC + 09)
9:00a Welcome Remarks
9:10a Invited Session 1
Peter Stone (UT Austin) - The Value of Communication in Ad Hoc Teamwork

Abstract: In ad hoc teamwork, multiple agents need to collaborate without having knowledge about their teammates or their plans a priori. A common assumption in this research area is that the agents cannot communicate. However, just as two random people may speak the same language, autonomous teammates may also happen to share a communication protocol. We consider how such a shared protocol can be leveraged, introducing a means to reason about Communication in Ad Hoc Teamwork (CAT). The goal of this work is to enable improved ad hoc teamwork by judiciously leveraging the ability of the team to communicate.

Sven Koenig (USC) - Multi-Agent Path Finding and Its Applications

Abstract: The coordination of robots and other agents is becoming increasingly important for industry. For example, on the order of one thousand robots already navigate autonomously in fulfillment centers to move inventory pods all the way from their storage locations to the picking stations that need the products they store (and vice versa). Optimal and, in some cases, even approximately optimal path planning for these robots is NP-hard, yet one must find high-quality collision-free paths for them in real time. Algorithms for such multi-agent path-finding problems have been studied in robotics and theoretical computer science for a long time but are insufficient: they are either fast but of insufficient solution quality, or of good solution quality but too slow. In this talk, I will discuss both recent cool ideas for solving such multi-agent path-finding problems (including the roles of planning and machine learning) and several of their applications, including warehousing. Our research on this topic has been funded by both NSF and Amazon Robotics.
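Not taken from the talk, but as background: one classical fast-but-suboptimal approach to multi-agent path finding is prioritized planning, where agents are planned one at a time with a space-time search that treats earlier agents' paths as moving obstacles. The sketch below is a minimal, assumption-laden illustration on a hypothetical 4-connected grid; it checks vertex conflicts only (edge/swap conflicts are omitted for brevity) and, like all prioritized planners, it is incomplete and can fail or return suboptimal paths.

```python
# Prioritized MAPF sketch: space-time BFS per agent, earlier agents reserved.
from collections import deque

def spacetime_bfs(grid, start, goal, reserved, horizon=50):
    """BFS over (cell, time) states; `reserved` maps t -> set of occupied cells."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (r, c), t, path = frontier.popleft()
        if (r, c) == goal:
            return path
        if t >= horizon:
            continue
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)):  # moves + wait
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
                continue
            if (nr, nc) in reserved.get(t + 1, set()):
                continue  # vertex conflict with a higher-priority agent
            if ((nr, nc), t + 1) not in seen:
                seen.add(((nr, nc), t + 1))
                frontier.append(((nr, nc), t + 1, path + [(nr, nc)]))
    return None

def prioritized_plan(grid, starts, goals):
    reserved, paths = {}, []
    for start, goal in zip(starts, goals):
        path = spacetime_bfs(grid, start, goal, reserved)
        if path is None:
            return None  # prioritized planning is incomplete
        for t, cell in enumerate(path):
            reserved.setdefault(t, set()).add(cell)
        for t in range(len(path), 60):          # agent parks at its goal
            reserved.setdefault(t, set()).add(path[-1])
        paths.append(path)
    return paths

# two agents swap corners of a 3x3 grid whose center cell is blocked
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
paths = prioritized_plan(grid, starts=[(0, 0), (2, 2)], goals=[(2, 2), (0, 0)])
```

Optimal solvers such as conflict-based search avoid the incompleteness of this priority ordering at the cost of much higher (worst-case exponential) planning time, which is the fast-versus-high-quality tension the talk describes.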

10:00-10:15a Q/A and Discussion for Invited Session 1
10:15a Contributed Papers
11:15a Invited Session 2
Nora Ayanian (Brown)

Abstract: TBA.

Huy Tran (UIUC)

Abstract: TBA.

12:05-12:20p Q/A and Discussion for Invited Session 2
1:00p Invited Session 3
Chi Jin (Princeton) - V-Learning --- A Simple, Efficient, Decentralized Algorithm for Multiagent RL

Abstract: A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms even in a basic scenario with finitely many states and actions. This paper resolves this challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms---V-learning, which provably learns Nash equilibria (in the two-player zero-sum setting), correlated equilibria and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that only scales with \max_i A_i, where A_i is the number of actions for the ith player. This is in sharp contrast to the size of the joint action space, which is \prod_i A_i. V-learning (in its basic form) is a new class of single-agent RL algorithms that convert any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Similar to the classical Q-learning algorithm, it performs incremental updates to the value functions. Different from Q-learning, it only maintains the estimates of V-values instead of Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
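This is not the V-learning algorithm itself, only a toy illustration of its central idea: each player independently runs a no-regret adversarial bandit algorithm (EXP3 here, a standard choice not specified by the abstract), and the time-averaged play approaches an equilibrium. The game is matching pennies, whose unique Nash equilibrium mixes both actions with probability 0.5; the parameter settings below are made up for the demo.

```python
# Two independent EXP3 learners in self-play on matching pennies.
import math
import random

class Exp3:
    def __init__(self, n_actions, gamma=0.1):
        self.n, self.gamma = n_actions, gamma
        self.w = [1.0] * n_actions

    def probs(self):
        total = sum(self.w)
        return [(1 - self.gamma) * wi / total + self.gamma / self.n
                for wi in self.w]

    def act(self, rng):
        p = self.probs()
        return rng.choices(range(self.n), weights=p)[0], p

    def update(self, action, reward, p):
        # importance-weighted reward estimate keeps the update unbiased
        self.w[action] *= math.exp(self.gamma * (reward / p[action]) / self.n)
        m = max(self.w)
        self.w = [wi / m for wi in self.w]  # renormalize to avoid overflow

rng = random.Random(0)
p1, p2 = Exp3(2), Exp3(2)
counts = [0, 0]
T = 20000
for _ in range(T):
    a1, probs1 = p1.act(rng)
    a2, probs2 = p2.act(rng)
    r1 = 1.0 if a1 == a2 else 0.0   # player 1 wins when actions match
    p1.update(a1, r1, probs1)
    p2.update(a2, 1.0 - r1, probs2)  # zero-sum: player 2 gets the complement
    counts[a1] += 1

freq = counts[0] / T  # empirical frequency of player 1's first action, ~0.5
```

Note the sample cost here depends only on each player's own action count (2 per player), not on the 2x2 joint action space; this is the per-agent scaling in \max_i A_i that the abstract contrasts with \prod_i A_i.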

Bo An (NTU) - Deep Learning for Solving Large Scale Complex Games

Abstract: Some important breakthroughs in artificial intelligence in recent years (such as Libratus and security games) can be attributed to the development of large-scale game-solving technology over the past decade. However, game-solving technology cannot yet handle large-scale complex games, so the academic community has begun to use deep learning techniques to solve such games. The talk will discuss important progress in this direction in recent years and future directions.

1:50-2:10p Q/A for Invited Session 3
2:10p Contributed Papers
3:15p Invited Session 4
Jakob Foerster (Oxford) - Opponent-Shaping and Interference in General-Sum Games

Abstract: In general-sum games, the interaction of self-interested learning agents commonly leads to collectively worst-case outcomes, such as defect-defect in the iterated prisoner's dilemma (IPD). To overcome this, some methods, such as Learning with Opponent-Learning Awareness (LOLA), shape their opponents' learning process. However, these methods are myopic since only a small number of steps can be anticipated, are asymmetric since they treat other agents as naive learners, and require the use of higher-order derivatives, which are calculated through white-box access to an opponent's differentiable learning algorithm. In this talk I will first introduce Model-Free Opponent Shaping (M-FOS), which overcomes all of these limitations. M-FOS learns in a meta-game in which each meta-step is an episode of the underlying ("inner") game. The meta-state consists of the inner policies, and the meta-policy produces a new inner policy to be used in the next episode. M-FOS then uses generic model-free optimisation methods to learn meta-policies that accomplish long-horizon opponent shaping. I will finish off the talk with our recent results for adversarial (or cooperative) cheap-talk: How can agents interfere with (or support) the learning process of other agents without being able to act in the environment?
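As background for why defect-defect is the "collectively worst-case" attractor, here is a minimal sketch using standard textbook prisoner's dilemma payoffs (not values from the talk): defection is each player's best response to any opponent action, yet mutual defection leaves both players worse off than mutual cooperation.

```python
# One-shot prisoner's dilemma: D strictly dominates C, yet (D, D) < (C, C).
payoff = {  # (row player, column player) payoffs
    ('C', 'C'): (-1, -1),
    ('C', 'D'): (-3,  0),
    ('D', 'C'): ( 0, -3),
    ('D', 'D'): (-2, -2),
}

def best_response(opponent_action):
    """Row player's best reply to a fixed column action."""
    return max('CD', key=lambda a: payoff[(a, opponent_action)][0])

assert best_response('C') == 'D' and best_response('D') == 'D'  # D dominates
assert payoff[('D', 'D')][0] < payoff[('C', 'C')][0]            # yet mutually worse
```

Naive independent learners therefore converge to (D, D); opponent-shaping methods like LOLA and M-FOS escape this by treating the opponent's learning process itself as part of the environment to be influenced.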

Amanda Prorok (Cambridge) - Machine Learning for Multi-Agent and Multi-Robot Problems

Abstract: In this talk, I first discuss how we leverage machine learning methods to generate cooperative policies for multi-robot systems. I describe how we use Graph Neural Networks (GNNs) to learn effective communication strategies for decentralized coordination. I then show how our GNN-based policy is able to achieve near-optimal performance across a variety of problems, at a fraction of the real-time computational cost. Finally, I present some pioneering real-robot experiments that demonstrate the transfer of our methods to the physical world.
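As a schematic of the mechanism (NumPy only; the graph, feature sizes, and weights below are all made up for illustration), a GNN layer lets each robot update its feature vector from its neighbors' features over the communication graph, so the resulting policy is decentralized and permutation-equivariant by construction.

```python
# One mean-aggregation message-passing layer over a robot communication graph.
import numpy as np

def gnn_layer(A, H, W):
    """h_i' = relu(mean over N(i) plus self of h_j @ W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # neighborhood sizes
    return np.maximum((A_hat / deg) @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],   # a 4-robot line graph: 0-1-2-3
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))   # per-robot feature vectors
W = rng.normal(size=(8, 8))   # shared (hypothetical) learned weights
H_next = gnn_layer(A, H, W)   # shape (4, 8): one updated vector per robot
```

Because every robot applies the same weights W to locally aggregated messages, the layer runs with only neighbor-to-neighbor communication, which is what makes GNN policies a natural fit for decentralized multi-robot coordination.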

4:05-4:25p Q/A for Invited Session 4
4:25-4:50p Concluding Remarks



Panagiotis Tsiotras


School of Aerospace Engineering
Georgia Institute of Technology


Matthew Gombolay

Assistant Professor

School of Interactive Computing
Georgia Institute of Technology


Scott Guan

Ph.D. Student

School of Aerospace Engineering
Georgia Institute of Technology


Esmaeil Seraj

Ph.D. Student

School of Electrical and Computer Engineering
Georgia Institute of Technology

Program Committee

  • Dipankar Maity, University of North Carolina
  • Daigo Shishika, George Mason University
  • Mi Zhou, Georgia Institute of Technology


   If you have any questions regarding the workshop, please contact the Workshop Organizers.


This workshop is supported by the IEEE RAS TC on Multi-Robot Systems and the IEEE RAS TC on Algorithms for Planning and Control of Robot Motion.