Decision Making in Multi-Agent Systems


2022 IROS Full-Day Workshop

Oct 27, 2022

[This workshop will be held in hybrid form]

[IROS22 registration is required]

Announcements


All authors and participants must register through the IROS-22 registration website.

Venue Info:
Workshop-only registration fees:
  • 10,000 JPY (~80 USD) for IEEE members
  • 1,000 JPY (~8 USD) for IEEE student members
Publication Opportunities:

Selected papers from the workshop will be considered for publication as part of a Research Topic in the Multi-Robot Systems section of Frontiers in Robotics and AI. Papers will undergo the journal's regular review process prior to publication. More information about submitting to this Research Topic will be provided to participants after the workshop.

About this workshop

Multi-agent systems are widely applicable to real-world problems, ranging from warehouse automation to environmental monitoring, autonomous driving, and even game playing. While multi-agent systems can tackle time-sensitive, complex, and large-scale problems that are intractable for single agents, coordinating such systems efficiently for cooperative or non-cooperative tasks remains challenging. Decision making and reinforcement learning in single-agent settings have seen tremendous progress in recent years; yet translating many single-agent techniques to the multi-agent domain is not straightforward. The main challenges are intrinsic to the nature of multi-agent systems: complex interaction dynamics, constrained inter-agent communication, complex notions of optimality, heterogeneity within the system, and the potential presence of adversaries.

In this workshop, we aim to bring together multi-agent systems experts and researchers from disciplines including robotics, machine learning, and game theory to share recent advances, open problems, and challenges in their respective fields, and to brainstorm research thrusts that will lead toward revolutionary advances in multi-agent systems. The workshop will consist of invited talks, oral presentations of original research papers, and poster sessions. Through this multi-faceted program and the talks of our expert speakers, we hope to draw the robotics community's attention to research challenges specific to multi-agent decision making.

Video Recordings

[Invited Session 1] Nora Ayanian - Multi-Agent Path Finding in Robotics

[Invited Session 1] Peter Stone - The Value of Communication in Ad Hoc Teamwork


[Invited Session 2] Sven Koenig - Multi-Agent Path Finding and Its Applications

[Invited Session 2] Huy Tran - On Utilities for Cooperative Multi-agent RL


[Invited Session 3] Chi Jin - V-Learning --- A Simple, Efficient, Decentralized Algorithm for Multiagent RL

[Invited Session 3] Bo An - Deep Learning for Solving Large Scale Complex Games


[Invited Session 4] Jakob Foerster - Opponent-Shaping and Interference in General-Sum Games

[Invited Session 4] Amanda Prorok - Machine Learning for Multi-Agent and Multi-Robot Problems


Oral Presentations

Invited Speakers


Nora Ayanian

Brown University


Peter Stone

University of Texas at Austin


Amanda Prorok

University of Cambridge


Chi Jin

Princeton University


Sven Koenig

University of Southern California


Bo An

Nanyang Technological University


Huy T. Tran

University of Illinois Urbana-Champaign


Jakob Foerster

University of Oxford

Workshop schedule



All times in JST (UTC + 09)
9:00a Welcome Remarks
9:10a Invited Session 1
9:10-9:40a
Nora Ayanian (Brown) - Why Is This Taking So Long? Multi-Agent Path Finding in Robotics

Abstract: Multi-agent path finding (MAPF) is an important decision problem for all multi-robot systems, but with a huge array of solvers available, it is difficult to decide which one to use. In this presentation, I will discuss our state-of-the-art MAPF algorithm selector, which automatically chooses a suitable solver for any given instance. I will also discuss our work toward understanding what makes some MAPF instances hard to solve, which, in turn, can improve algorithm selection.
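The idea of algorithm selection in the abstract can be sketched in a few lines: map each MAPF instance to simple features, then pick the solver whose predicted runtime is lowest. The solver names, features, and runtime predictors below are illustrative stand-ins, not the speaker's actual system.

```python
# Hypothetical sketch of MAPF algorithm selection: choose the solver
# predicted to be fastest on a given instance from simple features.

def instance_features(num_agents, grid_w, grid_h, num_obstacles):
    """Map a MAPF instance to a small feature vector."""
    area = grid_w * grid_h
    return (
        num_agents / area,      # agent density
        num_obstacles / area,   # obstacle density
        num_agents,             # raw team size
    )

def select_solver(features, predictors):
    """Pick the solver whose runtime predictor gives the lowest estimate."""
    return min(predictors, key=lambda name: predictors[name](features))

# Toy runtime predictors (stand-ins for a learned regression model).
predictors = {
    "CBS":         lambda f: 0.5 + 50.0 * f[0],  # optimal, slows with density
    "prioritized": lambda f: 0.1 + 0.01 * f[2],  # fast, possibly suboptimal
}

best = select_solver(instance_features(80, 32, 32, 100), predictors)
```

In a real selector the predictors would be a model trained on solver runtimes over a benchmark of instances; the selection rule itself stays this simple.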

9:40-10:10a
Peter Stone (UT Austin) - The Value of Communication in Ad Hoc Teamwork

Abstract: In ad hoc teamwork, multiple agents need to collaborate without having knowledge about their teammates or their plans a priori. A common assumption in this research area is that the agents cannot communicate. However, just as two random people may speak the same language, autonomous teammates may also happen to share a communication protocol. We consider how such a shared protocol can be leveraged, introducing a means to reason about Communication in Ad Hoc Teamwork (CAT). The goal of this work is enabling improved ad hoc teamwork by judiciously leveraging the ability of the team to communicate.

10:10-10:15a Q/A and Discussion for Invited Session 1
10:15a
Contributed Papers - Oral Presentations

10:15-10:27a --- Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry
10:27-10:39a --- Modular Value Function Factorization in Multi-Agent Reinforcement Learning
10:39-10:51a --- Asynchronous Actor-Critic for Multi-Agent Reinforcement Learning
10:51-11:03a --- Toward Capability-Aware Cooperation for Decentralized Planning
11:03-11:15a --- A Game-theoretic Utility Network for Multi-Agent Decisions in Adversarial Environments

11:15a Invited Session 2
11:15-11:45a
Sven Koenig (USC) - Multi-Agent Path Finding and Its Applications

Abstract: The coordination of robots and other agents is becoming more and more important for industry. For example, on the order of one thousand robots already navigate autonomously in fulfillment centers to move inventory pods all the way from their storage locations to the picking stations that need the products they store (and vice versa). Optimal and, in some cases, even approximately optimal path planning for these robots is NP-hard, yet one must find high-quality collision-free paths for them in real time. Algorithms for such multi-agent path-finding problems have been studied in robotics and theoretical computer science for a long time, but existing methods are insufficient: they are either fast but of poor solution quality, or of good solution quality but too slow. In this talk, I will discuss both recent cool ideas for solving such multi-agent path-finding problems (including the roles of planning and machine learning) and several of their applications, including warehousing. Our research on this topic has been funded by both NSF and Amazon Robotics.

11:45-12:15p
Huy Tran (UIUC) - On Utilities for Cooperative Multi-agent RL

Abstract: A challenge in cooperative multi-agent reinforcement learning is credit assignment, where it is often difficult to determine when an agent contributes to a shared reward. This problem is particularly hard for teams with many decentralized agents. In this talk, I will discuss common approaches for addressing this problem from the perspective of learning individual utility functions that agents can use for decentralized decision making. I will then discuss our recent work in this growing body of literature and how we leverage successor features to achieve promising results.
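One common family of approaches mentioned in the abstract, learning per-agent utility functions, can be illustrated with an additive (VDN-style) value factorization, where the team value is the sum of individual utilities and each agent can act greedily on its own. The tables below stand in for learned networks; this is a generic sketch, not the speaker's method.

```python
# Minimal value-decomposition sketch: per-agent utility functions enable
# decentralized action selection while training against a shared team reward.

def team_q(utilities, obs, actions):
    """Additive factorization: Q_team = sum of individual utilities."""
    return sum(utilities[i][obs[i]][a] for i, a in enumerate(actions))

def decentralized_act(utilities, obs):
    """Each agent greedily maximizes its own utility; no coordination is
    needed, because the argmax of a sum of per-agent terms decomposes
    into independent per-agent argmaxes."""
    return [max(utilities[i][obs[i]], key=utilities[i][obs[i]].get)
            for i in range(len(utilities))]

# Two agents, one observation each, two actions (toy learned values).
utilities = [
    {"o1": {"left": 0.2, "right": 0.9}},
    {"o2": {"left": 0.7, "right": 0.4}},
]
joint = decentralized_act(utilities, ["o1", "o2"])
q = team_q(utilities, ["o1", "o2"], joint)   # 0.9 + 0.7
```

The credit-assignment question is then how training the summed `team_q` against the shared reward distributes credit into each agent's table.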

12:15-12:20p Q/A and Discussion for Invited Session 2
1:00p Contributed Papers - Poster Session
2:00p Invited Session 3
2:00-2:30p
Chi Jin (Princeton) - V-Learning --- A Simple, Efficient, Decentralized Algorithm for Multiagent RL

Abstract: A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms even in the basic scenario with finitely many states and actions. This paper resolves the challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms, V-learning, which provably learns Nash equilibria (in the two-player zero-sum setting) and correlated and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that scales only with \max_i A_i, where A_i is the number of actions for the ith player. This is in sharp contrast to the size of the joint action space, which is \prod_i A_i. V-learning (in its basic form) is a new class of single-agent RL algorithms that converts any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Similar to the classical Q-learning algorithm, it performs incremental updates to the value functions. Unlike Q-learning, it maintains only the estimates of V-values instead of Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
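The two ingredients the abstract names, incremental V-value updates plus an adversarial-bandit rule over each agent's own actions, can be sketched from a single agent's point of view. The EXP3-like weights and the simple 1/t step size below are illustrative choices, not the paper's exact algorithm or learning rates; the point is that the agent stores no Q-table over joint actions, so its state scales with its own action count A_i rather than \prod_i A_i.

```python
import math
import random

# Schematic single-agent view of a V-learning-style update: keep only
# V-values (no joint-action Q-table) and choose the agent's own action
# with an adversarial-bandit rule (EXP3-like weights here).

class VLearner:
    def __init__(self, actions, eta=0.1):
        self.actions = actions
        self.eta = eta
        self.V = {}    # state -> value estimate
        self.w = {}    # state -> per-action bandit weights (own actions only)
        self.t = {}    # state -> visit count

    def policy(self, s):
        w = self.w.setdefault(s, {a: 1.0 for a in self.actions})
        z = sum(w.values())
        return {a: w[a] / z for a in w}

    def act(self, s):
        p = self.policy(s)
        return random.choices(list(p), weights=list(p.values()))[0]

    def update(self, s, a, r, v_next):
        self.t[s] = self.t.get(s, 0) + 1
        alpha = 1.0 / self.t[s]   # incremental step size (illustrative)
        self.V[s] = (1 - alpha) * self.V.get(s, 0.0) + alpha * (r + v_next)
        # Bandit update: importance-weight the observed return for the
        # chosen action, treating other agents as part of the environment.
        p = self.policy(s)
        self.w[s][a] *= math.exp(self.eta * (r + v_next) / p[a])
```

Running one such learner per agent, with no shared state, is what the abstract means by letting all agents run V-learning independently.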

2:30-3:00p
Bo An (NTU) - Deep Learning for Solving Large Scale Complex Games

Abstract: Some important breakthroughs in artificial intelligence in recent years (such as Libratus and security games) can be attributed to the development of large-scale game-solving technology over the past decade. However, this technology cannot handle large-scale complex games, and the academic community has begun to use deep learning techniques to solve such games. The talk will discuss important recent progress in this direction and future directions.

3:00-3:10p Q/A and Discussion for Invited Session 3
3:15p Invited Session 4
3:15-3:45p
Jakob Foerster (Oxford) - Opponent-Shaping and Interference in General-Sum Games

Abstract: In general-sum games, the interaction of self-interested learning agents commonly leads to collectively worst-case outcomes, such as defect-defect in the iterated prisoner's dilemma (IPD). To overcome this, some methods, such as Learning with Opponent-Learning Awareness (LOLA), shape their opponents' learning process. However, these methods are myopic since only a small number of steps can be anticipated, are asymmetric since they treat other agents as naive learners, and require the use of higher-order derivatives, which are calculated through white-box access to an opponent's differentiable learning algorithm. In this talk I will first introduce Model-Free Opponent Shaping (M-FOS), which overcomes all of these limitations. M-FOS learns in a meta-game in which each meta-step is an episode of the underlying ("inner") game. The meta-state consists of the inner policies, and the meta-policy produces a new inner policy to be used in the next episode. M-FOS then uses generic model-free optimisation methods to learn meta-policies that accomplish long-horizon opponent shaping. I will finish off the talk with our recent results for adversarial (or cooperative) cheap talk: How can agents interfere with (or support) the learning process of other agents without being able to act in the environment?
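The meta-game structure the abstract describes can be made concrete on a one-shot prisoner's dilemma: the meta-state is the pair of inner policies, each meta-step plays an inner episode, the opponent learns naively, and the meta-policy outputs the shaper's next inner policy. The payoffs, the naive opponent rule, and the copycat meta-policy in the usage note are illustrative stand-ins, not M-FOS itself.

```python
# Toy sketch of an opponent-shaping meta-game on a one-shot prisoner's
# dilemma. Inner policies are cooperation probabilities in [0, 1].

# Payoffs for (my move, their move); D strictly dominates C here.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1}

def inner_episode(p_me, p_opp):
    """Expected payoff of one inner episode given cooperation probs."""
    return sum(PAYOFF[(m, o)]
               * (p_me if m == "C" else 1 - p_me)
               * (p_opp if o == "C" else 1 - p_opp)
               for m in "CD" for o in "CD")

def naive_opponent_update(p_opp, p_me, lr=0.5):
    """Naive learner: nudge policy toward whichever move pays more
    against the shaper's current policy."""
    gain_c = PAYOFF[("C", "C")] * p_me + PAYOFF[("C", "D")] * (1 - p_me)
    gain_d = PAYOFF[("D", "C")] * p_me + PAYOFF[("D", "D")] * (1 - p_me)
    step = lr if gain_c > gain_d else -lr
    return min(1.0, max(0.0, p_opp + step))

def meta_step(meta_policy, p_me, p_opp):
    """One meta-step: play an inner episode, let the opponent learn,
    then let the meta-policy emit the shaper's next inner policy."""
    reward = inner_episode(p_me, p_opp)
    p_opp_next = naive_opponent_update(p_opp, p_me)
    p_me_next = meta_policy(p_me, p_opp_next)
    return p_me_next, p_opp_next, reward
```

M-FOS would train `meta_policy` with generic model-free RL to maximize long-horizon meta-return; a trivial stand-in like `lambda p_me, p_opp: p_opp` (copy the opponent) already shows how shaping operates over policies rather than actions.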

3:45-4:15p
Amanda Prorok (Cambridge) - Machine Learning for Multi-Agent and Multi-Robot Problems

Abstract: In this talk, I first discuss how we leverage machine learning methods to generate cooperative policies for multi-robot systems. I describe how we use Graph Neural Networks (GNNs) to learn effective communication strategies for decentralized coordination. I then show how our GNN-based policy is able to achieve near-optimal performance across a variety of problems, at a fraction of the real-time computational cost. Finally, I present some pioneering real-robot experiments that demonstrate the transfer of our methods to the physical world.
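The GNN-based communication the abstract describes boils down to rounds of local message passing: each robot aggregates its neighbors' feature vectors and mixes them into its own state, so information propagates only along communication links. The mixing weights and features below are illustrative, not a trained policy.

```python
# Minimal sketch of one round of graph message passing for decentralized
# coordination, in the spirit of GNN communication policies.

def message_passing_round(features, adjacency, w_self=0.5, w_msg=0.5):
    """features: per-robot vectors; adjacency: per-robot neighbor lists.
    Each robot averages its neighbors' features and blends them with its
    own, a mean-aggregation step a GNN layer would apply with learned
    weights instead of the fixed scalars used here."""
    new = []
    for i, f in enumerate(features):
        nbrs = adjacency[i]
        if nbrs:
            msg = [sum(features[j][k] for j in nbrs) / len(nbrs)
                   for k in range(len(f))]
        else:
            msg = [0.0] * len(f)   # isolated robot: no incoming messages
        new.append([w_self * f[k] + w_msg * msg[k] for k in range(len(f))])
    return new

feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adj = [[1], [0, 2], [1]]   # line graph: robot 0 - robot 1 - robot 2
out = message_passing_round(feats, adj)
```

Stacking k such rounds lets each robot's decision depend on robots up to k hops away, which is what makes the resulting policy both decentralized and communication-aware.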

4:15-4:25p Q/A and Discussion for Invited Session 4
4:25-4:50p Concluding Remarks

Contributed Papers

Oral Presentation
[#1] Oliver Järnefelt and Carlo D'Eramo, Modular Value Function Factorization in Multi-Agent Reinforcement Learning
[Link to Paper] [Link to Poster]
[#2] Charles Jin, Zhang-Wei Hong and Martin Rinard, Toward Capability-Aware Cooperation for Decentralized Planning
[Link to Paper]
[#3] Yuchen Xiao, Weihao Tan and Christopher Amato, Asynchronous Actor-Critic for Multi-Agent Reinforcement Learning
[Link to Paper]
[#4] Qin Yang and Ramviyas Parasuraman, A Game-theoretic Utility Network for Cooperative Multi-Agent Decisions in Adversarial Environments [Link to Paper] [Link to Poster]
[#5] Vincenzo Polizzi, Robert Hewitt, Javier Hidalgo-Carrió, Jeff Delaune and Davide Scaramuzza, Data-Efficient Collaborative Decentralized Thermal-Inertial Odometry [Link to Paper] [Link to Poster]

Posters
[#6] Niels van Duijkeren, Luigi Palmieri, Ralph Lange and Alexander Kleiner, An Industrial Perspective on Multi-Agent Decision Making for Interoperable Robot Navigation following the VDA5050 Standard [Link to Paper] [Link to Poster]
[#7] Simon Schaefer, Luigi Palmieri, Lukas Heuer, Niels van Duijkeren, Ruediger Dillmann, Sven Koenig and Alexander Kleiner, Towards Reliable Benchmarking for Multi-Robot Planning in Realistic, Cluttered and Complex Environments [Link to Paper] [Link to Poster]
[#8] Dipanwita Guhathakurta, Fatemeh Rastgar, M Aditya Sharma, Madhava Krishna and Arun Kumar Singh, GPU Acceleration of Joint Multi-Agent Trajectory Optimization [Link to Paper] [Link to Poster]
[#9] Bilal Maassarani, Kevin Garanger and Eric Feron, The TetraheDrone: A Structured Fractal HFW-VTOL UAS [Link to Paper]
[#10] Khaled Wahba and Wolfgang Hönig, Distributed Geometric and Optimization-based Control of Multiple Quadrotors for Cable-Suspended Payload Transport [Link to Paper] [Link to Poster]
[#11] Ziyi Zhou and Ye Zhao, Leveraging Quadrupedal Robots in Heterogeneous Multi-Robot Teaming with Run-Time Disturbances [Link to Paper] [Link to Poster]
[#12] Brian Reily, Bradley Woosley, John Rogers and Christopher Reardon, Multi-Agent Perimeter Monitoring for Uncertainty Reduction [Link to Paper] [Link to Poster]
[#13] Harel Biggie and Christoffer Heckman, Towards Reducing Human Supervision in Fielded Multi-Agent Systems
[Link to Paper] [Link to Poster]
[#14] Seth Karten and Katia Sycara, Intent-Grounded Compositional Communication through Mutual Information in Multi-Agent Teams [Link to Paper] [Link to Poster]
[#15] Mingi Jeong, Julien Blanchet, Joseph Gatto and Alberto Quattrini Li, A Hierarchical Multi-ASV Control System Framework for Adversarial ASV Detainment [Link to Paper] [Link to Poster]
[#16] Andreas Kokkas, Micael Couceiro, Ivan Doria, Andre Araújo and Jose Rosa, Towards the Next Generation of Multi-Robot Systems for Solar Power Plants Inspection and Maintenance [Link to Paper] [Link to Poster]

Organizers


Panagiotis Tsiotras

Professor

School of Aerospace Engineering
Georgia Institute of Technology


Matthew Gombolay

Assistant Professor

School of Interactive Computing
Georgia Institute of Technology


Scott Guan

Ph.D. Student

School of Aerospace Engineering
Georgia Institute of Technology


Esmaeil Seraj

Ph.D. Student

School of Electrical and Computer Engineering
Georgia Institute of Technology


Program Committee


  • Dipankar Maity, University of North Carolina
  • Daigo Shishika, George Mason University
  • Mi Zhou, Georgia Institute of Technology

Contact

If you have any questions regarding the workshop, please contact the Workshop Organizers.

Sponsors

This workshop is supported by the IEEE RAS TC on Multi-Robot Systems and the IEEE RAS TC on Algorithms for Planning and Control of Robot Motion.