Berducci, L., Yang, S., Mangharam, R., & Grosu, R. (2024). Learning Adaptive Safety for Multi-Agent Systems. In 2024 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2859–2865). https://doi.org/10.1109/ICRA57147.2024.10611037
E191-01 - Research Unit Cyber-Physical Systems
-
Published in:
2024 IEEE International Conference on Robotics and Automation (ICRA)
-
ISBN:
9798350384574
-
Date (published):
2024
-
Event name:
2024 IEEE International Conference on Robotics and Automation (ICRA 2024)
Event date:
13 May 2024 - 17 May 2024
-
Event place:
Yokohama, Japan
-
Number of Pages:
7
-
Keywords:
Reinforcement Learning; Artificial Intelligence; Safety-Critical Systems
Abstract:
Ensuring safety in dynamic multi-agent systems is challenging due to limited information about the other agents. Control Barrier Functions (CBFs) show promise for safety assurance, but current methods make strong assumptions about other agents and often rely on manual tuning to balance safety, feasibility, and performance. In this work, we address the problem of adaptive safe learning for multi-agent systems with CBFs. We show how emergent behaviour can be profoundly influenced by the CBF configuration, highlighting the necessity of a responsive and dynamic approach to CBF design. We present ASRL, a novel adaptive safe RL framework that fully automates the optimization of policy and CBF coefficients to enhance safety and long-term performance through reinforcement learning. By directly interacting with the other agents, ASRL learns to cope with diverse agent behaviours and keeps cost violations below a desired limit. We evaluate ASRL in a multi-robot system and in competitive multi-agent racing, against learning-based and control-theoretic approaches. We empirically demonstrate the efficacy of ASRL and assess its generalization and scalability to out-of-distribution scenarios.
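To make the abstract's central idea concrete, the sketch below shows a standard single-constraint CBF safety filter for single-integrator dynamics, solved in closed form. The obstacle geometry, the barrier function h, and the helper name `cbf_filter` are illustrative assumptions, not part of the paper; the `alpha` argument stands in for the kind of CBF coefficient that the paper proposes to tune automatically with RL rather than by hand.

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, r, alpha):
    """Minimally modify a nominal action u_nom so that the barrier
    h(x) = ||x - x_obs||^2 - r^2 stays nonnegative for single-integrator
    dynamics x' = u. This is the closed-form solution of the
    one-constraint CBF-QP:
        min ||u - u_nom||^2   s.t.   grad_h(x) . u >= -alpha * h(x).
    alpha is the class-K coefficient: small values yield conservative
    behaviour, large values allow approaching the safety boundary faster.
    (Illustrative sketch; not the paper's ASRL implementation.)"""
    h = np.dot(x - x_obs, x - x_obs) - r**2
    grad_h = 2.0 * (x - x_obs)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:
        # Nominal action already satisfies the CBF constraint.
        return u_nom
    # Otherwise project u_nom onto the constraint boundary.
    return u_nom - (slack / (grad_h @ grad_h)) * grad_h

# Example: heading straight at an obstacle gets deflected/slowed,
# while a far-away agent keeps its nominal action unchanged.
x = np.array([1.5, 0.0])          # agent close to obstacle at origin
u_safe = cbf_filter(x, np.array([-1.0, 0.0]), np.zeros(2), 1.0, alpha=1.0)
```

Manually tuning `alpha` per scenario is exactly the burden the abstract points out: a single fixed value trades off safety against performance for one agent population, which motivates learning it adaptively.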