Roy, S., Mehmood, U., Grosu, R., Smolka, S. A., Stoller, S. D., & Tiwari, A. (2020). Learning Distributed Controllers for V-Formation. arXiv. https://doi.org/10.48550/arXiv.2006.00680
E191-01 - Research Unit of Cyber-Physical Systems
-
ArXiv ID:
2006.00680
-
Date (published):
2020
-
Number of Pages:
10
-
Preprint Server:
arXiv
-
Keywords:
Model Predictive Control; V-Formation; Distributed Neural Controller; Deep Neural Network; Supervised Learning
-
Abstract:
We show how a high-performing, fully distributed and symmetric neural V-formation controller can be synthesized from a Centralized MPC (Model Predictive Control) controller using Deep Learning. This result is significant as we also establish that under very reasonable conditions, it is impossible to achieve V-formation using a deterministic, distributed, and symmetric controller. The learning process we use for the neural V-formation controller is significantly enhanced by CEGkR, a Counterexample-Guided k-fold Retraining technique we introduce, which extends prior work in this direction in important ways. Our experimental results show that our neural V-formation controller generalizes to a significantly larger number of agents than for which it was trained (from 7 to 15), and exhibits substantial speedup over the MPC-based controller. We use a form of statistical model checking to compute confidence intervals for our neural V-formation controller's convergence rate and time to convergence.
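
The following is a minimal, hypothetical Python sketch of the kind of counterexample-guided retraining loop the abstract describes: a policy is fit by supervised learning to labels from a centralized controller, poorly performing rollouts are collected as counterexamples, relabeled by the oracle, and folded back into the training set. The mpc_oracle, toy dynamics, fitness measure, and thresholds here are stand-ins chosen for illustration, not the paper's actual V-formation model or CEGkR implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def mpc_oracle(state):
    # Stand-in label source: a simple damping law that drives velocity toward
    # zero (an assumption for illustration, not the paper's centralized MPC).
    return -0.5 * state[2:]

def step(state, action, dt=0.1):
    # Toy double-integrator surrogate: state = [pos_x, pos_y, vel_x, vel_y].
    pos, vel = state[:2], state[2:]
    return np.concatenate([pos + dt * vel, vel + dt * action])

def rollout(policy, steps=50):
    state = rng.normal(size=4)
    traj = []
    for _ in range(steps):
        traj.append(state.copy())
        state = step(state, policy(state))
    return np.array(traj)

def badness(traj):
    # Toy fitness: distance of the final state from the origin.
    return float(np.linalg.norm(traj[-1]))

# Initial training set: random states labeled by the (stand-in) oracle.
X = rng.normal(size=(500, 4))
Y = np.array([mpc_oracle(s) for s in X])
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)

for _ in range(3):  # k retraining rounds
    net.fit(X, Y)
    policy = lambda s: net.predict(s.reshape(1, -1))[0]
    # Collect counterexamples: rollouts where the learned policy performs poorly.
    counterexamples = []
    for _ in range(20):
        traj = rollout(policy)
        if badness(traj) > 0.5:  # arbitrary threshold for illustration
            counterexamples.extend(traj)
    if not counterexamples:
        break
    # Relabel the counterexample states with the oracle and retrain on the union.
    CX = np.array(counterexamples)
    X = np.vstack([X, CX])
    Y = np.vstack([Y, np.array([mpc_oracle(s) for s in CX])])
```

In the paper's setting, the oracle would be the centralized MPC controller and the learned policy a symmetric neural controller evaluated per agent on local observations; the sketch above only mirrors the overall loop structure.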
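
The abstract's final sentence refers to statistical model checking of the learned controller's convergence rate. One common way to obtain such a confidence interval from repeated simulation runs is a Clopper-Pearson bound on the success proportion; the sketch below assumes a placeholder run_episode simulator and an arbitrary success probability, and is not the paper's experimental setup.

```python
import random
from scipy.stats import beta

def run_episode():
    # Placeholder simulator: report whether the formation converged in one run.
    # The 0.93 success probability is an assumption purely for illustration.
    return random.random() < 0.93

n = 1000                                   # number of simulation runs
k = sum(run_episode() for _ in range(n))   # number of convergent runs
alpha = 0.05                               # 95% confidence level

# Clopper-Pearson (exact binomial) interval for the convergence rate.
low = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
high = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
print(f"estimated convergence rate {k / n:.3f}, 95% CI [{low:.3f}, {high:.3f}]")
```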