Mandl, P., Jaumann, F., Unterreiner, M., Gräber, T., Klinger, F., Edelmann, J., & Plöchl, M. (2024). Speed Control in the Presence of Road Obstacles: A Comparison of Model Predictive Control and Reinforcement Learning. In 16th International Symposium on Advanced Vehicle Control (pp. 91–97). Springer. https://doi.org/10.1007/978-3-031-70392-8_14
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/205585
-
dc.description.abstract
The paper compares two optimal control methods — Reinforcement Learning and Model Predictive Control — for adaptive speed control in the presence of road obstacles to enhance ride comfort. Both methods use a model for training or prediction and a reward or cost function to achieve a desired control objective. Using the same quarter-car model and objective function for both methods, differences in the planned speed profiles, the optimality of the control objective, and the computational time are analysed through simulations over a series of cosine-shaped road bumps.
en
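The abstract describes both controllers sharing a quarter-car model and a comfort-oriented objective, evaluated over cosine-shaped road bumps. The following minimal Python sketch is not from the paper: all parameter values, the fixed evaluation window, and the RMS sprung-mass-acceleration cost are illustrative assumptions. It only shows how a quarter-car pass over a cosine bump at different constant speeds could be simulated and scored for ride comfort.

# Hypothetical sketch: quarter-car ride over a cosine-shaped bump at constant speed.
# All parameter values below are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

# Quarter-car parameters (assumed values)
m_s, m_u = 400.0, 50.0      # sprung / unsprung mass [kg]
k_s, c_s = 20000.0, 1500.0  # suspension stiffness [N/m] and damping [Ns/m]
k_t = 180000.0              # tire stiffness [N/m]

# Cosine-shaped bump: height h over length L, starting at position x0 [m]
h, L, x0 = 0.05, 1.0, 5.0

def road(x):
    """Road elevation z_r as a function of longitudinal position x."""
    inside = (x >= x0) & (x <= x0 + L)
    return np.where(inside, 0.5 * h * (1.0 - np.cos(2.0 * np.pi * (x - x0) / L)), 0.0)

def dynamics(t, y, v):
    """Quarter-car states [z_s, z_s_dot, z_u, z_u_dot]; constant speed v maps x = v * t."""
    z_s, zd_s, z_u, zd_u = y
    z_r = road(v * t)
    f_susp = k_s * (z_u - z_s) + c_s * (zd_u - zd_s)
    f_tire = k_t * (z_r - z_u)
    return [zd_s, f_susp / m_s, zd_u, (f_tire - f_susp) / m_u]

def comfort_cost(v, t_end=3.0):
    """RMS sprung-mass acceleration over a fixed time window, a simple ride-comfort proxy."""
    sol = solve_ivp(dynamics, (0.0, t_end), [0.0, 0.0, 0.0, 0.0],
                    args=(v,), max_step=1e-3, dense_output=True)
    t = np.linspace(0.0, t_end, 3000)
    y = sol.sol(t)
    acc = np.array([dynamics(ti, yi, v)[1] for ti, yi in zip(t, y.T)])
    return np.sqrt(np.mean(acc ** 2))

for v in (5.0, 10.0, 20.0):  # crossing speeds [m/s]
    print(f"v = {v:5.1f} m/s  ->  RMS sprung-mass acceleration = {comfort_cost(v):.3f} m/s^2")

In the paper's setting, such a comfort measure would enter the shared objective function that the MPC minimises over its prediction horizon and that defines the reward signal for the Reinforcement Learning agent; the sketch above only evaluates fixed crossing speeds rather than planning a speed profile.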
dc.language.iso
en
-
dc.relation.ispartofseries
Lecture Notes in Mechanical Engineering
-
dc.rights.uri
http://creativecommons.org/licenses/by/4.0/
-
dc.subject
Longitudinal Control
en
dc.subject
Model Predictive Control
en
dc.subject
Reinforcement Learning
en
dc.subject
Ride Comfort
en
dc.title
Speed Control in the Presence of Road Obstacles: A Comparison of Model Predictive Control and Reinforcement Learning