<div class="csl-bib-body">
<div class="csl-entry">Brunnbauer, A. (2021). <i>Model-based deep Reinforcement learning for autonomous racing</i> [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2021.86588</div>
</div>
-
dc.identifier.uri
https://doi.org/10.34726/hss.2021.86588
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/17644
-
dc.description.abstract
Reinforcement learning (RL) is currently one of the most active machine learning research fields. RL algorithms have been successfully deployed in many real-world application domains, such as autonomous vehicles, intelligent production sites, and finance. Despite the many advances made in recent years, fundamental challenges remain to be addressed before RL can be reliably applied in industrial settings. One of these problems is the sheer amount of training data needed to train deep RL agents. Model-based RL is a branch of RL algorithms that learn a model of the agent or its environment, which is then leveraged to generate new training data or to plan ahead. Model-based approaches are expected to reduce the amount of training data that must be sampled from an environment, down to a level that allows RL algorithms to be trained in environments where it is hard to generate sufficient data. The goal of this work is to investigate the advantages that model-based RL algorithms offer. To this end, we adapt an existing model-based RL algorithm and compare its performance with that of common model-free RL algorithms that mark the current state of the art. The application domain in which we conduct the experiments is autonomous racing. In our experiments, agents are trained to minimize lap times in time-trial races. The experiments evaluate algorithms that were trained in simulation with respect to their ability to be deployed in the real world. We also compare the flexibility of the algorithms in producing comparable results on other, unseen race tracks. Finally, we investigate the training behavior of the different algorithms. The experiments are performed both in a simulation environment, implemented specifically for this work, and on a prototyping platform based on a small remote-controlled car.
en
dc.language
English
-
dc.language.iso
en
-
dc.rights.uri
http://rightsstatements.org/vocab/InC/1.0/
-
dc.subject
model-based reinforcement learning
de
dc.subject
Autonomes Fahren
de
dc.subject
Selbstlernende Regelungssysteme
de
dc.subject
model-based reinforcement learning
en
dc.subject
deep reinforcement learning
en
dc.subject
learning-based control
en
dc.subject
autonomous racing
en
dc.title
Model-based deep Reinforcement learning for autonomous racing
en
dc.type
Thesis
en
dc.type
Hochschulschrift
de
dc.rights.license
In Copyright
en
dc.rights.license
Urheberrechtsschutz
de
dc.identifier.doi
10.34726/hss.2021.86588
-
dc.contributor.affiliation
TU Wien, Österreich
-
dc.rights.holder
Axel Brunnbauer
-
dc.publisher.place
Wien
-
tuw.version
vor
-
tuw.thesisinformation
Technische Universität Wien
-
dc.contributor.assistant
Hasani, Ramin
-
tuw.publication.orgunit
E191 - Institut für Computer Engineering
-
dc.type.qualificationlevel
Diploma
-
dc.identifier.libraryid
AC16222281
-
dc.description.numberOfPages
75
-
dc.thesistype
Diplomarbeit
de
dc.thesistype
Diploma Thesis
en
dc.rights.identifier
In Copyright
en
dc.rights.identifier
Urheberrechtsschutz
de
tuw.advisor.staffStatus
staff
-
tuw.assistant.staffStatus
staff
-
tuw.assistant.orcid
0000-0002-9889-5222
-
item.languageiso639-1
en
-
item.mimetype
application/pdf
-
item.openairecristype
http://purl.org/coar/resource_type/c_bdcc
-
item.fulltext
with Fulltext
-
item.openairetype
master thesis
-
item.grantfulltext
open
-
item.openaccessfulltext
Open Access
-
item.cerifentitytype
Publications
-
crisitem.author.dept
E191-01 - Forschungsbereich Cyber-Physical Systems