Keywords: model-based reinforcement learning; deep reinforcement learning; learning-based control; autonomous racing
Reinforcement learning (RL) is currently one of the most active fields of machine learning research. RL algorithms have been successfully deployed in many real-world application domains, such as autonomous vehicles, intelligent production sites, and finance. Despite the many advances made in recent years, fundamental challenges remain before RL can be reliably applied in industrial settings. One of these challenges is the sheer amount of training data needed to train deep RL agents. Model-based RL is a branch of RL in which an agent learns a model of its environment and then leverages that model to generate additional training data or to plan ahead. Model-based approaches are expected to reduce the amount of training data that must be sampled from the environment, down to a level that allows RL to be applied in environments where generating sufficient data is hard. The goal of this work is to investigate the advantages that model-based RL algorithms offer. To this end, we adapt an existing model-based RL algorithm and compare its performance with that of common model-free RL algorithms that represent the current state of the art. The experiments are conducted in the domain of autonomous racing, where agents are trained to minimize lap times in time-trial races. The experiments evaluate agents trained in simulation with respect to their ability to be deployed in the real world, and compare how well the algorithms generalize to other, unseen race tracks. Finally, we investigate the training behavior of the different algorithms. The experiments are performed both in a simulation environment implemented specifically for this work and on a prototyping platform based on a small remote-controlled car.
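To make the model-based idea concrete: the agent learns a one-step model of the environment from real transitions and then reuses that model to generate extra, simulated updates. The following is only an illustrative sketch of this scheme, using a tabular Dyna-Q loop on a toy chain environment; the environment, hyperparameters, and all names are hypothetical and are not taken from this work or its experiments.

```python
import random

def dyna_q(num_episodes=50, planning_steps=10, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q on a toy deterministic chain (illustrative only)."""
    # States 0..4; action 0 moves left, action 1 moves right.
    # Reaching state 4 yields reward 1 and ends the episode.
    n_states, n_actions, goal = 5, 2, 4

    def step(s, a):
        s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
        return s2, float(s2 == goal), s2 == goal

    q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (s, a) -> (s', r): the learned one-step model

    for _ in range(num_episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Direct RL update from the real transition
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            # Record the transition in the model
            model[(s, a)] = (s2, r)
            # Planning: extra updates from transitions replayed via the model,
            # reducing how much real data must be sampled
            for _ in range(planning_steps):
                ps, pa = random.choice(list(model))
                ps2, pr = model[(ps, pa)]
                q[ps][pa] += alpha * (pr + gamma * max(q[ps2]) - q[ps][pa])
            s = s2
    return q
```

The `planning_steps` parameter controls how many model-generated updates follow each real step; increasing it trades computation for real-environment samples, which is precisely the trade-off this work examines.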