<div class="csl-bib-body">
<div class="csl-entry">Tubeuf, C., Birkelbach, F., Maly, A., Krause, M., & Hofmann, R. (2023). Enabling Reinforcement Learning for Flexible Energy Systems Through Transfer Learning on a Digital Twin Platform. In <i>36th International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems (ECOS 2023)</i> (pp. 3218–3228). https://doi.org/10.52202/069564-0289</div>
</div>
-
dc.identifier.uri
http://hdl.handle.net/20.500.12708/190174
-
dc.description.abstract
Pumped storage power plants compensate for fluctuations in the electricity grid and improve its stability through grid services. By increasing the flexibility of pumped storage power plants, they could compensate for fluctuations to an even greater extent and thus accelerate the shift to a fully renewable energy system. One way to do this is to accelerate the switching between operating modes within pumped storage stations. To this end, we propose to apply reinforcement learning (RL) to control the start and stop processes within a hydraulic machine. RL has been shown to outperform traditional optimal control methods; however, safety concerns are stalling research on applying RL to process control in safety-sensitive energy systems. To enable the safe and reliable transfer of the algorithm's learned strategy from a virtual test environment to the physical asset, we present a concept for applying RL via a digital twin platform. To demonstrate this concept, we set up a simulation model for the operating behavior during the start and stop processes of a lab-scale pump-turbine and validate it with experimental data. On this virtual representation, we test the application of RL to optimally control the blow-out process within pump-turbines. We present the structure of the deep Q-learning (DQN) RL algorithm we trained and the necessary problem formulations. Our results show that the DQN algorithm is suitable for finding the optimal operating strategy to blow out the pump-turbine runner. We discuss the viability of our approach for the control of a pump-turbine and outline the next steps to test RL on a lab-scale model machine.
en
dc.language.iso
en
-
dc.subject
Digital Twin
en
dc.subject
Hydro Power
en
dc.subject
Process Control
en
dc.subject
Pump-Turbine
en
dc.subject
Reinforcement Learning
en
dc.subject
Transfer Learning
en
dc.title
Enabling Reinforcement Learning for Flexible Energy Systems Through Transfer Learning on a Digital Twin Platform
en
dc.type
Inproceedings
en
dc.type
Konferenzbeitrag
de
dc.relation.isbn
978-1-7138-7481-2
-
dc.relation.doi
10.52202/069564
-
dc.description.startpage
3218
-
dc.description.endpage
3228
-
dc.type.category
Full-Paper Contribution
-
tuw.booktitle
36th International Conference on Efficiency, Cost, Optimization, Simulation and Environmental Impact of Energy Systems (ECOS 2023)
-
tuw.researchTopic.id
E3
-
tuw.researchTopic.name
Climate Neutral, Renewable and Conventional Energy Supply Systems