Title: Reinforcement learning ohne Backpropagation in Neural Regulatory Networks : eine erste Abschätzung : a preliminary assessment
Other Titles: Reinforcement learning without backpropagation in neural regulatory networks
Language: English
Authors: Lemmel, Julian
Qualification level: Diploma
Advisor: Grosu, Radu
Issue Date: 2020
Citation:
Lemmel, J. (2020). Reinforcement learning ohne Backpropagation in Neural Regulatory Networks : eine erste Abschätzung : a preliminary assessment [Diploma Thesis, Technische Universität Wien]. reposiTUm. https://doi.org/10.34726/hss.2020.81325
Number of Pages: 42
Abstract:
Reinforcement Learning (RL) aims at creating controllers for discrete and continuous problems and was initially inspired by neuroscience. However, the most successful methods rely on backpropagation to calculate the gradients of the loss function. The backpropagation algorithm is considered biologically implausible, suggesting that it will not suffice when striving for human-like learning abilities. Neuroscience has brought forth various models of synaptic plasticity derived from observations of isolated neurons. Such models could serve as alternatives to the ubiquitous backpropagation algorithm for computing changes to network parameters. Neural Regulatory Networks (NRNs) are special recurrent neural networks whose inner states evolve according to dynamics derived from biological observations. In this thesis, a novel framework based on state-of-the-art RL techniques and using NRNs is introduced and evaluated on a cartpole balancing task. Two methods of incorporating learning rules based on models of synaptic plasticity are investigated: the custom-gradients method replaces the true gradient computed by backpropagation with a biologically plausible synaptic plasticity rule, while the plasticity-dynamics method leaves the gradients unchanged but introduces additional plasticity dynamics that act throughout the entire unrolling of network states. Both methods were tested with three learning rules: Hebb's rule, Oja's rule, and the BCM rule. The results suggest that training can be accelerated when using the BCM rule.
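The three plasticity rules named in the abstract admit compact, local update equations. The following is a minimal sketch for a single linear neuron, not the thesis's actual implementation; the learning rate `eta` and the single-sample BCM threshold `theta` are illustrative choices.

```python
import numpy as np

def hebb(w, x, y, eta=0.01):
    # Hebb's rule: strengthen weights in proportion to
    # correlated pre- (x) and post-synaptic (y) activity.
    return w + eta * y * x

def oja(w, x, y, eta=0.01):
    # Oja's rule: Hebbian growth plus a decay term that
    # keeps the weight vector from growing without bound.
    return w + eta * y * (x - y * w)

def bcm(w, x, y, theta, eta=0.01):
    # BCM rule: the sliding threshold theta makes the update
    # depressive for y < theta and potentiating for y > theta.
    return w + eta * x * y * (y - theta)

# Toy usage: one linear neuron with output y = w . x.
rng = np.random.default_rng(0)
w = rng.normal(size=3)
x = rng.normal(size=3)
y = float(w @ x)
theta = y ** 2  # BCM threshold is commonly a running average of y^2;
                # here a single sample stands in for that average.
w_hebb = hebb(w, x, y)
w_oja = oja(w, x, y)
w_bcm = bcm(w, x, y, theta)
```

Each rule uses only quantities local to the synapse (pre- and post-synaptic activity and the current weight), which is what makes them candidates for replacing the non-local backpropagation signal.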
DOI: 10.34726/hss.2020.81325
Library ID: AC15760934
Organisation: E191 - Institut für Computer Engineering
Publication Type: Thesis
Appears in Collections: Thesis
Items in reposiTUm are protected by copyright, with all rights reserved, unless otherwise indicated.