Reinforcement Learning with Recurrent Neural Networks

Please use this identifier to cite or link to this resource: https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2008112111
Title: Reinforcement Learning with Recurrent Neural Networks
Author(s): Schäfer, Anton Maximilian
First referee: Prof. Dr. Martin Riedmiller
Second referee: Dr. Hans-Georg Zimmermann
Abstract: Controlling a high-dimensional dynamical system with continuous state and action spaces in a partially unknown environment, such as a gas turbine, is a challenging problem. So far, hard-coded rules based on expert knowledge and experience have often been used. Machine learning techniques, including reinforcement learning, are generally only applied to sub-problems. One reason is that most standard RL approaches still fail to produce satisfactory results in such complex environments. Moreover, they are rarely data-efficient, which is crucial for most real-world applications, where the available amount of data is limited. This thesis presents recurrent neural reinforcement learning approaches for identifying and controlling dynamical systems in discrete time. They form a novel connection between recurrent neural networks (RNN) and reinforcement learning (RL) techniques. RNN are used because they allow the identification of dynamical systems in the form of high-dimensional, non-linear state space models and have been shown to be very data-efficient. In addition, a proof is given of their universal approximation capability for open dynamical systems. Moreover, it is pointed out that, in contrast to an often-cited statement, they are well able to capture long-term dependencies. As a first step towards reinforcement learning, it is shown that RNN can accurately map and reconstruct (partially observable) MDP. In the so-called hybrid RNN approach, the resulting inner state of the network is then used as a basis for standard RL algorithms. The further developed recurrent control neural network combines system identification and the determination of an optimal policy in one network. In contrast to most RL methods, it determines the optimal policy directly, without making use of a value function. The methods are tested on several standard benchmark problems. In addition, they are applied to different kinds of gas turbine simulations of industrial scale.
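The abstract describes RNN used as non-linear state space models for system identification. The following is a minimal hypothetical sketch (not the thesis code) of such a model unfolded in discrete time, with state update s_{t+1} = tanh(A s_t + B x_t) and output y_t = C s_t; the dimensions and random weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of an RNN as a non-linear state space model:
#   s_{t+1} = tanh(A * s_t + B * x_t),   y_t = C * s_t
# x_t: external input (e.g. control action), s_t: hidden state,
# y_t: predicted system output. Weights here are random placeholders;
# in practice they would be fitted to observed system trajectories.

rng = np.random.default_rng(0)
state_dim, input_dim, output_dim = 4, 2, 1

A = rng.normal(scale=0.3, size=(state_dim, state_dim))   # state transition
B = rng.normal(scale=0.3, size=(state_dim, input_dim))   # input coupling
C = rng.normal(scale=0.3, size=(output_dim, state_dim))  # output read-out

def rollout(inputs):
    """Unfold the RNN over an input sequence and return predicted outputs."""
    s = np.zeros(state_dim)
    outputs = []
    for x in inputs:
        s = np.tanh(A @ s + B @ x)  # non-linear state update
        outputs.append(C @ s)       # observable prediction
    return np.array(outputs)

xs = rng.normal(size=(10, input_dim))  # a sequence of 10 input vectors
ys = rollout(xs)
print(ys.shape)  # (10, 1)
```

In the hybrid RNN approach sketched in the abstract, the hidden state s_t produced by such an unfolded network would serve as the input state for a standard RL algorithm.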
URL: https://repositorium.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2008112111
Keywords: Optimal Control; Recurrent Neural Networks; Reinforcement Learning; State Space Reconstruction; System Identification
Issue date: 20-Nov-2008
Appears in collections: FB06 - E-Dissertationen

Files in this item:
File | Description | Size | Format
E-Diss839_thesis.pdf | Presentation format | 5.23 MB | Adobe PDF

