Reinforcement Learning with Recurrent Neural Networks

Please use this identifier to cite or link to this item:
https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2008112111
Full metadata record
DC Field                  Value [Language]
dc.contributor.advisor    Prof. Dr. Martin Riedmiller
dc.creator                Schäfer, Anton Maximilian
dc.date.accessioned       2010-01-30T14:52:58Z
dc.date.available         2010-01-30T14:52:58Z
dc.date.issued            2008-11-20T16:23:42Z
dc.date.submitted         2008-11-20T16:23:42Z
dc.identifier.uri         https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2008112111
dc.description.abstract   Controlling a high-dimensional dynamical system with continuous state and action spaces in a partially unknown environment, such as a gas turbine, is a challenging problem. So far, hard-coded rules based on expert knowledge and experience are often used, and machine learning techniques, including reinforcement learning, are generally applied only to sub-problems. One reason is that most standard RL approaches still fail to produce satisfactory results in such complex environments. Moreover, they are rarely data-efficient, which is crucial for most real-world applications, where the amount of available data is limited. This thesis presents recurrent neural reinforcement learning approaches for identifying and controlling dynamical systems in discrete time. They form a novel connection between recurrent neural networks (RNN) and reinforcement learning (RL) techniques. RNN are used because they allow dynamical systems to be identified in the form of high-dimensional, non-linear state space models and have proven to be very data-efficient. In addition, a proof of their universal approximation capability for open dynamical systems is given, and it is shown that, contrary to an often cited claim, they are well able to capture long-term dependencies. As a first step towards reinforcement learning, it is shown that RNN can map and reconstruct (partially observable) MDP well. In the so-called hybrid RNN approach, the resulting inner state of the network is then used as a basis for standard RL algorithms. The recurrent control neural network, developed as a further step, combines system identification and the determination of an optimal policy in a single network; in contrast to most RL methods, it determines the optimal policy directly, without using a value function. The methods are tested on several standard benchmark problems and are additionally applied to different kinds of gas turbine simulations of industrial scale. [eng]
dc.language.iso           eng
dc.subject                Optimal Control
dc.subject                Recurrent Neural Networks
dc.subject                Reinforcement Learning
dc.subject                State Space Reconstruction
dc.subject                System Identification
dc.subject.ddc            004 - Informatik (computer science) [ger]
dc.title                  Reinforcement Learning with Recurrent Neural Networks [eng]
dc.type                   Dissertation oder Habilitation (dissertation or habilitation) [doctoralThesis]
thesis.location           Osnabrück
thesis.institution        Universität
thesis.type               Dissertation [thesis.doctoral]
thesis.date               2008-10-31T12:00:00Z
elib.elibid               839
elib.marc.edt             jost
elib.dct.accessRights     a
elib.dct.created          2008-11-09T23:42:46Z
elib.dct.modified         2008-11-20T16:23:42Z
dc.contributor.referee    Dr. Hans-Georg Zimmermann
dc.subject.dnb            27 - Mathematik (mathematics) [ger]
dc.subject.dnb            28 - Informatik, Datenverarbeitung (computer science, data processing) [ger]
dc.subject.ccs            I.2 - ARTIFICIAL INTELLIGENCE [eng]
dc.subject.ccs            I.6 - SIMULATION AND MODELING [eng]
vCard.ORG                 FB6 [ger]
Appears in Collections: FB06 - E-Dissertationen

Files in This Item:
File                   Description                                 Size     Format
E-Diss839_thesis.pdf   Präsentationsformat (presentation format)   5.23 MB  Adobe PDF


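The abstract above describes recurrent neural networks used as state space models of an open dynamical system, roughly s_{t+1} = f(s_t, u_t) with observations y_t = g(s_t), whose inner state either feeds a standard RL algorithm (the hybrid RNN approach) or is mapped directly to actions (the recurrent control neural network). The following Python sketch only illustrates how such a model is unfolded over time; it is not the implementation from the thesis, and all dimensions, weight matrices, and names (rollout, state_dim, and so on) are hypothetical choices made for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a small system (the gas turbine models in the thesis are larger).
state_dim, action_dim, obs_dim = 8, 2, 3

# Randomly initialised weights stand in for matrices that would be learned
# from recorded data during system identification.
A = rng.normal(scale=0.1, size=(state_dim, state_dim))   # state-to-state transition
B = rng.normal(scale=0.1, size=(state_dim, action_dim))  # action (control input) coupling
C = rng.normal(scale=0.1, size=(obs_dim, state_dim))     # observation read-out

def rollout(actions, s0=None):
    """Unfold the recurrent state space model s_{t+1} = tanh(A s_t + B u_t), y_t = C s_t."""
    s = np.zeros(state_dim) if s0 is None else s0
    observations = []
    for u in actions:
        s = np.tanh(A @ s + B @ u)   # open dynamical system driven by the action u_t
        observations.append(C @ s)   # reconstructed (partial) observation y_t
    return np.array(observations), s

# The final inner state is what a hybrid approach would hand to a standard RL
# algorithm; a control network would instead map it directly to the next action.
actions = rng.normal(size=(10, action_dim))
obs, inner_state = rollout(actions)
print(obs.shape, inner_state.shape)   # -> (10, 3) (8,)

In an identification setting the matrices A, B, and C would be fitted to observed input/output sequences, for example by backpropagation through time, rather than drawn at random as in this sketch.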