Symbolic Regression Methods for Reinforcement Learning
Created by W.Langdon from gp-bibliography.bib Revision:1.8051
@Misc{DBLP:journals/corr/abs-1903-09688,
  author       = "Jiri Kubalik and Jan Zegklitz and Erik Derner and
                  Robert Babuska",
  title        = "Symbolic Regression Methods for Reinforcement Learning",
  howpublished = "arXiv",
  year         = "2019",
  volume       = "abs/1903.09688",
  month        = "22 " # mar,
  keywords     = "genetic algorithms, genetic programming, reinforcement
                  learning, value iteration, policy iteration, symbolic
                  regression, nonlinear optimal control",
  URL          = "http://arxiv.org/abs/1903.09688",
  size         = "12 pages",
  abstract     = "Reinforcement learning algorithms can be used to optimally
                  solve dynamic decision-making and control problems. With
                  continuous-valued state and input variables, reinforcement
                  learning algorithms must rely on function approximators to
                  represent the value function and policy mappings. Commonly
                  used numerical approximators, such as neural networks or
                  basis function expansions, have two main drawbacks: they are
                  black-box models offering no insight into the mappings
                  learned, and they require significant trial-and-error tuning
                  of their meta-parameters. In this paper, we propose a new
                  approach to constructing smooth value functions by means of
                  symbolic regression. We introduce three off-line methods for
                  finding value functions based on a state transition model:
                  symbolic value iteration, symbolic policy iteration, and a
                  direct solution of the Bellman equation. The methods are
                  illustrated on four nonlinear control problems: velocity
                  control under friction, one-link and two-link pendulum
                  swing-up,",
}
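The symbolic value iteration mentioned in the abstract can be pictured as an ordinary value-iteration loop in which each Bellman backup over sampled states is followed by fitting an analytic approximator to the updated targets. The Python sketch below is an illustrative mock-up, not the authors' implementation: the toy friction dynamics, the quadratic reward, the discount factor, the discretised action set, and the fixed polynomial basis standing in for a genuine symbolic-regression engine are all assumptions made for this example.

    # Minimal sketch of "symbolic value iteration": Bellman backups on sampled
    # states, each followed by an analytic fit of the value function.
    # All models and parameters below are illustrative assumptions, not taken
    # from the paper.
    import numpy as np

    GAMMA = 0.95                            # discount factor (assumed)
    ACTIONS = np.linspace(-1.0, 1.0, 11)    # discretised input set (assumed)

    def step(x, u):
        """Toy velocity dynamics with friction (illustrative, not the paper's model)."""
        return 0.9 * x + 0.5 * u

    def reward(x, u):
        """Quadratic cost expressed as a reward (illustrative)."""
        return -(x ** 2) - 0.1 * (u ** 2)

    def features(x):
        """Fixed polynomial basis standing in for candidate symbolic expressions."""
        x = np.atleast_1d(x)
        return np.stack([np.ones_like(x), x, x ** 2, x ** 3, x ** 4], axis=-1)

    def fit_value(xs, targets):
        """Least-squares fit of the basis to the Bellman targets; a real
        symbolic-regression run would also search the expression structure."""
        w, *_ = np.linalg.lstsq(features(xs), targets, rcond=None)
        return lambda x: features(x) @ w

    # Symbolic value iteration loop (sketch).
    xs = np.linspace(-2.0, 2.0, 200)                       # sampled states
    V = lambda x: np.zeros_like(np.atleast_1d(x))          # V_0 = 0
    for _ in range(50):
        # Bellman backup: V_{k+1}(x) = max_u [ r(x,u) + gamma * V_k(f(x,u)) ]
        q = np.stack([reward(xs, u) + GAMMA * V(step(xs, u)) for u in ACTIONS])
        V = fit_value(xs, q.max(axis=0))                   # fit analytic V_{k+1}

    # Greedy policy read-out from the learned value function.
    def policy(x):
        return ACTIONS[np.argmax([reward(x, u) + GAMMA * V(step(x, u))
                                  for u in ACTIONS])]

    print("V(0.5) =", V(0.5).item(), " greedy u(0.5) =", float(policy(0.5)))

In the setting described by the abstract, the fit_value step would be replaced by a genetic-programming / symbolic-regression search over expression structures, which is what yields the smooth, interpretable value functions the paper advocates over black-box approximators.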