Benchmarking state-of-the-art symbolic regression
Created by W. Langdon from gp-bibliography.bib Revision:1.7964
@Article{Zegklitz:GPEM:,
  author =    "Jan Zegklitz and Petr Posik",
  title =     "Benchmarking state-of-the-art symbolic regression",
  journal =   "Genetic Programming and Evolvable Machines",
  year =      "2021",
  volume =    "22",
  number =    "1",
  pages =     "5--33",
  month =     mar,
  keywords =  "genetic algorithms, genetic programming, Symbolic
               regression, Linear regression, Comparative study",
  ISSN =      "1389-2576",
  URL =       "https://link.springer.com/article/10.1007/s10710-020-09387-0",
  DOI =       "doi:10.1007/s10710-020-09387-0",
  abstract =  "Symbolic regression (SR) is a powerful method for
               building predictive models from data without assuming
               any model structure. Traditionally, genetic programming
               (GP) was used as the SR engine. However, for these
               purely evolutionary methods it was quite hard to even
               accommodate the function to the range of the data and
               the training was consequently inefficient and slow.
               Recently, several SR algorithms emerged which employ
               multiple linear regression. This allows the algorithms
               to create models with relatively small error right from
               the beginning of the search. Such algorithms are
               claimed to be orders of magnitude faster than SR
               algorithms based on classic GP. However, a systematic
               comparison of these algorithms on a common set of
               problems is still missing and there is no basis on
               which to decide which algorithm to use. In this paper
               we conceptually and experimentally compare several
               representatives of such algorithms: GPTIPS, FFX, and
               EFS. We also include GSGP-Red, which is an enhanced
               version of geometric semantic genetic programming, an
               important algorithm in the field of SR. They are
               applied as off-the-shelf, ready-to-use techniques,
               mostly using their default settings. The methods are
               compared on several synthetic SR benchmark problems as
               well as real-world ones ranging from civil engineering
               to aerodynamics and acoustics. Their performance is
               also related to the performance of three conventional
               machine learning algorithms: multiple regression,
               random forests and support vector regression. The
               results suggest that across all the problems, the
               algorithms have comparable performance. We provide
               basic recommendations to the user regarding the choice
               of the algorithm.",
}
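The abstract compares the SR systems against three conventional machine learning baselines used "off-the-shelf" with default settings. The sketch below is only an illustration of that kind of baseline setup, not the authors' experimental code: it fits the three named baselines with scikit-learn defaults on a placeholder synthetic dataset (an assumption, not one of the paper's benchmarks) and reports test RMSE.

    # Minimal sketch (assumed setup, not the paper's code): the three
    # conventional baselines named in the abstract, fitted with
    # scikit-learn default settings on placeholder synthetic data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.svm import SVR
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(500, 5))
    y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1]) + 0.1 * rng.normal(size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    baselines = {
        "multiple regression": LinearRegression(),
        "random forest": RandomForestRegressor(random_state=0),
        "support vector regression": SVR(),
    }
    for name, model in baselines.items():
        model.fit(X_train, y_train)
        rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
        print(f"{name}: test RMSE = {rmse:.3f}")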
Genetic Programming entries for Jan Zegklitz, Petr Posik