It is a great pleasure to present this special section for the Genetic Programming and Evolvable Machines journal on Integrating Numerical Optimization Methods with Genetic Programming (GP). In traditional GP the search space includes all possible syntactic expressions that can be generated from the set of functions and terminals. The structure of this search space depends on the program representation and the set of primitives (functions and terminals) used by a particular GP system. The search operators modify individuals at the level of syntax, and, given that syntactic expressions tend to be fragile, their effect on behavior is usually non-local and difficult to predict. This has led researchers to explore other search operators or program representations.

Numerical optimization methods can be exploited to help GP explore the search space more efficiently. In general, the representation and adaptation (i.e., learning) of real-valued parameters is still an open issue in GP at large, where most of the work has focused on what Koza termed ephemeral random constants, or even fixed constants, with only a few examples in other directions [1,2,3,4,5]. This area is particularly relevant in modern machine learning, where powerful computing platforms such as GPUs are highly optimized for precisely such tasks. This special section encouraged submissions that aimed at bridging the gap between these two forms of search and optimization.

This special section presents two excellent papers in this vein. The first contribution, by Kommenda et al., is entitled “Parameter Identification for Symbolic Regression using Nonlinear Least Squares”. The authors explore one of the most direct and promising ways of hybridizing numerical optimization and GP: using a nonlinear optimizer to tune the parameters of symbolic regression models. They employ the numerical method as a local search operator, discuss its strengths and weaknesses, and show that the synergy between the two methods is strong enough to produce state-of-the-art results.
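As a rough illustration of this kind of hybrid, the numeric constants of a fixed model structure (as evolved by GP) can be refined by a damped Gauss-Newton (Levenberg-Marquardt-style) nonlinear least-squares loop. The sketch below is illustrative only, under assumed names; it is not the authors' implementation:

```python
import numpy as np

def tune_constants(model, theta0, X, y, iters=100, eps=1e-6):
    """Refine the numeric constants of a fixed symbolic model by damped
    Gauss-Newton (Levenberg-Marquardt-style) nonlinear least squares.
    A toy stand-in for using a numerical optimizer as a local search
    operator inside GP; all names here are illustrative."""
    theta = np.asarray(theta0, dtype=float)
    lam = 1e-3  # damping factor

    def loss(t):
        return float(np.sum((model(t, X) - y) ** 2))

    for _ in range(iters):
        r = model(theta, X) - y  # residual vector
        # finite-difference Jacobian of the residuals w.r.t. theta
        J = np.empty((len(X), len(theta)))
        for j in range(len(theta)):
            t = theta.copy()
            t[j] += eps
            J[:, j] = (model(t, X) - y - r) / eps
        # damped Gauss-Newton step
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(theta)), J.T @ r)
        if loss(theta - step) < loss(theta):
            theta, lam = theta - step, lam * 0.5  # accept: reduce damping
        else:
            lam *= 10.0  # reject: increase damping, retry with smaller step
    return theta

# Toy fixed structure f(x) = c0 * sin(c1 * x); in a GP hybrid, the
# structure would come from the evolved tree and only c would be tuned.
model = lambda c, x: c[0] * np.sin(c[1] * x)
X = np.linspace(0.0, 3.0, 50)
y = 2.0 * np.sin(1.5 * X)  # synthetic target data
theta = tune_constants(model, [1.0, 1.0], X, y)
```

In such a hybrid, GP's variation operators keep searching over model structures while the numerical loop above refines each candidate's constants before fitness evaluation.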

The second paper, by Póvoa et al., is entitled “Unimodal optimization using a genetic-programming-based method with periodic boundary conditions” and presents a completely different way to hybridize GP and numerical optimization. Here, GP is used to solve numerical optimization problems, extending the traditional application domain of the paradigm. The proposed method uses niching and periodic domain constraints, and is shown to be applicable to both multimodal and unimodal problems. Particularly interesting are the results of experiments carried out on the CEC 2015 benchmarks [6], which show that the proposal compares favorably with the state of the art.
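For readers unfamiliar with the term, a periodic domain constraint can be pictured as treating each coordinate of the search domain as a circle: a candidate solution that leaves one boundary re-enters from the opposite one. A minimal sketch of such a wrapping operator (illustrative only, not the exact formulation used in the paper):

```python
def wrap_periodic(x, lo, hi):
    """Map a coordinate back into [lo, hi) as if the search domain were
    periodic: a point leaving one boundary re-enters from the other.
    Illustrative sketch, not the operator from the paper."""
    return lo + (x - lo) % (hi - lo)

print(wrap_periodic(5.5, 0.0, 5.0))   # → 0.5
print(wrap_periodic(-0.5, 0.0, 5.0))  # → 4.5
```

Compared with clipping to the boundary, wrapping avoids piling candidates up at the domain edges, which matters when search operators frequently generate out-of-bounds points.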

These two papers give a complementary view of how GP can be hybridized with numerical optimization methods and problems. On the one hand, the paper by Kommenda et al. is precisely the type of work that was envisioned when this special section was originally proposed, enhancing GP with powerful optimization methods to achieve state-of-the-art performance. On the other hand, the paper by Póvoa et al. provides a complementary approach, exploiting the extraordinary flexibility of GP to turn the hybridization on its head by solving numerical optimization tasks directly with GP.

We hope this special section will help pave the way for a smoother interplay between GP and numerical optimizers: new representations, algorithms, and methodologies that can enhance GP systems by exploiting numerical optimization techniques to improve convergence, reduce computational cost, and achieve state-of-the-art performance in real-world machine learning challenges.