Created by W.Langdon from gp-bibliography.bib Revision:1.8564
In this thesis, we developed methods for meta-learning the loss function of deep neural networks. In particular, we first introduced a method for meta-learning symbolic model-agnostic loss functions, called Evolved Model-Agnostic Loss (EvoMAL). This method consolidates recent advancements in loss function learning and enables the development of interpretable loss functions on commodity hardware. Through empirical and theoretical analysis, we uncovered patterns in the learned loss functions, which in turn inspired the development of Sparse Label Smoothing Regularization (SparseLSR), a significantly faster and more memory-efficient way to perform label smoothing regularization. Second, we challenged the conventional notion that a loss function must be static by developing Adaptive Loss Function Learning (AdaLFL), a method for meta-learning adaptive loss functions. Lastly, we developed Neural Procedural Bias Meta-Learning (NPBML), a task-adaptive few-shot learning method that meta-learns the parameter initialization, optimizer, and loss function simultaneously.
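For context on the label smoothing regularization that SparseLSR speeds up, a minimal NumPy sketch of the conventional (dense) formulation is below. The function names and implementation are illustrative, not taken from the thesis:

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over a 1-D logit vector."""
    shifted = logits - logits.max()
    return shifted - np.log(np.exp(shifted).sum())

def smoothed_cross_entropy(logits, target, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution.

    The one-hot target is mixed with the uniform distribution over the
    K classes: q_k = (1 - eps) * [k == target] + eps / K.
    """
    k = logits.shape[0]
    q = np.full(k, eps / k)   # spread eps/K probability mass on every class
    q[target] += 1.0 - eps    # remaining mass on the true class
    return -np.dot(q, log_softmax(logits))

logits = np.array([2.0, 0.0, 0.0])  # confident prediction of class 0
plain = smoothed_cross_entropy(logits, target=0, eps=0.0)    # standard CE
smoothed = smoothed_cross_entropy(logits, target=0, eps=0.1)
```

The dense version above materializes eps/K target mass on all K classes; a sparse reformulation like SparseLSR aims to avoid that per-class cost, which is what makes the dense form slow and memory-hungry for large label spaces.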
Genetic Programming entries for Christian Raymond