Created by W.Langdon from gp-bibliography.bib Revision:1.7954
In this dissertation, I extend the Markov Brain framework [1, 2], which consists of evolvable networks of probabilistic and deterministic logic gates, to include a novel gate type: feedback gates. Feedback gates use internally generated feedback to learn how to navigate a complex task, learning in the same manner a natural organism would. The evolutionary path the Markov Brains take to develop this ability provides insight into the evolution of learning. I show that feedback gates allow Markov Brains to evolve the ability to learn to navigate environments by relying solely on their experiences. In fact, the probabilistic logic tables of these gates adapt to the point where an input almost always results in a single output; the gates become nearly deterministic. Further, I show that the mechanism the gates use to adapt their probability tables is robust enough to allow the agents to successfully complete the task in novel environments. This ability to generalize to the environment means that the Markov Brains with feedback gates that emerge from evolution learn autonomously, that is, without external feedback. In the context of machine learning, this allows algorithms to be trained based solely on how they interact with the environment. Once a Markov Brain can generalize, it is able to adapt to changing sets of stimuli, i.e. to reversal learn. Machines that are able to reversal learn are no longer limited to solving a single task. Lastly, I show that the neuro-correlate phi is increased through neuroplasticity using Markov Brains augmented with feedback gates. The measurement of phi is based on Information Integration Theory [3, 4] and quantifies the agent's ability to integrate information.
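The abstract's core mechanism, a probabilistic logic table that internal feedback drives toward determinism, can be illustrated with a minimal sketch. This is not the dissertation's implementation; the class name, the learning-rate parameter, and the update rule (nudge the sampled output's probability up on positive feedback, down on negative, then renormalize) are all illustrative assumptions.

```python
import random

class FeedbackGate:
    """Illustrative sketch (not the dissertation's code) of a probabilistic
    logic gate whose table adapts via internally generated feedback."""

    def __init__(self, n_inputs=2, n_outputs=2, lr=0.1):
        self.lr = lr  # feedback step size (assumed parameter)
        self.n_outputs = n_outputs
        # one probability row per input pattern, initially uniform
        self.table = {i: [1.0 / n_outputs] * n_outputs
                      for i in range(2 ** n_inputs)}

    def fire(self, pattern):
        # sample an output according to the current probability row
        return random.choices(range(self.n_outputs),
                              weights=self.table[pattern])[0]

    def feedback(self, pattern, output, positive):
        # nudge the sampled output's probability up or down, keep it
        # positive, then renormalize the row
        row = self.table[pattern]
        delta = self.lr if positive else -self.lr
        row[output] = max(1e-6, row[output] + delta)
        total = sum(row)
        self.table[pattern] = [p / total for p in row]

# Repeated feedback that rewards output 1 drives the row for input
# pattern 0 toward an almost deterministic mapping.
random.seed(0)
gate = FeedbackGate()
for _ in range(200):
    out = gate.fire(0)
    gate.feedback(0, out, positive=(out == 1))
```

Under this toy update rule the row for pattern 0 concentrates nearly all its probability mass on output 1, mirroring the abstract's observation that an input "almost always results in a single output."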
https://www.proquest.com/openview/8b8f279d2f9fd205e9d5255581c0dea9/1.pdf
Thesis Advisor: Arend Hintze
Genetic Programming entries for Leigh Sheneman