abstract = "The overall goal of evolving algorithms for femtocells
is the continuous on-line evolution of the femtocell
pilot power control algorithm to optimise coverage.
Two aspects of intelligence, communication and
learning, are used to increase the complexity of the
input and the behaviour. In this initial
study we investigate how to evolve more complex
behaviour in decentralised control algorithms by
changing the representation of communication and
learning. Communication is addressed by allowing each
femtocell to identify its neighbours and take their
values into account when deciding whether to increase
or decrease pilot power. Learning is considered in two
variants: the use
of input parameters and the implementation of a
built-in reinforcement procedure, which allows
learning during the simulation in addition to the
execution of fixed commands. The experiments compare
the new representations, expressed as different
terminal symbols in a grammar. The results show
differences between the communication and learning
combinations, and that the best solution uses both
communication and learning.",