The problem of neural network representations for evolution of recurrent networks is similar to our problem of encoding brick structures. From early naive representations the concept of `developmental' or `cellular' grammatical encodings emerged [9,78,60]. They increase the efficiency of the GA by reducing the search space, eliminating redundancies and meaningless codes, and providing meaningful recombination operators.
There is a developmental stage in our experiments, because the genotype builds a phenotype by laying down bricks one at a time, and fails whenever the indicated position is invalid: a previous brick already occupies it, it lies out of bounds, or the maximum number of bricks has been reached (see eqs. 2.10, 2.11 and fig. 2.31). Each failed brick results in the deletion of a subtree.
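This developmental mapping can be illustrated with a minimal sketch. The grid size, brick budget, and node layout below are illustrative assumptions, not the representation actually used in our experiments; the point is only the control flow, in which an invalid brick silently discards its entire subtree.

```python
# Hedged sketch of a developmental genotype-to-phenotype mapping:
# the genotype tree is traversed depth-first, each node tries to lay
# one brick, and a failed placement drops the node's whole subtree.

GRID = 10        # assumed board size (illustrative)
MAX_BRICKS = 8   # assumed brick budget (illustrative)

class Node:
    """One genotype node: a brick at an offset from its parent brick."""
    def __init__(self, dx, dy, children=()):
        self.dx, self.dy = dx, dy
        self.children = list(children)

def develop(root, origin=(0, 0)):
    """Build the phenotype (a set of occupied cells) from a genotype tree."""
    occupied = set()

    def place(node, parent):
        x, y = parent[0] + node.dx, parent[1] + node.dy
        # A brick fails if its cell is taken, lies out of bounds, or the
        # brick budget is exhausted; failure deletes the entire subtree.
        if ((x, y) in occupied
                or not (0 <= x < GRID and 0 <= y < GRID)
                or len(occupied) >= MAX_BRICKS):
            return
        occupied.add((x, y))
        for child in node.children:
            place(child, (x, y))

    place(root, origin)
    return occupied
```

For example, a child node whose offset lands on its parent's cell fails, and any bricks encoded below it never appear in the phenotype, which is exactly the behavior the `ghost limbs' alternative discussed below would relax.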
An interesting alternative, never tested, would have been to delete illegal bricks from the phenotype but not the genotype, thus allowing for ghost limbs that could reappear later on. In any case, our representation has no means to represent subroutines or iterations other than interchanging genetic material via recombination.
There have been studies of modularity in recurrent neural net representations [61,5], aiming to improve the GA with automatic creation of libraries of reusable subcomponents. Structural representations should ultimately aim at the same objective: finding useful complex blocks and incorporating them in the making of large structures with inherent modularities. Recent work by Hornby et al. utilizes an L-system generative representation to evolve walking virtual creatures.