# High-Capacity Recursive Neural Coding

## Introduction

There has been a major research effort in the last several years to circumvent
the obstacles facing Artificial Intelligence by developing alternative
cognitive paradigms, such as neural nets and statistical methods. These
approaches have had a number of major successes in particular domains.
However, there is one aspect in which symbolic AI still retains a significant
advantage over other methods -- namely, its **ability to store and manipulate
complex data structures**, which it accomplishes through the use of pointers
and symbol tables. In contrast, neural networks operate on data that must be
organized into vectors of pre-determined size.

In the 1980s it became clear that such connectionist systems lacked the flexibility needed for higher-level cognitive tasks. These limitations fueled the controversy over representational adequacy and were used as a basis for attacking the entire field (Fodor and Pylyshyn, 1988; Pinker and Prince, 1988).

RAAM (Pollack, 1990) used back-propagation to learn to represent compositional structures within a fixed-width space. Although it has found wide use in demonstrations and feasibility studies of what networks could do with symbolic representations, little was known about the ultimate capacity of RAAMs.
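The core mechanism can be sketched as a tiny auto-associator. The dimensions, weights, and function names below are hypothetical stand-ins (random weights instead of ones learned by back-propagation), meant only to show how two fixed-width child codes compress into one parent code of the same width, so a whole tree fits in a single vector:

```python
import numpy as np

# Minimal RAAM-style sketch with hypothetical, untrained weights:
# two n-dimensional children compress into one n-dimensional parent,
# and the decoder expands a parent back into two children.
rng = np.random.default_rng(0)
n = 4  # fixed code width, chosen in advance

W_enc = rng.normal(scale=0.5, size=(n, 2 * n))  # encoder: 2n -> n
W_dec = rng.normal(scale=0.5, size=(2 * n, n))  # decoder: n -> 2n

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(left, right):
    """Compress two child codes into one fixed-width parent code."""
    return sigmoid(W_enc @ np.concatenate([left, right]))

def decode(parent):
    """Expand a parent code back into (left, right) child codes."""
    out = sigmoid(W_dec @ parent)
    return out[:n], out[n:]

# A nested structure ((a b) c) is encoded by recursive compression;
# the result is still a single n-dimensional vector.
a, b, c = rng.random(n), rng.random(n), rng.random(n)
code = encode(encode(a, b), c)
left, right = decode(code)
```

In a trained RAAM the encoder and decoder are the two halves of one auto-associative network, so `decode(encode(l, r))` approximately recovers `(l, r)`; with the random weights above only the shapes, not the reconstruction, are meaningful.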

Several logical conundrums in the original definition of RAAM have prevented it from scaling up. We believe these conundrums can now be resolved by observing that the **natural dynamics of a representation decoder are such that repeated applications of decoding drive the state of the network onto a fractal attractor**.
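This convergence onto a fractal attractor can be illustrated with a standard iterated function system rather than an actual RAAM decoder. In the sketch below (an assumption-laden stand-in, not the paper's network), each "decoding step" is a contraction map; iterating from an arbitrary starting point pulls the state onto the attractor, here the Sierpinski triangle generated by the chaos game:

```python
import numpy as np

# Chaos-game illustration: repeatedly applying contractive maps
# (analogous to repeated decoding) drives any starting point onto
# a fractal attractor -- here the Sierpinski triangle.
rng = np.random.default_rng(1)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

def ifs_step(p):
    """One contraction: move halfway toward a randomly chosen vertex."""
    return (p + vertices[rng.integers(3)]) / 2.0

p = np.array([10.0, -7.0])  # arbitrary start, far from the attractor
for _ in range(100):        # the transient dies off geometrically
    p = ifs_step(p)

# After the transient, the state stays inside the attractor's
# bounding region (the unit triangle's bounding box) forever.
```

Because each map halves the distance to a point inside the unit square, the influence of the starting state shrinks by a factor of 2 per step, which is why the iteration forgets where it began and ends up on a set determined only by the maps themselves (in a RAAM, only by the decoder weights).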

This attractor, related to Barnsley's theory of iterated function systems (IFS), is governed only by the weights of the network. The natural dynamics of the decoder were in conflict with the somewhat arbitrary "terminal test" used to control termination of the decoding process. By terminating the decoding process when the decoded values lie on the attractor, it becomes possible to represent extremely large sets of data structures in small, fixed-dimensional neural codes.
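One way such an attractor-based terminal test could work is sketched below. Everything here is an illustrative assumption (the attractor is again the Sierpinski triangle, sampled by the chaos game, and the tolerance `eps` is arbitrary): a decoded state is declared terminal when it falls within a small distance of a precomputed sample of the attractor, rather than being judged by an ad hoc threshold on its values:

```python
import numpy as np

# Hypothetical attractor-based terminal test: stop decoding when the
# state lies within eps of a precomputed sample of the attractor.
rng = np.random.default_rng(2)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

def ifs_step(p):
    """One contraction of the illustrative IFS (chaos game step)."""
    return (p + vertices[rng.integers(3)]) / 2.0

# Sample the attractor by iterating past the initial transient.
p = np.zeros(2)
samples = []
for i in range(2000):
    p = ifs_step(p)
    if i >= 100:          # discard the transient
        samples.append(p)
attractor = np.array(samples)

def is_terminal(state, eps=0.02):
    """Terminal iff the state lies within eps of the sampled attractor."""
    return bool(np.min(np.linalg.norm(attractor - state, axis=1)) < eps)
```

In this picture the terminal test is no longer in conflict with the decoder's dynamics: the very set the iteration converges to is what defines termination, so the test is determined by the same weights that govern decoding.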