There are several logical conundrums in the original definition of RAAM
(Pollack, 1990) which have prevented it from scaling up to industrial-sized applications of neural networks to classical symbolic problems. Although RAAM answered the challenge of showing how neural networks could represent compositional structures in a systematic way, and has generated considerable philosophical discussion (e.g., van Gelder, 1990; Horgan and Tienson, 1991), to date it has been applied only to relatively small sets of trees (Chalmers, 1990; Chrisman, 1991). We believe that these limitations can now be effectively addressed.
The RAAM decoder works in conjunction with a logical "terminal test," which determines whether or not a given representation requires further decoding. The default terminal test used in the primary set of experiments merely asked whether every element of the code was above 0.8 or below 0.2. This "analog-to-binary" conversion was standard in back-propagation research of the late 1980s, but it leads to several basic logical problems that prevented RAAM from scaling up:
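As a minimal sketch, the default terminal test just described can be written as follows; the function name, interface, and the use of strict inequalities are illustrative assumptions, with only the 0.8/0.2 thresholds taken from the text:

```python
def is_terminal(code, lo=0.2, hi=0.8):
    """Default RAAM terminal test: a decoded vector counts as a terminal
    (a leaf symbol requiring no further decoding) only when every element
    is near-binary, i.e. above `hi` or below `lo`."""
    return all(x > hi or x < lo for x in code)

# A near-binary code passes the test; one intermediate element fails it.
print(is_terminal([0.95, 0.1, 0.88]))  # True
print(is_terminal([0.95, 0.5, 0.88]))  # False
```

The decoder would apply this test to each output vector, recursively decoding any vector that fails it.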