By their architecture, Tron agents are purely reactive entities: they maintain no state whatsoever, neither a history of the game nor access to the adversary's current position. All of their behaviors are thus emergent, in the sense that they appear only as a result of the changing environment.
The architecture of Tron was designed in 1997, when many researchers in the adaptive behavior community were proposing basic reactive agents whose complex behavior relied more on the complexities of the environment than on complex reasoning and planning. Consequently, Tron agents were deprived of any planning capability: they have no internal state, their sensory inputs are highly restricted, and all the complex behaviors we observe are the result of their situatedness. Tron agents behave in complex ways because they have been adapted to a complex environment over a long evolutionary process.
Placed in a fixed environment, a Tron agent would exhibit a constant behavior: always turning right, always turning left, or always going straight. This, of course, never happens: even in the absence of an opponent, a Tron agent is constantly moving, generating a trail that immediately becomes part of the environment. Such changes are perceived, and different actions begin to occur as a result of the re-evaluation of the agent's control expression.
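The stateless character of such a controller can be sketched as a pure function of the current sensor readings. The sensor set and control expression below are illustrative assumptions, not the evolved agents' actual inputs or programs:

```python
# Sketch of a stateless reactive Tron-style controller (illustrative only:
# the real agents' sensors and evolved control expressions differ).
def sense(arena, pos, heading):
    """Distances to the nearest obstacle ahead, to the left, and to the right.

    `arena` is the set of occupied cells (walls plus both players' trails);
    it is the only thing the controller ever sees.
    """
    def ray(d):
        dist = 0
        x, y = pos
        dx, dy = d
        while (x + dx, y + dy) not in arena:
            x, y = x + dx, y + dy
            dist += 1
        return dist
    left = (-heading[1], heading[0])    # 90 degrees counterclockwise
    right = (heading[1], -heading[0])   # 90 degrees clockwise
    return ray(heading), ray(left), ray(right)

def controller(ahead, left, right):
    """A fixed, memoryless control expression: steer toward open space."""
    if ahead >= max(left, right):
        return "straight"
    return "left" if left > right else "right"
```

Because `controller` has no memory, its output in an unchanging arena would be constant; it is only the agent's own trail, continually added to `arena`, that makes the sensed distances, and hence the actions, vary over time.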
Among the information sent back by the Tron Java applets, and stored in our server's database, is the full trace of each game. All turns are recorded, so we can re-enact and study the games that took place between humans and agents.
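Since a deterministic game is fully specified by its turns, such a trace can be replayed step by step. A minimal sketch follows; the trace format (a mapping from time step to turn) is an assumption for illustration, not the server's actual schema:

```python
# Replay one player's trail from a recorded list of turns.
# Assumed trace format: turns = {time_step: "left" | "right"}; the actual
# database schema of the Tron server may differ.
def replay(start, heading, turns, steps):
    """Return the player's trail as an ordered list of grid cells."""
    LEFT = {(1, 0): (0, 1), (0, 1): (-1, 0),
            (-1, 0): (0, -1), (0, -1): (1, 0)}
    RIGHT = {v: k for k, v in LEFT.items()}  # inverse rotation
    x, y = start
    trail = [start]
    for t in range(steps):
        if turns.get(t) == "left":
            heading = LEFT[heading]
        elif turns.get(t) == "right":
            heading = RIGHT[heading]
        x, y = x + heading[0], y + heading[1]
        trail.append((x, y))
    return trail
```

Replaying both players' traces in lockstep reconstructs the whole game, which is what makes the snapshots discussed next possible.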
From this record we have selected a few snapshots that showcase Tron agents performing different high-level behaviors during their games with humans. Every snapshot is labelled with the time at which the game finished, the id number of the human opponent, and the robot's id number. An asterisk marks the losing player (or both players in the case of a tie).
The trace of the robot player is drawn as a black line; that of the human, as a grey line.