
Differences Between Human and Agent Behaviors

In this section we analyze the differences between agent and human behavior, according to the 12 ``quantifiable'' behaviors described in the previous section. Figure 3.40 shows the results. The graphs in this figure repeat the curves for robot behavior frequency vs. performance (as in figure 3.39), adding the corresponding curves for the human case.

  
Figure 3.40: Agent and human behavior frequencies vs. strength: there are significant differences between agent (thin lines) and human (thick lines) behaviors, according to our 12 test cases. Horizontal axis: RS; vertical axis: average events per game step. Error bars indicate the standard error of the mean. The first and last bins are wider, to compensate for the sparsity of players at both ends of the performance scale.





Table 3.4 summarizes these results, comparing four categories: novice agents, advanced agents, novice humans and advanced humans.

 
Table 3.4: Correlations between humans, agents and behaviors. Each column represents a pair of categories. The ``='' symbol means that there is no large difference between the two categories on the respective behavior, whereas ``+'' means that the second group has an increased value with respect to the first (and ``-'' the opposite). The first and last columns compare novices with advanced players, amongst agents and humans respectively. Tight turns, for example, increase with level of play for both agents and humans (+), with a novice agent doing about as many of them as an advanced human (=). Asymmetry is negatively correlated with quality for robots (-) but uncorrelated for humans (=).
from             Novice     Novice     Novice     Advanced   Advanced   Novice
                 Agent      Agent      Agent      Agent      Agent      Human
to               Advanced   Novice     Advanced   Novice     Advanced   Advanced
                 Agent      Human      Human      Human      Human      Human

tight turns         +          -          =          -          -          +
spiral              -          -          -          -          =          =
staircase           =          -          -          -          -          =
zigzag              +          =          +          -          -          +
loop                +          =          -          -          -          =
diagonal            =          -          -          -          -          =
zigzag fill         =          =          +          -          =          +
turns               =          -          -          -          -          =
asymmetry           -          -          -          =          =          =
edge crossing       =          -          -          -          -          +
edge following      +          -          =          -          -          =
spiral in&out       +          =          -          -          -          =
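
The kind of comparison behind each symbol can be sketched in a few lines of Python. This is our own illustration, not the procedure actually used to build the table; in particular, the threshold and the sample numbers are made up:

import statistics

def compare(freqs_a, freqs_b, threshold=0.5):
    # Return '+', '-' or '=' depending on whether group B shows a clearly
    # higher, clearly lower, or similar event rate than group A.
    # freqs_a, freqs_b: per-player average events per game step.
    # The threshold (in pooled standard deviations) is hypothetical.
    mean_a = statistics.mean(freqs_a)
    mean_b = statistics.mean(freqs_b)
    pooled_sd = statistics.stdev(freqs_a + freqs_b)
    if pooled_sd == 0:
        return "="
    diff = (mean_b - mean_a) / pooled_sd
    return "+" if diff > threshold else "-" if diff < -threshold else "="

# Made-up tight-turn rates for novice vs. advanced agents:
print(compare([0.010, 0.012, 0.008, 0.011], [0.020, 0.024, 0.019, 0.022]))
# '+': the advanced group turns tightly more often, as in the first column.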


These are the differences for each individual behavior:

Tight turns
Agents develop the capacity for tight turns early on. Thanks to their error-free sensors, they can perform this type of maneuver very efficiently. The more advanced the agent, the more frequent the behavior becomes.

For humans, executing tight turns with precision requires training. As with agents, there is a strong correlation between the frequency of tight turns and performance. A top human, on average, performs tight turns about as often as a beginner agent.

Spiral
Although it does happen occasionally (see fig. 3.29), this is not a frequent behavior for people. The inward spiral amounts to creating a confined space and entering it, something that human biases warn us against. The outward spiral also seems pointless: it is a passive behavior that neither attacks nor runs away from the attacker.

The opposite is true for agents. Robots develop this strategy at the beginning of robot-robot coevolution scenarios, when most other strategies are random (hence suicidal). Sometimes a whole population may fall into a mediocre stable state [108] characterized by most agents doing spirals. The spiral is probably the simplest non-suicidal behavior in terms of GP code.

A search for the shortest robots ever produced by the novelty engine (table 3.5) reveals two minimal behaviors, each using just 5 tokens. One of them, R. 230007, does a classic tight spiral; the other, R. 90001, a looser spiral.

 
Table 3.5: The shortest agents produced by the novelty engine; the smallest use just 5 tokens each. Agents 230007, 230009 and 230010 do a tight spiral; 90001 and 90002, a wide spiral (fig. 3.41). 510003 does something different: it goes straight until it reaches an obstacle. 60001-60003 do a sort of ``Tit-for-tat'': they spiral while the other player is also spiraling, but break the pattern when the other player does.
Agent Id.   Len   Code
230007        5   (IFLTE 0.88889 _C _C (LEFT_TURN))
230009        5   (IFLTE 0.88889 _C _C (LEFT_TURN))
230010        5   (IFLTE 0.88889 _C _C (LEFT_TURN))
90001         5   (IFLTE _D _C (LEFT_TURN) _D)
90002         5   (IFLTE _D _C (LEFT_TURN) _D)
510003        7   (* _H (IFLTE _A 0.90476 _H (RIGHT_TURN)))
50008         9   (* _H (IFLTE _A 0.90476 _H (RIGHT_TURN)))
60001         9   (IFLTE _B _F (IFLTE _C _H (LEFT_TURN) 0.11111) _F)
60002         9   (IFLTE _B _F (IFLTE _C _H (LEFT_TURN) 0.11111) _F)
60003         9   (IFLTE _B _F (IFLTE _C _H (LEFT_TURN) 0.11111) 0.12698)
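
A search like this is easy to reproduce by counting tokens directly on the s-expressions. Below is a minimal Python sketch; the tokenization rule (parentheses do not count as tokens) is our assumption, chosen so that the counts match the Len column above:

def count_tokens(sexpr):
    # Tokens are the atoms of the s-expression; parentheses only group.
    return len(sexpr.replace("(", " ").replace(")", " ").split())

agents = {
    "230007": "(IFLTE 0.88889 _C _C (LEFT_TURN))",
    "90001": "(IFLTE _D _C (LEFT_TURN) _D)",
    "510003": "(* _H (IFLTE _A 0.90476 _H (RIGHT_TURN)))",
}
for name, code in sorted(agents.items(), key=lambda kv: count_tokens(kv[1])):
    print(name, count_tokens(code), code)   # 230007 and 90001 come out at 5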




The code for R. 230007 is:




(IFLTE 0.88889 _C _C (LEFT_TURN))

which translates as:





if LEFT >= 0.88889 then go straight else turn left
so this robot executes a left turn whenever there are no obstacles to its left. This minimal code results in a basic wall-following behavior that produces a tight spiral, as depicted in fig. 3.41 (top). While the robot is running along its own wall, built during the previous lap, the left sensor perceives the obstacle and the agent goes straight. But as soon as the corner is reached, the space suddenly opens to the left and the agent turns. (In the end, humans get out of mazes for the exact same reason.)
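
This wall-following spiral can be reproduced with a toy grid simulation. The sketch below is ours, not the thesis simulator: it assumes a one-cell left-proximity test (the sensor reads ``obstacle'' only when the cell immediately to the agent's left is occupied) and a trail left on every visited cell:

HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # east, north, west, south

def step(pos, h, trail):
    # One game step: turn left unless there is a wall immediately to the
    # left (the essence of (IFLTE 0.88889 _C _C (LEFT_TURN))), then advance.
    left = (h + 1) % 4
    dx, dy = HEADINGS[left]
    if (pos[0] + dx, pos[1] + dy) not in trail:   # space opens to the left
        h = left
    dx, dy = HEADINGS[h]
    pos = (pos[0] + dx, pos[1] + dy)
    trail.add(pos)
    return pos, h

pos, h = (0, 0), 0                    # start at the origin, heading east
trail = {pos}
for _ in range(60):
    pos, h = step(pos, h, trail)

# Render the trail: the '#' cells trace a tight square spiral.
xs = [x for x, y in trail]
ys = [y for x, y in trail]
for y in range(max(ys), min(ys) - 1, -1):
    print("".join("#" if (x, y) in trail else "." for x in range(min(xs), max(xs) + 1)))

Starting in open space, the simulated agent circles until its own trail lies to its left, then follows that wall lap after lap, turning only where it ends -- the corner-turning pattern described above.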

As evolution progresses, agents ``unlearn'' spiraling, finding better strategies. The behavior's frequency diminishes sharply for more advanced agents, approaching the human average rate: in the best robots, spiraling has been almost completely abandoned.

  
Figure 3.41: Simplest agents. Sample games of the simplest agents by code size (table 3.5). R. 230007 and R. 90001 are 5 tokens long. Agent 230007 does a tight spiral by means of simple wall following, oblivious to what its opponent is doing (top left). This agent can sometimes break the spiral when it finds an obstacle (top right), by ``following'' the obstacle's wall. The spiral of agent 90001 (bottom), created by comparing the left and rear-left sensors, is a Fibonacci spiral (the length of each segment equals the sum of the previous two).



Staircase
Together with its tight version, the diagonal, the staircase is a characteristic behavior that strongly differentiates human and robotic playing styles. Agents perform a diagonal for about 1% of their total game time on average, whereas the rate for humans is much lower, close to 0.05%.

A human's attention typically shifts between two modes: it either focuses on a narrow region around the present position, in order to perform precise maneuvers and turns, or spreads over a wider region, analyzing the different parts of the arena in an effort to plan the next move.

A move such as the staircase can be performed only in the narrow attention mode. When one switches to the second, ``big picture'' mode of attention, turns stop completely. So humans in general will not perform continuous turns for long periods of time.

Agents, on the other hand, lack attention characteristics altogether, so they can afford to turn constantly without confusing or delaying their sensor readings or analysis.

Zigzag/Zigzag fill
This behavior has similar frequency profiles for both species. Zigzagging is an important ability for the endgame, so its frequency increases with expertise in agents as well as in humans. The sample game shown in figure 3.36 illustrates how both species resort to zigzagging in similar situations.

The ``filling'' zigzag serves the purpose of making the most out of a confined space and accounts for about half of all zigzags, in humans and robots alike. The frequency of the filling zigzag, for humans as well as agents, is an order of magnitude larger for expert players than for novices.

Loop
Looping, together with spiraling and tight zigzagging, is a space-filling strategy (fig. 3.29, left). The correlation between looping and strength is unique, though: both humans and agents seem to increase looping with expertise, but only up to a certain point. In the end, the most expert players, humans and robots alike, have abandoned this behavior, its frequency falling back to beginners' levels.
Turns
Another behavior that strongly differentiates humans and agents: agents are much more ``nervous'', making turns far more frequently. Robots turn once every 33 steps on average, whereas humans do so only once every 80 steps. Again, we think this difference is related to human attention modes, as with the staircase above.
Asymmetry
Humans rarely display any strong preference for turning to either side, whereas such a preference is a typical characteristic of unsophisticated robots.

The reasons for asymmetric behavior in robots are similar to those explained for spiraling above: early in coevolutionary runs, a useful turn is discovered and exploited. The code spreads through the population and everybody starts performing the same type of turn. Later on, more advanced turning patterns are discovered that involve left as well as right turns.

In the end, the best agent strategies have perfectly balanced frequencies of left and right turns: levels of asymmetry are near-zero for advanced robots, and for humans of all levels.
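
As an illustration, asymmetry can be quantified as a normalized left/right difference; the formula below is our assumption, not necessarily the exact measure used in the thesis:

def asymmetry(left_turns, right_turns):
    # 0.0 = perfectly balanced turning, 1.0 = all turns to one side.
    total = left_turns + right_turns
    return abs(left_turns - right_turns) / total if total else 0.0

print(asymmetry(120, 118))   # ~0.008: a balanced, advanced-style player
print(asymmetry(95, 5))      # 0.9: the one-sided turning of an early robot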

Edge Crossing
Unsurprisingly, robots cross the edges of the screen more often than humans. Robots do not perceive edges in any direct manner, so they move across without a problem.

Agents go across edges once every 300 game steps (approximately), whereas the human frequency is closer to one crossing every 500 game steps (a random walk would go across an edge every 256 steps).
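
The random-walk baseline can be estimated by simulation. In the sketch below, the 256 x 256 wrap-around arena and the toy turning policy are our assumptions; note that the estimate depends on how often the walker turns, since long straight runs cross edges more often than diffusive wandering:

import random

SIZE = 256   # assumed wrap-around arena width

def steps_per_crossing(steps, turn_prob=0.05):
    # A toy 'random walk' bike: it moves one cell per step and turns left
    # or right with probability turn_prob; trails and collisions ignored.
    x = y = SIZE // 2
    dx, dy = 1, 0
    crossings = 0
    for _ in range(steps):
        if random.random() < turn_prob:
            dx, dy = (-dy, dx) if random.random() < 0.5 else (dy, -dx)
        x, y = x + dx, y + dy
        if not 0 <= x < SIZE:
            crossings += 1
            x %= SIZE
        if not 0 <= y < SIZE:
            crossings += 1
            y %= SIZE
    return steps / crossings

print(steps_per_crossing(1_000_000))   # average steps between edge crossings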

Edge Following
Another differentiating behavior: robots move close and parallel to the edges of the visible screen (at a distance of 10 or less, see fig. 3.35) more often than humans. Moreover, the percentage of game time they spend doing this increases with the expertise of the agent.
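
As a rough illustration, an ``edging'' detector consistent with this description might look as follows; the 10-cell band comes from the text, while the moving-parallel test and the arena size are our reading:

def is_edging(x, y, dx, dy, size=256, band=10):
    # True when the bike is within `band` cells of a screen edge and
    # moving parallel to that edge (hypothetical detector).
    near_vertical_edge = x < band or x >= size - band
    near_horizontal_edge = y < band or y >= size - band
    return (near_vertical_edge and dy != 0) or (near_horizontal_edge and dx != 0)

print(is_edging(3, 120, 0, 1))     # True: hugging the left edge, moving north
print(is_edging(128, 128, 1, 0))   # False: middle of the arena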

A random walk would move along the edges 7.8% of the time. This is about the frequency for novice robots, but expert ones `edge' about 12% of the time. Human rates stay between 2.5% and 5%, increasing slightly for experts.

Even though agents do not perceive edges -- and thus are incapable of representing ``edging'' explicitly -- the better ones do it more often than chance. Thus, albeit indirectly, agents seem to have found a way to exploit a human weakness.

For humans, being close to an edge is perceived as dangerous: something might come up unexpectedly from the other side, so humans stay away from edges more often than not.

Spiral In & Out
A behavior that occurs only amongst advanced robots. Difficult for humans because it requires very precise navigation, it was discovered by robots at some point and is now a resource strongly correlated with better performance.


Altogether, we have found that this set of behaviors provides interesting measures of robot and human evolution and learning. Some of them are typical of the ``robot'' species: more tight turns, more crossings of the screen's edges, and diagonals produced by quickly alternating turns.

Zigzag is unique in that it seems about equally important, and equally difficult, for agents and humans alike. Zigzagging is fundamental for split endgames, when both players are trying to save space, waiting for the other to make a mistake.

Some behaviors occur mostly at specific levels of expertise: Spiraling and asymmetry are typical of novice agents, whereas in-out spirals and edge following are characteristic behaviors of advanced agents. Among humans, tight turns and edge crossings are common tools of expert players.

None of these behaviors was more frequent in humans than in robots. Perhaps our choice of 12 sample behaviors was biased by our observations of how agents behave, rather than humans. But it is also interesting to reflect on the fact that human behavior is more complex and more variable, so it is difficult to find fixed patterns that occur very often. Several behaviors have much larger frequencies amongst agents than humans: staircase, edge following, and overall frequency of turns.

This last characteristic, the lower human frequency of turns, we conjecture is related to a fundamental difference in the way agents and humans approach the game. Agents are reactive: they read their sensors and act immediately. Humans switch between different attention modes: they exploit safe situations, where they can go straight for a while without interruptions, to look at the opponent's behavior, examine remote areas of the board, study the current topology of the game situation, and make plans for the future. Even though strategically it makes no difference, a human will rarely do a diagonal, quickly pressing the left and right keys, while his or her attention is analyzing remote areas of the screen. A person can perform a diagonal as efficiently as a robot, but at the cost of concentrating all attention on the narrowest area, maintaining precise coordination of turns and trajectory.


Pablo Funes
2001-05-08