
Learning

We wish to study how the performance of the different players and species in this experiment changed over time. Fig. 3.10 shows the sliding window method applied to one robot. It reveals how inexact, or ``noisy'', the RS estimates are when too few games are aggregated: it is apparent that 100 games or more are needed to obtain an accurate measure.

  
Figure 3.10: Performance of robot 460003 -- which was arbitrarily chosen as the zero of the strength scale -- observed along its nearly 1600 games, using increasingly larger window sizes.

\resizebox*{0.7\textwidth}{!}{\includegraphics{sab00/graph/ts2f03.eps}}
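The window-size tradeoff described above can be illustrated with a small sketch. The function below computes a plain sliding-window win rate over a chronological list of game outcomes; this is only a hypothetical proxy for the RS estimates (which come from a fitted relative-strength model), but it shows the same effect: small windows give widely scattered estimates for an agent whose true strength is constant, while windows of a few hundred games settle down.

```python
import random

def sliding_window_estimates(results, window):
    """Mean score over each sliding window of `window` consecutive games.

    `results` is a chronological list of outcomes for one player
    (1 = win, 0 = loss). Returns one estimate per window position.
    """
    return [sum(results[i:i + window]) / window
            for i in range(len(results) - window + 1)]

random.seed(0)
# Simulated fixed-strength robot: constant 60% win probability,
# roughly the 1600-game history of robot 460003.
games = [1 if random.random() < 0.6 else 0 for _ in range(1600)]

# Spread (max - min) of the estimates shrinks as the window grows,
# even though the underlying strength never changes.
for w in (10, 100, 400):
    est = sliding_window_estimates(games, w)
    print(f"window={w:4d}  spread of estimates={max(est) - min(est):.2f}")
```

With a window of 10 games the estimate swings wildly around the true 0.6; at 100 or more games the curve flattens, matching the observation that roughly 100 games are needed for an accurate measure.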


Since each individual agent embodies a single, unchanging strategy for the game of Tron, the model should estimate approximately the same strength value for the same agent at different points in history. This is indeed the case, as seen for example in figs. 3.10 (bottom) and 3.11a. The situation with humans is very different: people change their game, in most cases improving (fig. 3.11b).

  
Figure 3.11: (a) Robots' strengths, as expected, do not change much over time. Humans, on the other hand, are variable: usually they improve (b).

\resizebox*{0.7\textwidth}{!}{\includegraphics{sab00/graph/ts2f23.eps}}




 
Pablo Funes
2001-05-08