To verify the hypothesis that selection against humans is not irrelevant, we took a group of 10 players produced by the control experiment and introduced them manually into the main population, to have them tested against humans. We ran this generation (no. 250) longer than our usual generations, to obtain an accurate measurement.
The last column of the table shows how these robots rank among all other rated robots, as measured by their performance against humans (RS).
From the internal point of view of robot-robot coevolution alone, all these agents should be equal: each is number one within its own generation. If anything, those from later generations should be better. But this is not the case: performance against a fixed training set suggests that after 100 generations the population is wandering, without reaching higher absolute performance levels. The same wandering occurs with respect to the human performance space.
We conclude that a coevolving population of agents explores a subspace of strategies that is not identical to the subspace of human strategies, and consequently the coevolutionary fitness differs from the fitness vs. people. Without further testing vs. humans, self-play alone provides a weaker evolutionary measure.
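The distinction between relative (within-generation) fitness and absolute fitness against a human reference set can be sketched with a toy simulation. This is a hypothetical illustration, not the paper's actual system: each simulated agent is given two only partially correlated skills, one governing robot-robot games and one governing games against humans, so a generation's internal champion need not score highest against the human benchmark.

```python
import random

def make_agent(rng):
    # Hypothetical agent: robot-robot skill and vs-human skill are
    # only loosely correlated, modeling two non-identical strategy spaces.
    robot_skill = rng.random()
    human_skill = 0.5 * robot_skill + 0.5 * rng.random()
    return {"robot": robot_skill, "human": human_skill}

def generation_champion(agents):
    # Champion under coevolutionary (robot-robot) fitness alone:
    # "number one within its own generation".
    return max(agents, key=lambda a: a["robot"])

def vs_humans(agent, human_skills):
    # Absolute measure: fraction of a fixed human reference set
    # that the agent outplays.
    return sum(agent["human"] > h for h in human_skills) / len(human_skills)

rng = random.Random(0)
humans = [rng.random() for _ in range(100)]  # fixed human benchmark

# Each generation's champion is "number one" internally, yet the
# champions' human-space scores need not improve across generations.
scores = []
for gen in range(10):
    population = [make_agent(rng) for _ in range(30)]
    champion = generation_champion(population)
    scores.append(vs_humans(champion, humans))
```

Under these assumed dynamics, the sequence of champion scores against the fixed human set wanders rather than climbing monotonically, mirroring the observation that internal rank does not transfer to the human performance space.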