
Discovery in AI

Intelligence defined as symbolic reasoning was the foundation of the earliest work in AI: playing chess, solving logic problems, and following formal rules were thought to epitomize intelligent behavior. The capacity for reasoning was traditionally held to be what separates intelligent Man from unintelligent animal.

Doug Lenat claimed that his Automated Mathematician (AM) program [91], seeded with 115 elementary set-theory concepts, was able to ``discover'' the natural numbers and formulate interesting concepts of basic number theory such as addition, multiplication, and even prime numbers. Lenat thought that the fundamental mechanism of intelligence, namely heuristic search, had been found. The follow-up project, EURISKO, was to discover a domain's heuristics by itself, thus enabling discovery in any field. To discover mathematics, for example, one would run EURISKO on the initial axioms, and the program would eventually find both the subject's heuristics and its facts, each reinforcing the other. Lenat's hopes failed, though: EURISKO did not live up to expectations [90]. From this and other failures, AI shifted its focus over the ensuing decades towards programmed expertise rather than discovery.
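The agenda-driven heuristic search underlying AM can be sketched as follows. This is a minimal, hypothetical illustration, not Lenat's actual program: the function names, the single ``specialize'' heuristic, and the numeric interestingness scores are all assumptions made for the example. The core idea it shows is real, though: concepts wait on a priority agenda ordered by interestingness, and heuristics repeatedly expand the most interesting concept into new candidate concepts.

```python
# Illustrative sketch of AM-style agenda-based heuristic search.
# Names, heuristics, and scores are invented for this example.
import heapq

def discover(seed_concepts, heuristics, steps=10):
    """Repeatedly expand the most interesting concept with every heuristic."""
    # Python's heapq is a min-heap, so negate scores to pop the best first.
    agenda = [(-score, name) for name, score in seed_concepts.items()]
    heapq.heapify(agenda)
    found = dict(seed_concepts)
    for _ in range(steps):
        if not agenda:
            break
        neg_score, name = heapq.heappop(agenda)
        for heuristic in heuristics:
            for new_name, new_score in heuristic(name, -neg_score):
                if new_name not in found:
                    found[new_name] = new_score
                    heapq.heappush(agenda, (-new_score, new_name))
    return found

def specialize(name, score):
    """Toy heuristic: derive a more specific concept, half as interesting."""
    return [("special-" + name, score / 2)] if score > 1 else []

concepts = discover({"set": 8.0}, [specialize], steps=5)
```

Running the sketch from the single seed concept ``set'' derives a chain of ever-more-specialized concepts until their interestingness drops below threshold, after which the agenda empties: a caricature of how AM's agenda either flowers into new concepts or runs dry.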

By showing how evolutionary entities find and exploit emergent properties of reality, we return to Lenat's idea of AI as discovery. From the human perspective, the complex is that which is not obvious, entangled and difficult to grasp; emergent properties are the complex ones, those that require intelligence to be discovered and assimilated. With a growing understanding of what complexity and emergence are, and of how they arise, Artificial Life methods bring a new perspective on discovery as adaptation.

The goal of AI should be not to program situated agents but to make them adaptive. Manually coding a program with all the information an organism needs is infeasible, both because there is too much of it and because of the Red Queen problem: by the time we finished, the environment would have changed. The code for a complete real agent, a horse for example, may well be impossible to write; even if we knew how, a team of programmers could not debug so many interdependent lines of code [111]. But an adaptive agent, as natural evolution shows, can acquire and maintain that information on its own.


Pablo Funes
2001-05-08