We are interested in the technical issues that arise in building multiagent systems whose performance can improve through learning. We call our solution to this problem collective memory. Collective Memory (CM) is the resource agents gain through experience and use to improve their performance when interacting to solve collaborative problems. CM can be stored in a centralized memory, in the distributed memories of the individual agents, or in a hybrid such as institutional memory. This work addresses a purely distributed implementation.
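The purely distributed design can be illustrated with a minimal sketch. The paper does not specify an implementation, so the class and method names below are hypothetical; the point is only that each agent holds its own slice of the memory, and the "collective" memory exists only as the union of those local stores.

```python
# Hypothetical sketch of a purely distributed collective memory.
# All names here are illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Each agent keeps its own local experience store; there is no
    central repository and no institutional (hybrid) memory."""
    name: str
    memory: list = field(default_factory=list)

    def record(self, episode):
        # Experience from a collaborative episode is stored locally.
        self.memory.append(episode)

# The "collective memory" is simply the union of the agents' local stores.
agents = [Agent("a1"), Agent("a2")]
agents[0].record("solved-task-with-a2")
collective = [e for a in agents for e in a.memory]
```

A centralized variant would replace the per-agent `memory` lists with one shared store; the distributed version trades easy global access for robustness and locality.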
It has been argued [Cole & Engeström 1993] that CM plays a central role in the development of distributed cooperative behavior. CM can serve as a vehicle for disseminating community procedural knowledge. When a novice agent collaborates with more experienced agents, the requests and responses of the experienced agents guide the novice into more efficient patterns of activity. The experienced agents can benefit as well, since the novice may, in her ignorance, force some innovation to occur by refusing to follow the old way.
We are currently investigating two aspects of learning that CM facilitates:
The penultimate section discusses how the characteristics of our domain of interest influenced our design of CM. In short, CM must enable learning despite the following constraints of the task environment:
Since communication is considered an action, it is recorded in the execution trace, and agents can therefore learn to reduce the amount of communication in the same manner as they learn to reduce their use of other operators. Our thesis is that the mechanisms of collective memory lead to efficient long-term behavior even if short-term behavior is suboptimal.
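The idea that communication is learned away like any other operator can be sketched as follows. This is an illustrative toy, not the paper's mechanism: `operator_costs` and `prefer_cheaper` are hypothetical helpers, and the traces are invented. Because communication appears in the trace as an ordinary operator, an agent that prefers shorter traces for the same goal will automatically prune unnecessary communications along with other superfluous actions.

```python
# Illustrative sketch (not from the paper): communication is just another
# operator in the execution trace, so reducing trace length reduces
# communication along with every other operator.
from collections import Counter

def operator_costs(trace):
    """Count how often each operator, including 'communicate', occurs."""
    return Counter(op for op, _ in trace)

def prefer_cheaper(traces):
    """Among traces that achieved the same goal, keep the shortest one."""
    return min(traces, key=len)

# A novice's trace: extra communication steps to get guidance.
trace_novice = [("communicate", "ask-plan"), ("move", "A"),
                ("communicate", "confirm"), ("move", "B")]
# An experienced trace for the same goal: no communication needed.
trace_expert = [("move", "A"), ("move", "B")]

best = prefer_cheaper([trace_novice, trace_expert])
```

This also matches the thesis in the text: the novice's early traces are suboptimal (extra communication), but retaining and preferring cheaper traces drives long-term behavior toward efficiency.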