Stephanie Forrest, Alan Perelson, Robert Smith

Paper #: 92-06-027

In typical applications, “genetic algorithms” (GAs) process populations of potential problem solutions to evolve a single population member that specifies an “optimized” solution. The majority of GA analysis has focused on these optimization applications. In other applications (notably “learning classifier systems” and certain connectionist learning systems), a GA searches for a population of “cooperative” structures that jointly perform a computational task. This paper presents an analysis of this type of GA problem. The analysis considers a simplified genetics-based machine learning system: a model of an immune system. In this model, a GA must discover a set of pattern-matching “antibodies” that effectively match a set of “antigen” patterns. Analysis shows how a GA can automatically evolve and sustain a diverse, cooperative population. The cooperation emerges as a natural part of the antigen-antibody matching procedure. This emergent effect is shown to be similar to “fitness sharing,” an explicit technique for multimodal GA optimization. Further analysis shows how the GA population can adapt to express various degrees of “generalization.” The results show how GAs can automatically and simultaneously discover effective groups of cooperative computational structures.
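The scoring mechanism described in the abstract can be illustrated with a minimal sketch. The bit-string representation, the Hamming-style match score, and the specific constants below (BIT_LENGTH, SAMPLE_SIZE, the number of rounds) are illustrative assumptions rather than the paper's exact parameters; the point is the sampled, winner-takes-the-credit scoring that gives rise to the emergent fitness-sharing effect.

```python
import random

BIT_LENGTH = 32    # length of antibody/antigen bit strings (assumed value)
POP_SIZE = 50      # number of antibodies in the GA population (assumed value)
SAMPLE_SIZE = 10   # antibodies drawn per antigen presentation (assumed value)

def random_bitstring(n):
    return [random.randint(0, 1) for _ in range(n)]

def match_score(antibody, antigen):
    """Hamming-style similarity: count positions where the bits agree.
    (A stand-in for the paper's bit-string matching rule.)"""
    return sum(a == g for a, g in zip(antibody, antigen))

# Hypothetical antigen set and initial antibody population.
antigens = [random_bitstring(BIT_LENGTH) for _ in range(4)]
antibodies = [random_bitstring(BIT_LENGTH) for _ in range(POP_SIZE)]
fitness = [0.0] * POP_SIZE

# Fitness evaluation: each round, one antigen is presented to a random sample
# of antibodies, and only the best matcher in the sample receives credit.
# Because an antibody earns fitness only by out-matching its sampled rivals
# on "its" antigens, the population is pushed toward jointly covering all
# antigens rather than converging on a single specialist -- the emergent
# effect the abstract likens to explicit fitness sharing.
for _ in range(2000):
    antigen = random.choice(antigens)
    sample = random.sample(range(POP_SIZE), SAMPLE_SIZE)
    winner = max(sample, key=lambda i: match_score(antibodies[i], antigen))
    fitness[winner] += match_score(antibodies[winner], antigen)
```

In a full run, these accumulated fitness values would feed the usual GA selection, crossover, and mutation steps; the sketch stops at the evaluation stage because that is where the cooperative, niche-preserving pressure originates.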