Stephanie Forrest, Melanie Mitchell

Paper #: 91-10-041

What makes a problem easy or hard for a genetic algorithm (GA)? Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a “Walsh polynomial.” The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this paper we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that partitioning a single large population into a number of smaller independent populations seemed to improve performance, and that hillclimbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing “schema processing” in GAs, and give informal descriptions of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a number of fundamental questions about GAs: in particular, what are the features of problems that determine the likelihood of successful GA performance, and what should “successful GA performance” mean?