Evolution strategies for the genetic algorithm


Evolution strategies

Evolution strategies are inspired by the theory of evolution by natural selection. More precisely, the technique is inspired by the process of evolution at the macro level, that of the species (phenotype, heredity, variation), and is not concerned with the genetic mechanisms of evolution (genome, chromosomes, genes, alleles).

The objective of the evolution strategies algorithm is to maximize the fitness of a collection of candidate solutions with respect to an objective function over a given domain. Classically, this goal is pursued through dynamic variation, a surrogate for descent with modification, where the amount of variation is adapted dynamically using performance-based heuristics. Contemporary approaches co-adapt the parameters controlling the amount and bias of variation together with the candidate solutions themselves.
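As an illustration only, the Python sketch below shows one common form of such co-adaptation: log-normal self-adaptation of per-dimension step sizes followed by Gaussian perturbation of the solution vector. The function name, the learning rate tau, and the Gaussian perturbations are assumptions made for the example, not details taken from the text above.

    import math
    import random

    def self_adaptive_mutation(x, sigma, tau=None):
        # Mutate the strategy parameters first (log-normal update), then use
        # the new step sizes to perturb the object variables.
        tau = tau if tau is not None else 1.0 / math.sqrt(len(x))
        child_sigma = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigma]
        child_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, child_sigma)]
        return child_x, child_sigma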

Instances of evolution strategies algorithms can be described concisely with a custom terminology of the form (μ, λ), where μ is the number of candidate solutions in the parent generation and λ is the number of candidate solutions generated from the parent generation. In this configuration, the best μ of the λ offspring are kept, where λ > μ ≥ 1. In addition to this so-called comma-selection evolution strategies algorithm, a plus-selection variation can be defined, written (μ + λ), where the best members of the union of the μ parents and the λ offspring compete on fitness for a position in the next generation.
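A minimal sketch of the two selection schemes in Python, assuming minimization and a fitness callable supplied by the caller (the function names are illustrative):

    def comma_selection(offspring, mu, fitness):
        # (mu, lambda) selection: only the lambda offspring compete; the best
        # mu of them form the next parent generation.
        return sorted(offspring, key=fitness)[:mu]

    def plus_selection(parents, offspring, mu, fitness):
        # (mu + lambda) selection: parents and offspring compete together for
        # the mu positions in the next generation.
        return sorted(parents + offspring, key=fitness)[:mu]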

Another version of the notation includes a ρ, as in (μ/ρ, λ), which specifies the number of parents that contribute to each new candidate solution through a recombination operator. A classic rule used to govern the amount of mutation (the standard deviation used in mutation for continuous function optimization) is the 1/5 success rule, where the ratio of successful mutations to all mutations should be about 1/5. If the ratio is greater, the variance is increased; if it is less, the variance is decreased.
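The rule can be sketched as follows; the adjustment factor of 0.85 is a conventional choice for the example, not something mandated by the rule itself:

    def one_fifth_rule(sigma, success_ratio, factor=0.85):
        # If more than 1/5 of recent mutations improved on the parent, widen
        # the search; if fewer, narrow it; otherwise leave sigma unchanged.
        if success_ratio > 1.0 / 5.0:
            return sigma / factor
        if success_ratio < 1.0 / 5.0:
            return sigma * factor
        return sigma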

The comma-selection variation of the algorithm can be well suited to dynamic problem instances, given its ability to keep exploring the search space, while the plus-selection variation can be better suited to refinement and convergence.
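For context, a minimal, self-contained (μ, λ)-ES is sketched below in Python. The sphere objective, the population sizes, and the self-adaptation scheme are all assumptions made for the example rather than a reference implementation.

    import math
    import random

    def sphere(x):
        # Assumed test objective: minimize the sum of squares.
        return sum(xi * xi for xi in x)

    def comma_es(objective, n=5, mu=5, lam=30, generations=200, bounds=(-5.0, 5.0)):
        tau = 1.0 / math.sqrt(n)
        # Each individual carries its solution vector and its own step sizes.
        population = []
        for _ in range(mu):
            x = [random.uniform(bounds[0], bounds[1]) for _ in range(n)]
            population.append((x, [(bounds[1] - bounds[0]) * 0.1] * n))
        for _ in range(generations):
            offspring = []
            for _ in range(lam):
                x, sigma = random.choice(population)  # pick a parent at random
                child_sigma = [s * math.exp(tau * random.gauss(0.0, 1.0)) for s in sigma]
                child_x = [xi + si * random.gauss(0.0, 1.0) for xi, si in zip(x, child_sigma)]
                offspring.append((child_x, child_sigma))
            # Comma selection: the next parents come from the offspring only.
            population = sorted(offspring, key=lambda ind: objective(ind[0]))[:mu]
        best = min(population, key=lambda ind: objective(ind[0]))
        return best[0], objective(best[0])

    if __name__ == "__main__":
        solution, cost = comma_es(sphere)
        print(cost)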
