Cross-Entropy Method

The Cross-Entropy Method was developed as an efficient estimation technique for rare-event probabilities in discrete-event simulation systems and was later adapted for use in optimization. The name of the technique comes from the Kullback-Leibler cross-entropy, a measure of the amount of information (in bits) needed to identify an event from a set of probabilities.

The information processing strategy of the algorithm is to sample the problem space and approximate the distribution of good solutions. This is achieved by assuming a distribution over the problem space (such as a Gaussian), sampling the problem domain by generating candidate solutions from that distribution, and updating the distribution based on the better candidate solutions discovered. Samples are constructed stepwise (one component at a time) based on the summarized distribution of good solutions. As the algorithm progresses, the distribution becomes more refined until it concentrates on the region of the domain containing the optimal solutions.
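As a concrete illustration of the update step, the following sketch fits a per-dimension Gaussian to the best-scoring samples; the 20% elite fraction is an assumption for illustration, not a fixed part of the method:

```python
import numpy as np

def update_distribution(samples, costs, elite_frac=0.2):
    # Keep the lowest-cost candidates (minimization) as the elite set.
    n_elite = max(1, int(len(samples) * elite_frac))
    elite = samples[np.argsort(costs)[:n_elite]]
    # Summarize the elite set as a new Gaussian (per-dimension mean and std).
    return elite.mean(axis=0), elite.std(axis=0)
```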

The following listing provides a sketch of the Cross-Entropy Method for minimizing a cost function.
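This is a minimal Python sketch rather than canonical pseudocode; the Gaussian sampling distribution matches the description above, while the sphere cost function, the [-5, 5] initialization bounds, and the parameter defaults are illustrative assumptions:

```python
import numpy as np

def cost(x):
    # Illustrative cost function (sphere); any function to minimize works.
    return float(np.sum(x ** 2))

def cross_entropy_method(cost, n_dims=3, n_samples=50, n_elite=10,
                         n_iters=100, alpha=0.7):
    # Initial Gaussian over the search space (assumed bounds of [-5, 5]).
    mean = np.random.uniform(-5.0, 5.0, n_dims)
    std = np.full(n_dims, 5.0)
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        # Sample candidate solutions from the current distribution.
        samples = np.random.normal(mean, std, (n_samples, n_dims))
        costs = np.array([cost(s) for s in samples])
        # Select the elite (lowest-cost) candidates.
        elite = samples[np.argsort(costs)[:n_elite]]
        # Blend the elite summary into the distribution (learning rate alpha).
        mean = alpha * elite.mean(axis=0) + (1 - alpha) * mean
        std = alpha * elite.std(axis=0) + (1 - alpha) * std
        # Track the best solution seen so far.
        i = int(np.argmin(costs))
        if costs[i] < best_cost:
            best, best_cost = samples[i], float(costs[i])
    return best, best_cost

best, best_cost = cross_entropy_method(cost)
print(best, best_cost)
```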

The Cross-Entropy Method was adapted for combinatorial optimization problems, although it has also been applied to continuous function optimization and noisy simulation problems.

An alpha (α) parameter, or learning rate, in [0, 1] is typically set high, such as 0.7. A smoothing function can be used to further control the updates to the summaries of the distribution(s) of samples from the problem space. For example, in continuous function optimization a beta (β) parameter may replace α for updating the standard deviation, calculated at time t as β_t = β − β(1 − 1/t)^q, where β is initially set high in [0.8, 0.99] and q is a small integer in [5, 10].
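The following is a sketch of that smoothing schedule; the function name and the example values of β and q are assumptions for illustration, chosen from the ranges above:

```python
def beta_t(t, beta=0.9, q=7):
    # Dynamic smoothing at iteration t: beta_t = beta - beta * (1 - 1/t)^q,
    # with beta in [0.8, 0.99] and q in [5, 10] per the heuristics above.
    return beta - beta * (1.0 - 1.0 / t) ** q

# The smoothing weight shrinks as t grows, so updates to the standard
# deviation become increasingly conservative over time.
for t in (1, 5, 20, 100):
    print(t, round(beta_t(t), 4))
```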