Stochastic Hill Climbing algorithm

The strategy of the Stochastic Hill Climbing algorithm is to iterate the process of randomly selecting a neighbor of a candidate solution and accepting it only if it results in an improvement. The strategy was proposed to address the limitations of deterministic hill climbing techniques, which are likely to get stuck in local optima due to their greedy acceptance of neighboring moves.
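
As a concrete illustration, the following is a minimal Python sketch of this strategy on a bitstring problem; the OneMax objective, the problem size, and all function names are illustrative assumptions rather than part of the original description.

import random

def onemax(bits):
    """Illustrative objective to maximize: the number of 1-bits."""
    return sum(bits)

def random_neighbor(bits):
    """Randomly select a neighbor by flipping one randomly chosen bit."""
    neighbor = list(bits)
    i = random.randrange(len(neighbor))
    neighbor[i] = 1 - neighbor[i]
    return neighbor

def stochastic_hill_climbing(n_bits=32, max_iterations=1000):
    """Iterate: pick a random neighbor, accept it only on improvement."""
    candidate = [random.randint(0, 1) for _ in range(n_bits)]
    cost = onemax(candidate)
    for _ in range(max_iterations):
        neighbor = random_neighbor(candidate)
        neighbor_cost = onemax(neighbor)
        if neighbor_cost > cost:  # greedy acceptance: improvements only
            candidate, cost = neighbor, neighbor_cost
    return candidate, cost

best, best_cost = stochastic_hill_climbing()
print("best cost:", best_cost)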

The Stochastic Hill Climbing algorithm was designed for use in discrete domains with explicit neighbors, such as combinatorial optimization (as opposed to continuous function optimization). The algorithm's strategy can be applied to continuous domains by using a step size to define candidate neighbors of the current solution (such as Localized Random Search and Fixed Step-Size Random Search).
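
To illustrate the continuous variant, the sketch below applies the same accept-on-improvement loop with a step size defining the neighbors; the sphere objective, the minimization convention, and the parameter values are illustrative assumptions.

import random

def sphere(x):
    """Illustrative continuous objective, minimized: sum of squares."""
    return sum(v * v for v in x)

def step_neighbor(x, step_size):
    """Define a candidate neighbor by a bounded random step per dimension."""
    return [v + random.uniform(-step_size, step_size) for v in x]

def stochastic_hill_climbing_continuous(n_dims=2, step_size=0.1,
                                        max_iterations=1000):
    """Same strategy as the discrete sketch, on a continuous domain."""
    candidate = [random.uniform(-5.0, 5.0) for _ in range(n_dims)]
    cost = sphere(candidate)
    for _ in range(max_iterations):
        neighbor = step_neighbor(candidate, step_size)
        neighbor_cost = sphere(neighbor)
        if neighbor_cost < cost:  # minimizing, so lower cost is better
            candidate, cost = neighbor, neighbor_cost
    return candidate, cost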

Stochastic Hill Climbing is a local search technique and can be used to refine a result after the execution of a global search algorithm. Even though the technique uses a stochastic process, it can still get stuck in local optima. Neighbors with equal or better cost should be accepted, allowing the technique to navigate across plateaus in the response surface.
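
In the discrete sketch above, this corresponds to relaxing the acceptance test from strictly better to equal-or-better; the fragment below is a drop-in modification of that earlier sketch, not a standalone program.

# Modified acceptance test for the earlier sketch: equal-or-better
# neighbors are accepted, letting the search drift across plateaus
# (connected regions of equal cost) instead of stalling on them.
if neighbor_cost >= cost:
    candidate, cost = neighbor, neighbor_cost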

The algorithm can be restarted and repeated a number of times after it converges to provide an improved result (called Multiple Restart Hill Climbing). The procedure can also be applied to multiple candidate solutions concurrently, allowing several searches to run at the same time (called Parallel Hill Climbing).
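
Building on the stochastic_hill_climbing sketch above, a minimal Multiple Restart wrapper might look like the following; the function name, restart count, and maximization convention follow the earlier sketch and are likewise illustrative.

def multiple_restart_hill_climbing(n_restarts=10, **kwargs):
    """Re-run the climb from fresh random starting points, keep the best."""
    best, best_cost = None, float("-inf")
    for _ in range(n_restarts):
        candidate, cost = stochastic_hill_climbing(**kwargs)
        if cost > best_cost:
            best, best_cost = candidate, cost
    return best, best_cost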
