Stochastic hill climbing algorithm

The strategy of the stochastic hill climbing algorithm is to iterate the process of randomly selecting a neighbor of a candidate solution and accepting it only if it results in an improvement. The strategy was proposed to address a limitation of deterministic hill climbing techniques, which are liable to get stuck in local optima due to their greedy acceptance of neighboring moves.
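As a rough illustration of this loop, here is a minimal Python sketch on a bitstring objective. The OneMax objective, the one-bit-flip neighborhood, and all parameter values are hypothetical choices for the example, not part of the original description.

```python
import random

def one_max(bits):
    # Illustrative objective: count the 1-bits (maximized by the all-ones string).
    return sum(bits)

def random_neighbor(bits):
    # A neighbor differs from the candidate in exactly one randomly chosen bit.
    neighbor = list(bits)
    i = random.randrange(len(neighbor))
    neighbor[i] = 1 - neighbor[i]
    return neighbor

def stochastic_hill_climbing(n_bits=32, max_iterations=1000):
    # Start from a random candidate and repeatedly try a random neighbor,
    # keeping it only when it strictly improves the objective.
    candidate = [random.randint(0, 1) for _ in range(n_bits)]
    cost = one_max(candidate)
    for _ in range(max_iterations):
        neighbor = random_neighbor(candidate)
        neighbor_cost = one_max(neighbor)
        if neighbor_cost > cost:  # accept only strict improvements here
            candidate, cost = neighbor, neighbor_cost
    return candidate, cost

if __name__ == "__main__":
    best, best_cost = stochastic_hill_climbing()
    print(best_cost, best)
```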

The stochastic hill climbing algorithm was designed for discrete domains with explicit neighborhoods, such as combinatorial optimization (as opposed to continuous function optimization). The strategy can be applied to continuous domains by using a step size to define a candidate solution's neighbors (as in localized random search and fixed step-size random search).
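A sketch of the continuous variant might look as follows, assuming a fixed step size and a simple sphere function to minimize; the objective, the uniform sampling box, and the parameter values are all illustrative assumptions.

```python
import random

def sphere(x):
    # Illustrative continuous objective to minimize (global minimum at the origin).
    return sum(xi ** 2 for xi in x)

def step_neighbor(x, step_size):
    # Sample a neighbor inside a box of half-width `step_size` around x.
    return [xi + random.uniform(-step_size, step_size) for xi in x]

def continuous_hill_climbing(n_dims=3, step_size=0.1, max_iterations=2000):
    candidate = [random.uniform(-5.0, 5.0) for _ in range(n_dims)]
    cost = sphere(candidate)
    for _ in range(max_iterations):
        neighbor = step_neighbor(candidate, step_size)
        neighbor_cost = sphere(neighbor)
        if neighbor_cost < cost:  # minimization: accept strict improvements
            candidate, cost = neighbor, neighbor_cost
    return candidate, cost
```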

The stochastic hill climbing algorithm is a local search technique (as opposed to a global search) and can be used to refine a result after running a global search algorithm. Even though the technique uses a stochastic process, it can still get stuck in local optima. Neighbors with equal or better cost should be accepted, allowing the technique to navigate across plateaus in the response surface.
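The plateau behavior comes down to the acceptance test alone. One way to express the choice, assuming minimization, is a small helper like this (the function name and flag are hypothetical):

```python
def accept(neighbor_cost, current_cost, allow_plateau=True):
    # With allow_plateau, equal-cost neighbors are accepted, so the search can
    # drift across flat regions of the cost surface instead of stalling on the
    # first plateau it reaches. Without it, only strict improvements pass.
    if allow_plateau:
        return neighbor_cost <= current_cost
    return neighbor_cost < current_cost
```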

The algorithm can be restarted and repeated a number of times after it converges to provide an improved result (called Multiple Restart Hill Climbing). The procedure can also be applied to several candidate solutions concurrently, allowing several searches to run at the same time (called Parallel Hill Climbing). A restart sketch follows below.
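A minimal sketch of the restart idea, assuming minimization and a zero-argument `search` callable such as the `continuous_hill_climbing` sketch above; the names are illustrative, and a parallel variant could dispatch the restarts to separate workers instead of a loop.

```python
def multiple_restart(search, n_restarts=10):
    # Run an independent climb from a fresh random start each time and keep
    # the best result found across all restarts.
    best, best_cost = None, float("inf")
    for _ in range(n_restarts):
        candidate, cost = search()
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best, best_cost

# Example usage with the continuous sketch above:
# best, best_cost = multiple_restart(continuous_hill_climbing, n_restarts=10)
```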
