Methods for evaluating heuristics
The problem of evaluating heuristics is crucial: since heuristics offer no guarantee of optimality, they may find the optimum on some data and land very far from it on other data.
For a minimization problem, the relative performance of a heuristic H on data d is the ratio RH(d) = H(d) / OPT(d), where H(d) is the value found by H and OPT(d) the global optimum; hence RH(d) ≥ 1. We can also speak of the distance to the optimum as a percentage, with the formula 100 × (RH(d) − 1). Relative performance is an evaluation of heuristics that can be bounded a priori (before any tests) or measured a posteriori (after the tests).
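The two quantities above are straightforward to compute when the optimum is known; a minimal sketch (the numeric values are illustrative only):

```python
def relative_performance(h_value, opt_value):
    """RH(d) = H(d) / OPT(d); always >= 1 for a minimization problem."""
    return h_value / opt_value

def distance_to_optimum_percent(h_value, opt_value):
    """Distance to the optimum as a percentage: 100 * (RH(d) - 1)."""
    return 100.0 * (relative_performance(h_value, opt_value) - 1.0)

# Example: the heuristic returns 150 on data whose optimum is 100.
print(relative_performance(150, 100))          # 1.5
print(distance_to_optimum_percent(150, 100))   # 50.0
```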
A priori assessment
This is a performance guarantee obtained by theoretical analysis of H. For this, we bound RH(d) for every possible data d, then construct a data set showing that this worst case can actually be reached. Such a result is very difficult to obtain, and is often impossible to establish, in particular for heuristics whose choices have no exploitable mathematical structure.
Take the example of a minimization heuristic with RH = 1.5. This means that whatever the input data, the heuristic will never give a result more than 50% above the global optimum. "Never", since this has been proven mathematically.
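A classic heuristic with exactly this guarantee is Graham's list scheduling on m identical machines, whose proven a priori bound is RH = 2 − 1/m, i.e. 1.5 for m = 2. The sketch below checks the ratio on a small instance by brute force, and the instance [1, 1, 2] actually reaches the worst case, illustrating both steps of the a priori analysis:

```python
from itertools import product

def list_scheduling(jobs):
    """Greedy heuristic: assign each job to the least-loaded of 2 machines."""
    loads = [0, 0]
    for p in jobs:
        loads[loads.index(min(loads))] += p
    return max(loads)  # makespan found by the heuristic

def optimal_makespan(jobs):
    """Brute force over every assignment of jobs to the 2 machines."""
    best = sum(jobs)
    for assign in product([0, 1], repeat=len(jobs)):
        loads = [0, 0]
        for machine, p in zip(assign, jobs):
            loads[machine] += p
        best = min(best, max(loads))
    return best

# Worst-case instance: heuristic gives 3, optimum is 2 -> ratio 1.5.
jobs = [1, 1, 2]
ratio = list_scheduling(jobs) / optimal_makespan(jobs)
print(ratio)  # 1.5, exactly the theoretical bound
```

Such a brute-force check on small instances illustrates the bound but does not prove it; only the mathematical analysis guarantees it for all data.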
A posteriori evaluation: lower bounds
When evaluating a heuristic H before execution is impossible, it is still possible to evaluate its results after execution. We have seen the relative performance, but this method only applies if the global optimum is known. When it is not, we can replace OPT(d) by a lower bound LB(d) ≤ OPT(d): since H(d) / LB(d) ≥ RH(d), this ratio gives a pessimistic but valid a posteriori guarantee.
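A minimal sketch of this idea on bin packing (the instance values are illustrative): no packing can use fewer than ceil(total size / capacity) bins, so that quantity is a valid lower bound on the unknown optimum.

```python
import math

def first_fit(sizes, capacity):
    """First-fit heuristic: put each item in the first bin where it fits."""
    bins = []  # remaining free space of each open bin
    for s in sizes:
        for i, free in enumerate(bins):
            if free >= s:
                bins[i] -= s
                break
        else:
            bins.append(capacity - s)  # open a new bin
    return len(bins)

def lower_bound(sizes, capacity):
    """ceil(sum of sizes / capacity) <= OPT(d)."""
    return math.ceil(sum(sizes) / capacity)

sizes, capacity = [4, 8, 1, 4, 2, 1], 10
h = first_fit(sizes, capacity)
lb = lower_bound(sizes, capacity)
# Since lb <= OPT(d), h / lb overestimates RH(d): the heuristic's
# result is at most (h / lb) times the true optimum.
print(h, lb, h / lb)  # 2 2 1.0 -> the heuristic is proven optimal here
```

When the ratio equals 1, as here, the lower bound even proves a posteriori that the heuristic found the optimum.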
A priori and a posteriori evaluations of heuristics still pose a problem when the instances are very large. The simplest heuristic evaluation then consists in comparing its solutions with those given by benchmarks or by widely used methods. The value of the solutions, over a large set of tests, compared to known methods, gives a good approximation of the heuristic's efficiency.
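This benchmark comparison reduces to computing, over the test set, the gap between the heuristic's values and the best known values; a minimal sketch, where the instance names and all numbers are made up for illustration:

```python
# Best known values from a benchmark, and our heuristic's results
# on the same instances (minimization: smaller is better).
benchmark = {"inst1": 100, "inst2": 250, "inst3": 80}
heuristic = {"inst1": 103, "inst2": 250, "inst3": 86}

# Percentage gap to the best known value, per instance, then averaged.
gaps = [100.0 * (heuristic[k] - benchmark[k]) / benchmark[k] for k in benchmark]
avg_gap = sum(gaps) / len(gaps)
print(f"average gap to best known: {avg_gap:.2f}%")
```

A small average gap over many instances is not a guarantee, but it is the practical measure of efficiency most often reported.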