
## Post-optimal sensitivity analysis

When the optimal basic solution of an LP problem is analyzed to answer questions about changes in the problem's formulation, the study is called post-optimal sensitivity analysis.

We call post-optimization the set of techniques that make it possible to recover the optimum of an LP problem after some of its data have been modified.

We consider the general linear programming problem in its standard form: maximize z = c_{1} * x_{1} + … + c_{n} * x_{n} subject to the constraints A * x ≤ b and x ≥ 0.

This study can be motivated by several reasons:

- the data of the problem are not known exactly, in which case it is important to determine to what extent this affects the proposed solution;
- we want to assess the consequences of a policy change that would modify the facts of the problem.

## Marginal costs

If the problem consists of transforming goods in order to sell the production at the best profit, then the maximum increase in income that results from having one additional unit of one of the goods is the marginal value of that good. The term marginal cost is also frequently used in this case.

**These are the coefficients of the slack variables in the Z row of the optimal tableau.**

If a slack variable is non-zero in the optimal solution, the corresponding good is already in surplus. Having an additional unit of this good therefore has no influence on the income: the slack variable has zero marginal value.

On the other hand, if a slack variable is zero in the optimal solution, the corresponding good is completely used up. A variation in its availability will therefore generally influence the income. This is why such a slack variable, zero in the optimal solution, has a non-zero marginal value, and this marginal value gives the variation of the economic function resulting from the use of one additional unit of the associated good.

Consider the example max z = 3 * x_{1} + 5 * x_{2} subject to x_{1} ≤ 4, 2 * x_{2} ≤ 12 and 3 * x_{1} + 2 * x_{2} ≤ 18 (x_{1}, x_{2} ≥ 0), with the solution vector x* = (2, 6). Note that here the row shows the value of Z and not of -Z (hence the positive values).
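As a concrete check, the example can be solved numerically. The data below (objective 3 * x_{1} + 5 * x_{2}, constraints x_{1} ≤ 4, 2 * x_{2} ≤ 12, 3 * x_{1} + 2 * x_{2} ≤ 18) are an assumption reconstructed from the constraints quoted later in this section; the brute-force vertex enumeration is only an illustrative sketch, not a real simplex implementation:

```python
from itertools import combinations

# Assumed example data, reconstructed from the constraints quoted in
# this section: max z = 3*x1 + 5*x2 subject to
#   x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
A = [(1, 0), (0, 2), (3, 2)]
b = [4, 12, 18]
c = (3, 5)

def lp_max(A, b, c, eps=1e-9):
    """Brute-force 2-variable LP: intersect every pair of lines
    (constraints plus the two axes), keep the feasible points,
    and return (z*, x*) for the best one."""
    lines = [(a1, a2, bi) for (a1, a2), bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]      # axes x1 = 0, x2 = 0
    best = None
    for (p, q, r), (s, t, u) in combinations(lines, 2):
        det = p * t - q * s
        if abs(det) < eps:                           # parallel lines
            continue
        x1 = (r * t - q * u) / det                   # Cramer's rule
        x2 = (p * u - r * s) / det
        if x1 < -eps or x2 < -eps:
            continue
        if all(a1 * x1 + a2 * x2 <= bi + eps for (a1, a2), bi in zip(A, b)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, (x1, x2))
    return best

print(lp_max(A, b, c))   # → (36.0, (2.0, 6.0))
```

A real problem would use the simplex method or an LP library, but for a two-variable example every vertex of the feasible polygon can simply be enumerated.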

We can measure the sensitivity of the optimal solution to a change in a right-hand-side term or in a coefficient of the objective.

## Study 1: variation in objective function

Modifying a coefficient c_{j} is equivalent to modifying the slope of the objective function.

The variation of a coefficient in the objective function over a certain interval does not lead to a modification of the optimal solution. Outside this interval, we have a new solution which itself remains optimal over another interval. We can thus highlight a finite number of variation intervals for c_{j}, with on each of them an invariant solution.

The optimal value x*_{j} measures the increase in the objective function when the unit cost c_{j} is increased by one. This behavior is logical and trivial, since the objective function is the sum of the terms c_{j} * x_{j}.

Let us change the objective function to max z' = 4 * x_{1} + 5 * x_{2}. The value of the objective function changes by x*_{1} = 2, while the solution vector is not modified, as shown by the graphical resolution:
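This can be checked numerically under the same assumed example data (max 3 * x_{1} + 5 * x_{2} with x_{1} ≤ 4, 2 * x_{2} ≤ 12, 3 * x_{1} + 2 * x_{2} ≤ 18); the small vertex-enumeration helper is only an illustration:

```python
from itertools import combinations

# Assumed example data: max z = 3*x1 + 5*x2 subject to
#   x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
A = [(1, 0), (0, 2), (3, 2)]
b = [4, 12, 18]

def lp_max(A, b, c, eps=1e-9):
    """Brute-force 2-variable LP: test every intersection of two lines
    (constraints plus axes), keep the feasible ones, return the best."""
    lines = [(a1, a2, bi) for (a1, a2), bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best = None
    for (p, q, r), (s, t, u) in combinations(lines, 2):
        det = p * t - q * s
        if abs(det) < eps:
            continue                                 # parallel lines
        x1 = (r * t - q * u) / det
        x2 = (p * u - r * s) / det
        if x1 < -eps or x2 < -eps:
            continue
        if all(a1 * x1 + a2 * x2 <= bi + eps for (a1, a2), bi in zip(A, b)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, (x1, x2))
    return best

z_old, pt_old = lp_max(A, b, (3, 5))     # z* = 36 at (2, 6)
z_new, pt_new = lp_max(A, b, (4, 5))     # c1 raised from 3 to 4
print(pt_new, z_new - z_old)             # → (2.0, 6.0) 2.0
```

The optimal vertex is unchanged, and the objective increases by exactly x*_{1} = 2 per unit of c_{1}.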

Likewise, if c_{1} goes from 3 to 2, only the value of the objective function is modified. To compute the interval of c_{1} over which the solution x* remains valid, we tilt the objective function until it becomes parallel to one of the other constraints.

That is, when the slope of the objective function equals the slope of one of the constraints that are binding (saturated) at the solution vector x*:

- z = c_{1} * x_{1} + 5 * x_{2} and 2 * x_{2} = 12, so -c_{1} / 5 = 0, hence c_{1} = 0;
- z = c_{1} * x_{1} + 5 * x_{2} and 3 * x_{1} + 2 * x_{2} = 18, so -c_{1} / 5 = -3/2, hence c_{1} = 15/2.

The solution x* = (2, 6) therefore remains optimal for c_{1} between 0 and 15/2.
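The interval [0, 15/2] can be verified numerically on the assumed example data (max c_{1} * x_{1} + 5 * x_{2} with x_{1} ≤ 4, 2 * x_{2} ≤ 12, 3 * x_{1} + 2 * x_{2} ≤ 18), again with an illustrative brute-force vertex enumeration:

```python
from itertools import combinations

# Assumed example data: max z = c1*x1 + 5*x2 subject to
#   x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
A = [(1, 0), (0, 2), (3, 2)]
b = [4, 12, 18]

def lp_max(A, b, c, eps=1e-9):
    """Brute-force 2-variable LP: test every intersection of two lines
    (constraints plus axes), keep the feasible ones, return the best."""
    lines = [(a1, a2, bi) for (a1, a2), bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best = None
    for (p, q, r), (s, t, u) in combinations(lines, 2):
        det = p * t - q * s
        if abs(det) < eps:
            continue                                 # parallel lines
        x1 = (r * t - q * u) / det
        x2 = (p * u - r * s) / det
        if x1 < -eps or x2 < -eps:
            continue
        if all(a1 * x1 + a2 * x2 <= bi + eps for (a1, a2), bi in zip(A, b)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, (x1, x2))
    return best

# inside the interval [0, 15/2] the optimal vertex stays at (2, 6)
for c1 in (0.5, 3.0, 7.0):
    assert lp_max(A, b, (c1, 5))[1] == (2.0, 6.0)

# just outside the interval, the optimum jumps to another vertex
print(lp_max(A, b, (8, 5)))   # → (47.0, (4.0, 3.0))
```

For c_{1} slightly above 15/2 the optimum moves to the vertex (4, 3), confirming the interval computed graphically.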

When the problem is of large dimension, the admissible cost variation can be computed with the simplex tableau by adding a delta to the cost being varied, as in the following example:

The solution remains optimal as long as the -Z row contains only negative coefficients, that is as long as:

## Study 2: variation of the right-hand side

When the right-hand side of a constraint varies (within a certain interval), if this constraint was not binding, then neither the solution nor the optimal value of the objective function changes. This result is obvious: since the optimal solution does not satisfy the constraint with equality, the right-hand side can vary (a little) without "touching" the optimal solution.

On the other hand, if the constraint is satisfied with equality at the optimum, there is an interval of variation for the right-hand side such that:

- The solution changes but the zero variables remain zero and the non-zero variables remain non-zero: the structure of the solution does not change.
- The variation d_{i} of right-hand side i causes a variation of the optimal value of the objective function equal to u_{i} * d_{i}, therefore proportional to d_{i}.

The dual cost u_{i} is equal to the change in the optimal value of the objective function when right-hand side i increases by one unit.

If we leave the interval, we obtain a new dual cost. We can thus highlight a finite number of variation intervals for the right-hand side with, on each of them, one value of the dual cost. Note that on these intervals the sensitivity analysis does not give the optimal solution itself, since the numerical values of the variables depend on the exact value of the right-hand side.

Consider in the example that b_{1} = 4 becomes b'_{1} = 5. Let us perform a graphical resolution of the new problem:

Constraint 1 being non-binding, its marginal value y*_{1} is zero: z'* - z* = y*_{1} = 0.

What about a decrease in b_{1}? As explained above for marginal costs, decreasing the right-hand side by 1 decreases the value of the objective function by an amount equal to the marginal cost. Since the marginal cost is zero here, decreasing b_{1} by 1 causes no change.
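This insensitivity to b_{1} can be checked numerically on the assumed example data (max 3 * x_{1} + 5 * x_{2} with x_{1} ≤ b_{1}, 2 * x_{2} ≤ 12, 3 * x_{1} + 2 * x_{2} ≤ 18); the brute-force vertex enumeration is just a sketch:

```python
from itertools import combinations

# Assumed example data: max z = 3*x1 + 5*x2 subject to
#   x1 <= b1,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
A = [(1, 0), (0, 2), (3, 2)]
c = (3, 5)

def lp_max(A, b, c, eps=1e-9):
    """Brute-force 2-variable LP: test every intersection of two lines
    (constraints plus axes), keep the feasible ones, return the best."""
    lines = [(a1, a2, bi) for (a1, a2), bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best = None
    for (p, q, r), (s, t, u) in combinations(lines, 2):
        det = p * t - q * s
        if abs(det) < eps:
            continue                                 # parallel lines
        x1 = (r * t - q * u) / det
        x2 = (p * u - r * s) / det
        if x1 < -eps or x2 < -eps:
            continue
        if all(a1 * x1 + a2 * x2 <= bi + eps for (a1, a2), bi in zip(A, b)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, (x1, x2))
    return best

# constraint 1 is not binding at x* = (2, 6): any b1 >= 2 changes nothing
for b1 in (2, 4, 5, 100):
    assert lp_max(A, [b1, 12, 18], c) == (36.0, (2.0, 6.0))

# below 2, the optimum finally moves
print(lp_max(A, [1.5, 12, 18], c))   # → (34.5, (1.5, 6.0))
```

This also anticipates the validity interval [2, +infinity) for y*_{1} = 0 discussed below.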

To know how far the stock can evolve without changing the structure of the optimal solution, a delta must be added to the right-hand side under study, as in the following example:

The solution remains optimal as long as the simplex does not become degenerate, i.e. as long as the right-hand side remains positive:

If b_{1} is less than 2, we can see on the graphical resolution that the optimal solution changes. The validity interval of y*_{1} = 0 is therefore b_{1} between 2 and infinity.

Consider now an increase of the third right-hand side to b'_{3} = 19. Since the marginal cost is non-zero, the optimal solution is modified, as shown by the graphical resolution.

We observe an increase of y*_{3} in the value of the objective function per unit increase of b_{3}, over an interval of b_{3} between 12 and 24.
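The behavior of the optimal value as b_{3} varies can be traced numerically on the assumed example data (max 3 * x_{1} + 5 * x_{2} with x_{1} ≤ 4, 2 * x_{2} ≤ 12, 3 * x_{1} + 2 * x_{2} ≤ b_{3}); the brute-force vertex enumeration is only illustrative:

```python
from itertools import combinations

# Assumed example data: max z = 3*x1 + 5*x2 subject to
#   x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= b3,  x1, x2 >= 0.
A = [(1, 0), (0, 2), (3, 2)]
c = (3, 5)

def lp_max(A, b, c, eps=1e-9):
    """Brute-force 2-variable LP: test every intersection of two lines
    (constraints plus axes), keep the feasible ones, return the best."""
    lines = [(a1, a2, bi) for (a1, a2), bi in zip(A, b)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    best = None
    for (p, q, r), (s, t, u) in combinations(lines, 2):
        det = p * t - q * s
        if abs(det) < eps:
            continue                                 # parallel lines
        x1 = (r * t - q * u) / det
        x2 = (p * u - r * s) / det
        if x1 < -eps or x2 < -eps:
            continue
        if all(a1 * x1 + a2 * x2 <= bi + eps for (a1, a2), bi in zip(A, b)):
            z = c[0] * x1 + c[1] * x2
            if best is None or z > best[0]:
                best = (z, (x1, x2))
    return best

zs = [lp_max(A, [4, 12, b3], c)[0] for b3 in (12, 18, 24, 30)]
print(zs)   # → [30.0, 36.0, 42.0, 42.0]
```

The optimal value climbs by 1 per unit of b_{3} on [12, 24] (the dual cost y*_{3} = 1 under these assumed data), then stays flat once b_{3} exceeds 24 and constraint 3 stops being binding.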

## Study 3: variation of non-basic variables

The reduced cost of a non-basic variable x_{j}, denoted d_{j}, measures the increase in the objective function if the value of the non-basic variable is increased by one unit. The reduced cost of x_{j} is the opposite of the coefficient of the variable in the objective row Z.

Let's go back to the previous example with a new constraint and a new variable:

We have the following reduced cost: d_{3} = -2. This means that producing one unit of x_{3} would decrease the value of the objective function by 2 (since it is outside the basis, x*_{3} = 0).

Let us set x_{3} to 1. We obtain the following three inequalities: x_{1} ≤ 3; 2 * x_{2} ≤ 10; 3 * x_{1} + 2 * x_{2} ≤ 15. The vector (5/3, 5) is a solution of this system. We therefore have a variation of x_{1} of 5/3 - 2 = -1/3 and of x_{2} of 5 - 6 = -1 in the objective function (new value - old value). The cost therefore evolves by -1/3 * c_{1} - 1 * c_{2} + 1 * c_{3} = -2 (1 * c_{3} because we go from a production of 0 to 1). We recover the value of the reduced cost of x_{3}.

For x_{3} to become profitable, its cost must increase by at least the opposite of its reduced cost.
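The arithmetic above can be checked numerically. Note that the data below are assumptions deduced from the text: the constraint column (1, 2, 3) for x_{3} comes from the three quoted inequalities, and the cost c_{3} = 4 is inferred from the equation -1/3 * c_{1} - 1 * c_{2} + 1 * c_{3} = -2 with c_{1} = 3 and c_{2} = 5:

```python
# Assumed example data: max 3*x1 + 5*x2 + c3*x3, where x3's constraint
# column (1, 2, 3) is deduced from the inequalities x1 <= 3,
# 2*x2 <= 10, 3*x1 + 2*x2 <= 15 obtained after fixing x3 = 1.
c = (3, 5, 4)                        # c3 = 4 inferred from d3 = -2
A = [(1, 0, 1), (0, 2, 2), (3, 2, 3)]
b = [4, 12, 18]

x_old = (2, 6, 0)                    # optimal solution, z* = 36
x_new = (5 / 3, 5, 1)                # solution of the system with x3 = 1

def z(x):
    """Objective value c . x."""
    return sum(ci * xi for ci, xi in zip(c, x))

def feasible(x):
    """Check x >= 0 and A x <= b (with a small tolerance)."""
    return all(xi >= 0 for xi in x) and all(
        sum(aij * xi for aij, xi in zip(row, x)) <= bi + 1e-9
        for row, bi in zip(A, b))

# producing one unit of x3 moves the objective by the reduced cost d3
print(feasible(x_new), round(z(x_new) - z(x_old), 6))   # → True -2.0
```

The new point is feasible and the objective drops by exactly 2, matching the reduced cost d_{3} = -2.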

## Study 4: variation of the production coefficients

If a coefficient a_{ij} changes in a saturated (binding) constraint, then in general neither the optimal solution nor the optimal value is preserved.

If a coefficient a_{ij} changes in an unsaturated constraint and concerns a basic variable, then the coefficient can vary from minus or plus infinity (depending on whether the problem is a min or a max) up to S_{i} / x*_{j}, with S_{i} the slack variable of the constraint.
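A quick numerical illustration under the same assumed example: in the non-binding constraint x_{1} ≤ 4 the slack is S_{1} = 4 - 2 = 2 and the basic variable takes the value x*_{1} = 2, so the coefficient of x_{1} can grow by at most S_{1} / x*_{1} = 1 before x* = (2, 6) violates the constraint:

```python
# Assumed example data: non-binding constraint a11 * x1 <= b1
# with a11 = 1, b1 = 4, at the optimal point x* = (2, 6).
x_star = (2, 6)
b1, a11 = 4, 1

slack = b1 - a11 * x_star[0]          # S1 = 2
limit = slack / x_star[0]             # S1 / x1* = 1.0

def still_feasible(delta):
    """Does x* still satisfy the perturbed constraint
    (a11 + delta) * x1 <= b1 ?"""
    return (a11 + delta) * x_star[0] <= b1 + 1e-9

print(limit, still_feasible(1.0), still_feasible(1.1))   # → 1.0 True False
```

At delta = S_{1} / x*_{1} the constraint becomes exactly binding; any larger increase makes the current optimum infeasible, as the rule above states.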

Indeed, by adding a delta to the coefficient of the basic variable in the targeted constraint, it suffices to inject the optimal vector and solve the resulting inequality, as in the following example:

If a coefficient a_{ij} of a non-basic variable changes in any constraint, then only a negative variation (in the case of a max) can make the product viable. We must then re-solve the simplex incorporating the delta and check the optimality criteria again.