Saturday, September 11, 2010

Dual Values for Primal Variable Bounds

A question arose recently about how one obtains from a linear program solver the shadow price (dual value) of a bound on one of the variables. The short answer: look at the reduced cost of the variable. Here's why.

Let's start with a generic linear program in which one particular variable ($x_{j}$) has specified, finite bounds ($l_{j}\le x_{j}\le u_{j}$):\[\begin{array}{lrclc}\textrm{minimize} & z & = & c'x\\\textrm{s.t.} & Ax & = & b\\ & x & \ge & 0\end{array}.\]We can write the bounds as explicit constraints, in which case they will have dual variables associated with them. The dual variables represent the marginal change to the optimal objective value $z$ per unit change in the corresponding bound.
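To make that concrete, here is a sketch of the augmented program (the multipliers $\lambda_{j}$ and $\mu_{j}$ are names I am introducing for the dual variables that would be attached to the two new rows):\[\begin{array}{lrclcl}\textrm{minimize} & z & = & c'x\\\textrm{s.t.} & Ax & = & b\\ & x_{j} & \ge & l_{j} & \quad & (\lambda_{j})\\ & x_{j} & \le & u_{j} & \quad & (\mu_{j})\\ & x & \ge & 0\end{array}\]At an optimum, $\lambda_{j}$ gives the marginal change in $z$ per unit increase in $l_{j}$, and $\mu_{j}$ the marginal change per unit increase in $u_{j}$.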

The more efficient approach, however, is to enter the bounds as bounds and let the software cope with them through modifications to the (primal or dual) simplex algorithm. This reduces the number of constraints and thus the size of basis matrices. If the bounds are not entered as constraints, though, there will be no associated dual variables. Where do we find the marginal change information?

Let $A=\left[B \quad N\right]$ be the decomposition of $A$ according to basic ($B$) and nonbasic ($N$) columns in the final, optimal basis. Anyone who has studied the simplex method at an introductory level knows the equations\begin{eqnarray*} x_{B} & = & B^{-1}b-B^{-1}Nx_{N}\\z & = & c_{B}'B^{-1}b+(c_{N}'-c_{B}'B^{-1}N)x_{N}\end{eqnarray*} from which (in the absence of bounds) we deduce the optimal solution\begin{eqnarray*} x_{N} & = & 0\\x_{B} & = & B^{-1}b\\z & = & c_{B}'B^{-1}b\end{eqnarray*}when the reduced costs of the nonbasic variables ($r=c_{N}'-c_{B}'B^{-1}N$) are all nonnegative. Until further notice, let's assume for simplicity that no basis is degenerate. The algorithmic modification to handle variable bounds in essence treats a variable that is at its lower or upper bound as being nonbasic (so nonbasic variables are no longer necessarily $0$ in the optimal solution). It also modifies pivoting rules so that a variable trying to enter the basis is kept nonbasic at one of its bounds if entry to the basis would lead to that bound being violated.
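The formulas above are easy to verify numerically. Here is a sketch on a tiny made-up LP (slacks $s_1, s_2$ added to convert inequalities to equations), computing $x_B = B^{-1}b$, the reduced costs, and the objective value directly from the basis:

```python
import numpy as np

# Tiny example: minimize -x1 - 2*x2 subject to
#   x1 + x2 + s1 = 4
#        x2 + s2 = 3,   all variables >= 0.
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])

basic = [0, 1]      # x1, x2 are basic in the optimal solution
nonbasic = [2, 3]   # s1, s2 are nonbasic

B, N = A[:, basic], A[:, nonbasic]
cB, cN = c[basic], c[nonbasic]

xB = np.linalg.solve(B, b)    # x_B = B^{-1} b (solve rather than invert)
y = np.linalg.solve(B.T, cB)  # y' = c_B' B^{-1}
r = cN - y @ N                # reduced costs of the nonbasic variables
z = cB @ xB                   # optimal objective value

print(xB)  # [1. 3.]
print(r)   # [1. 1.] -- all nonnegative, so the basis is optimal
print(z)   # -7.0
```

Solving the two triangular-ish systems with `np.linalg.solve` rather than forming $B^{-1}$ explicitly mirrors what production solvers do (they factor $B$, never invert it).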

Suppose that $x_{j}$ is simply nonnegative ($l_{j}=0$, $u_{j}=\infty$) and turns out to be nonbasic in the final solution. We don't normally think of nonnegativity as a constraint, but of course it is. If we were to allow $x_{j}$ to take on a small negative value (say $-\delta$, where $\delta>0$), then $x_{B}$ would change by $B^{-1}A_{j}\delta$ (where $A_{j}$ is the constraint column for $x_{j}$), and $z$ would change by $-r_{j}\delta\le0$. Similarly, if we raised the lower bound from $0$ to $\delta$, $z$ would change by $+r_{j}\delta\ge0$. The key is to keep $\delta$ small enough that the modified value of $x_{B}$ remains nonnegative (or, more generally, within its bounds), which we can do if the solution is not degenerate. So the marginal rate of change of $z$ per unit change in the lower bound of $x_{j}$ is just $r_{j}$, the reduced cost, and that works whether or not $l_{j}=0$.
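This is easy to check with a solver. The sketch below (made-up data, using SciPy's HiGHS-based `linprog`, which reports bound sensitivities in `res.lower.marginals`) nudges the lower bound of a nonbasic variable upward and confirms that the objective moves at exactly the rate given by the reduced cost:

```python
from scipy.optimize import linprog

# Minimize 2*x1 + 3*x2 subject to x1 + x2 >= 5, x >= 0.
# Optimum: x = (5, 0), z = 10; x2 is nonbasic at its lower bound.
c = [2.0, 3.0]
A_ub = [[-1.0, -1.0]]  # x1 + x2 >= 5 rewritten as -x1 - x2 <= -5
b_ub = [-5.0]

base = linprog(c, A_ub=A_ub, b_ub=b_ub,
               bounds=[(0, None), (0, None)], method="highs")
r2 = base.lower.marginals[1]  # reduced cost of x2 = shadow price of its lower bound

# Raise the lower bound of x2 from 0 to delta and re-solve.
delta = 0.25
bumped = linprog(c, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(delta if i == 1 else 0, None) for i in range(2)],
                 method="highs")

print(r2)                               # 1.0
print((bumped.fun - base.fun) / delta)  # 1.0 -- matches the reduced cost
```

Because the base solution is nondegenerate, the finite-difference rate matches the reduced cost exactly for any sufficiently small $\delta$, not just in the limit.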

Now suppose that $x_{j}$ has a finite upper bound $u_{j}$, and that $x_{j}$ is again nonbasic in the (nondegenerate) optimal solution, but this time at its upper bound. In contrast to the previous case, $x_{j}$ will have a favorable (nonpositive) reduced cost, $r_{j}\le0$; otherwise it would not have been driven to its upper bound. Once again, if we change the upper bound by $\delta$, the change in $z$ is $r_{j}\delta$, and so $r_{j}$ is the shadow price of the upper bound. The shadow price of the lower bound is $0$ by the complementary slackness principle (CS), since the lower bound is nonbinding. (If instead $x_{j}$ is nonbasic at its lower bound, $r_{j}$ is the shadow price of the lower bound, and CS makes the shadow price of the upper bound $0$.)
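The equivalence between the reduced cost and the dual of an explicit bound constraint can also be checked numerically. This sketch (made-up data, SciPy's HiGHS-based `linprog`) solves the same LP twice: once with the upper bound entered as a bound, reading its shadow price off `res.upper.marginals`, and once with the bound entered as an explicit constraint row, reading that row's dual off `res.ineqlin.marginals`:

```python
from scipy.optimize import linprog

# Minimize -x1 - 2*x2 subject to x1 + x2 <= 4, 0 <= x2 <= 3.
# Optimum: x = (1, 3), z = -7; x2 is nonbasic at its upper bound.
c = [-1.0, -2.0]

# Version 1: x2 <= 3 entered as a variable bound.
v1 = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[4.0],
             bounds=[(0, None), (0, 3)], method="highs")
rc = v1.upper.marginals[1]      # reduced cost of x2

# Version 2: x2 <= 3 entered as an explicit constraint row.
v2 = linprog(c, A_ub=[[1.0, 1.0], [0.0, 1.0]], b_ub=[4.0, 3.0],
             bounds=[(0, None), (0, None)], method="highs")
dual = v2.ineqlin.marginals[1]  # dual value of the bound row

print(rc, dual)  # both -1.0: raising u2 by one unit lowers z by one unit
```

Both numbers are $-1$: relaxing the binding upper bound by one unit improves (decreases) the minimized objective by one unit, consistent with $r_{j}\le0$ at an upper bound.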

What if $x_{j}$ is basic? In a nondegenerate solution, that will imply $l_{j}<x_{j}<u_{j}$, and complementary slackness tells us the shadow price of both bounds must be $0$, since both are nonbinding. That leaves us the possibility of a degenerate solution, in which $x_{j}$ is simultaneously basic (which forces its reduced cost to be $0$) and at one of its bounds. Degeneracy means the shadow price of whichever bound $x_{j}$ attained is not unique; the reduced cost ($0$) is one of the valid shadow prices, but any change of that bound in the "wrong" direction, however small, will trigger a shift to a different shadow price.

9 comments:

  1. There is very nice theory on doing "correct" sensitivity analysis, which is not bound by the annoying nondegeneracy assumption. You have to do more work, which involves reoptimizing, but what is amazing is that nobody seems to find this even slightly interesting. I have never seen anybody use it in a real-life application; people just use the standard ranges because that is what they were taught back in school, even though those can lead to faulty conclusions if one is not careful.

    ReplyDelete
  2. You raise a good point, and it ties into another concern I had when I taught sensitivity analysis, which is whether it's really all that useful when (in most cases) you can just tweak the model and reoptimize rather cheaply. Since you made a related post on OR-Exchange (http://www.or-exchange.com/questions/680/sensitivity-analysis-avoiding-pitfalls), I gave a (much!) lengthier answer there. I'll just repeat here the link you provided to MOSEK's explanation of how to do sensitivity: http://www.mosek.com/fileadmin/products/5_0/tools/doc/html/tools/node016.html.

    ReplyDelete
  3. A related question has come up twice in recent weeks on CPLEX forums: if the reduced cost is non-zero, how do you know to which bound it applies? The obvious answer is "get the variable value and compare it to its lower and upper bounds", but actually in CPLEX there may be a better way (which avoids issues of rounding errors). In the C++ and Java APIs there are methods (getBasisStatus for one variable, getBasisStatuses for a vector of variables) that will return a status indicator for a variable. If it is "AtLower", the reduced cost is the shadow price for the lower bound, and similarly for "AtUpper". I'm pretty sure the other APIs provide this information as well.

    ReplyDelete
  4. This comment has been removed by the author.

    ReplyDelete
  5. Hi Prof. Rubin
    How can we get the reduced costs at each node of the tree if we use callbacks in IloCplex?

    ReplyDelete
    Replies
    1. In the C API, possibly (I'm not sure). In the "higher" APIs (C++, Java, C#, ...), I don't think so; but you can get pseudocosts with getDownPseudoCost() and getUpPseudoCost() (plus "plural" versions of each).

      Delete
    2. Thanks a lot.
      What I wanted to do was some variable fixing based on the reduced costs. So I can only use the pseudocosts in a heuristic fashion, to select which variable to branch on. Am I right?

      Delete
    3. Yes. If your thinking was to fix variables for which rounding to the next integer in one direction would be guaranteed by the reduced cost to be suboptimal (and rounding in the other direction either would be guaranteed infeasible or would not be possible because the variable has an integer value in the LP solution), I would not be surprised if CPLEX is already doing those fixings for you.

      Delete
  6. This comment has been removed by a blog administrator.

    ReplyDelete

Due to intermittent spamming, comments are being moderated. If this is your first time commenting on the blog, please read the Ground Rules for Comments. In particular, if you want to ask an operations research-related question not relevant to this post, consider asking it on Operations Research Stack Exchange.