Here's a topic that arises repeatedly on the CPLEX support forums: the differences between an LP/MIP model built "internally" using CPLEX Concert technology, the same model written to a file using CPLEX's SAV format (and then read back in, or solved using the interactive solver), and the same model written to a file using CPLEX's industry-not-quite-standard LP format. Since I can never find links to previous discussions when I need them, I'll put a quick summary here. I suspect it applies to other solvers that support both binary and text file formats; I know it applies to CPLEX.
Many factors influence the sequence of pivots a solver does on an LP model and the sequence of nodes it grinds through in a MIP model. If you solve the same model two different ways, using the same or different software, you should end up with the same optimal objective value (to within rounding); that's the definition of "optimal". If you don't, either you've found a bug in the software or, more likely, your problem is mathematically ill-conditioned. Even if the problem is well-conditioned, and you get the same objective value both places, you may not get the same optimal solution. You should if the solution is unique; but if there are multiple optima, you may get one optimal solution one time and a different one the next time.
This is true even if you use the same program (for instance, CPLEX) both times. Small, seemingly innocuous things, such as the order in which variables or constraints are entered, or the compiler used to compile the respective CPLEX versions, can make a difference. (Different C/C++ compilers may generate different floating-point code, with slightly different rounding behavior.)
The bigger (to me) issue, though, is that if you dump the model to a file and solve the file with the interactive solver, you are not solving the same model. You are solving what is hopefully a close (enough) replica of the original model. The binary output format (SAV for CPLEX) will generally be closer to the original model than the text (LP or MPS for CPLEX) format will be. I'm not an expert on the SAV file format, so I'll allow that the SAV file may be an exact duplicate. That would require not only full fidelity in the representation of coefficients but also that the internal model you get by reading and processing the SAV file has all variables and constraints in exactly the same order they appeared in the internal model from which the SAV file was created. I'm not sure that's true, but I'm not sure it isn't.
What I am sure of is that the text version is not, in general, a full-fidelity copy. Not only do we have the same questions about preservation of order, but we also have the rounding errors incurred by converting a double-precision binary number to a text representation and then back to a double-precision binary value. If your model is well-conditioned, hopefully the differences are small enough not to affect the optimal solution. They may still affect the solution process, though. Suppose that you have an LP model with some degeneracy (which is not a sign of ill-conditioning). Degeneracy means a tie occurs occasionally when selecting pivot rows. At least it's a tie in theory; in practice, tiny rounding errors may serve to break the tie, and those rounding errors may well be different in the original version and in the not-quite-perfect clone you get by saving a text version of the model and then reading it back in. So the solver working on the original model pivots in one row, the solver working on the model from the text file pivots in a different row, and the solution time and possibly the solution itself can change.
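To see the round-trip problem concretely, here is a tiny plain-Java illustration (nothing CPLEX-specific; the assumption that the text file keeps ten significant digits is mine, purely for the sake of the example):

// A toy round trip: format a coefficient the way a text file might store it,
// parse it back, and compare to the original double.
public class RoundTrip {
    public static void main(String[] args) {
        double original = 1.0 / 3.0;                       // a coefficient in the internal model
        String inFile = String.format("%.10g", original);  // what a text-format file might record
        double readBack = Double.parseDouble(inFile);      // what gets reconstructed on reading
        System.out.println(original == readBack);          // false: the clone is not exact
        System.out.println(original - readBack);           // tiny difference, but enough to break a "tie"
    }
}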
The first immediate take-away from this is that if you want to compare the solution your code gets to the solution the interactive solver gets, use the binary file format. If you also want to visually check the model, save it as both binary and text files (and feed the binary file to the interactive solver). Don't rely on the text file to replicate solver behavior.
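For instance, with the Concert Java API you can write both formats from the same in-memory model. A minimal sketch (the file names are arbitrary; exportModel picks the format from the file extension):

import ilog.concert.IloException;
import ilog.cplex.IloCplex;

public class ExportBoth {
    public static void main(String[] args) throws IloException {
        IloCplex cplex = new IloCplex();
        // ... build the model here; a one-variable placeholder keeps this runnable ...
        cplex.addMinimize(cplex.numVar(0.0, 1.0, "x"));
        cplex.exportModel("model.sav"); // binary copy: feed this one to the interactive solver
        cplex.exportModel("model.lp");  // text copy: use it for visual inspection only
        cplex.end();
    }
}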
The second take-away is to modulate your expectations. In a just and orderly universe, solving the same model with the same software, on two different platforms and regardless of how you passed it between the software copies, would produce the same solution. A just and orderly universe, however, would not contain politicians. So accept that differences may arise.
Wednesday, September 22, 2010
Saturday, September 11, 2010
Dual Values for Primal Variable Bounds
A question arose recently about how one obtains from a linear program solver the shadow price (dual value) of a bound on one of the variables. The short answer: look at the reduced cost of the variable. Here's why.
Let's start with a generic linear program in which one particular variable ($x_{j}$) has specified, finite bounds ($l_{j}\le x_{j}\le u_{j}$):\[\begin{array}{lrclc}\textrm{minimize} & z & = & c'x\\\textrm{s.t.} & Ax & = & b\\ & x & \ge & 0\end{array}.\]We can write the bounds as explicit constraints, in which case they will have dual variables associated with them. The dual variables represent the marginal change to the optimal objective value $z$ per unit change in the corresponding bound.
The more efficient approach, however, is to enter the bounds as bounds and let the software cope with them through modifications to the (primal or dual) simplex algorithm. This reduces the number of constraints and thus the size of basis matrices. If the bounds are not entered as constraints, though, there will be no associated dual variables. Where do we find the marginal change information?
Let $A=\left[B \quad N\right]$ be the decomposition of $A$ according to basic ($B$) and nonbasic ($N$) columns in the final, optimal basis. Anyone who has studied the simplex method at an introductory level knows the equations\begin{eqnarray*} x_{B} & = & B^{-1}b-B^{-1}Nx_{N}\\z & = & c_{B}'B^{-1}b+(c_{N}'-c_{B}'B^{-1}N)x_{N}\end{eqnarray*} from which (in the absence of bounds) we deduce the optimal solution\begin{eqnarray*} x_{N} & = & 0\\x_{B} & = & B^{-1}b\\z & = & c_{B}'B^{-1}b\end{eqnarray*}when the reduced costs of the nonbasic variables ($r=c_{N}'-c_{B}'B^{-1}N$) are all nonnegative. Until further notice, let's assume for simplicity that no basis is degenerate. The algorithmic modification to handle variable bounds in essence treats a variable that is at its lower or upper bound as being nonbasic (so nonbasic variables are no longer necessarily $0$ in the optimal solution). It also modifies pivoting rules so that a variable trying to enter the basis is kept nonbasic at one of its bounds if entry to the basis would lead to that bound being violated.
Suppose that $x_{j}$ is simply nonnegative ($l_{j}=0$, $u_{j}=\infty$) and turns out to be nonbasic in the final solution. We don't normally think of nonnegativity as a constraint, but of course it is. If we were to allow $x_{j}$ to take on a small negative value (say $-\delta$, where $\delta>0$), then $x_{B}$ would change by $B^{-1}A_{j}\delta$ (where $A_{j}$ is the constraint column for $x_{j}$), and $z$ would change by $-r_{j}\delta\le0$. Similarly, if we raised the lower bound from $0$ to $\delta$, $z$ would change by $+r_{j}\delta\ge0$. The key is to keep $\delta$ small enough that the modified value of $x_{B}$ remains nonnegative (or, more generally, within its bounds), which we can do if the solution is not degenerate. So the marginal rate of change of $z$ per unit change in the lower bound of $x_{j}$ is just $r_{j}$, the reduced cost, and that works whether or not $l_{j}=0$.
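As a concrete (toy) illustration: minimize $z=x_{1}+2x_{2}$ subject to $x_{1}+x_{2}=10$ with $x_{1},x_{2}\ge0$. The optimum is $x_{1}=10$, $x_{2}=0$, $z=10$, with $x_{2}$ nonbasic and reduced cost $r_{2}=2-1=1$. Raise the lower bound on $x_{2}$ from $0$ to $\delta$ and the solution becomes $x_{2}=\delta$, $x_{1}=10-\delta$, $z=10+\delta$: the objective changes at exactly the rate $r_{2}$ per unit change in the bound.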
Now suppose that $x_{j}$ has a finite upper bound $u_{j}$, and that $x_{j}$ is again nonbasic in the (nondegenerate) optimal solution, but this time at its upper bound. In contrast to the previous case, $x_{j}$ will have a favorable (nonpositive) reduced cost, else it would not be at its upper bound. This time $r_{j}\le0$. Once again, if we change the upper bound by $\delta$, the change in $z$ is $r_{j}\delta$, and so $r_{j}$ is the shadow price of the upper bound. The shadow price of the lower bound is $0$ due to the complementary slackness principle (CS), since the lower bound is nonbinding. (If $x_{j}$ is nonbasic at its lower bound, $r_{j}$ is the shadow price of the lower bound and CS makes the shadow price of the upper bound $0$).
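Continuing the toy illustration, add the upper bound $x_{1}\le8$. The optimum becomes $x_{1}=8$, $x_{2}=2$, $z=12$, with $x_{1}$ nonbasic at its upper bound and $r_{1}=1-2=-1$. Raising the bound to $8+\delta$ yields $x_{1}=8+\delta$, $x_{2}=2-\delta$, $z=12-\delta$, a change of $r_{1}\delta$; the lower bound $x_{1}\ge0$ is slack, so its shadow price is $0$, as CS requires.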
What if $x_{j}$ is basic? In a nondegenerate solution, that will imply $l_{j}<x_{j}<u_{j}$, and complementary slackness tells us the shadow price of both bounds must be $0$, since both are nonbinding. That leaves us the possibility of a degenerate solution, in which $x_{j}$ is simultaneously basic (which forces its reduced cost to be $0$) and at one of its bounds. Degeneracy means the shadow price of whichever bound $x_{j}$ attained is not unique; the reduced cost ($0$) is one of the valid shadow prices, but any change of that bound in the "wrong" direction, however small, will trigger a shift to a different shadow price.
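In CPLEX, this all boils down to querying reduced costs rather than looking for dual values attached to the bounds. Here is a minimal sketch using the Concert Java API and the toy model above (the class and variable names are mine, not anything shipped with CPLEX):

import ilog.concert.IloException;
import ilog.concert.IloNumVar;
import ilog.concert.IloRange;
import ilog.cplex.IloCplex;

public class BoundShadowPrices {
    public static void main(String[] args) throws IloException {
        IloCplex cplex = new IloCplex();
        // minimize x1 + 2*x2 subject to x1 + x2 = 10, 0 <= x1 <= 8, x2 >= 0
        IloNumVar x1 = cplex.numVar(0.0, 8.0, "x1");
        IloNumVar x2 = cplex.numVar(0.0, Double.MAX_VALUE, "x2");
        IloRange ct = cplex.addEq(cplex.sum(x1, x2), 10.0, "ct");
        cplex.addMinimize(cplex.sum(x1, cplex.prod(2.0, x2)));
        if (cplex.solve()) {
            // dual value (shadow price) of the explicit constraint
            System.out.println("dual(ct)      = " + cplex.getDual(ct));        // expect  2
            // "shadow prices" of the variable bounds come from the reduced costs
            System.out.println("red. cost(x1) = " + cplex.getReducedCost(x1)); // expect -1 (x1 at its upper bound)
            System.out.println("red. cost(x2) = " + cplex.getReducedCost(x2)); // expect  0 (x2 is basic)
        }
        cplex.end();
    }
}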
Monday, September 6, 2010
Give Developers Crappy Machines
The subject line no doubt sounds like heresy in some quarters, but I have my reasons. Software developers typically get pretty fast workstations with multicore processors, lots of memory, big displays (and/or multiple displays), etc. It makes sense from a productivity standpoint. The problem is that in many cases it disconnects them from their user base.
Case in point: I recently downloaded the (free) academic version of CPLEX Optimization Studio 12.2 from IBM and installed it on my home PC. My home PC is a one-lung eMachines box running Win XP (and hence a gaggle of defensive programs -- antivirus, firewall, etc.). I do "serious work" on other, faster machines (running Linux), but this one is good enough for handling mundane chores, including some programming. I have both the Eclipse and Netbeans IDEs on it (sorry Eclipse fans, but so far I haven't found any Eclipse tools for building GUIs that match Netbeans).
So I'm looking for the documentation for CPLEX 12.2. In past versions, this would be some combination of HTML files, PDFs and (on Windows) CHM (Windows help) files. In the latest version, though, they've taken a different route. The Start menu has links to various help entries, but they all point to the same thing -- they launch the help component of OPL's IDE. In other words, to look up anything, I have to launch a stripped/modified version of Eclipse, which can take between one and two minutes on my PC. (After the first launch, subsequent launches go faster.) Between the IDE and the instance of Java it runs on, it eats upwards of 130MB of RAM, which seems like a pretty big footprint for a help browser, particularly if I've got a programming IDE going (both Eclipse and Netbeans run on Java themselves), a web browser and maybe an e-mail client open, etc. I have to wonder whether the developers would have gone with an IDE-only help system (no separate HTML or PDF docs, no CHM files) if they'd been forced to use it themselves on a system like mine.
There may be a lesson here for OR people, too. I think sometimes we over-engineer solutions (I know I'm prone to this) because we're comfortable with specifying mutation rates for genetic algorithms or deciding whether to use dual simplex or barrier algorithms. In our case, the key is not so much to "walk in the user's shoes" by living with the user's resources as to put ourselves in the user's head and see how much of what we're putting in front of them would generate a big "huh??".