In her "President's Desk" column in the August issue of OR/MS Today, INFORMS President Susan Albin quotes Seth Bonder's keynote address at the 2007 INFORMS meeting, describing how the National Academy of Engineering defines operations research: "Development and use of analytical methods to describe, analyze, plan, design, manage and integrate the operations of systems and enterprises that involve complex interactions among people, processes, materials, equipment, information, organizations and facilities to provide services and produce goods for society."

The question of how to define OR crops up periodically in a variety of forums. Here's a thread on sci.op-research from 1999 about it, and here's a discussion on OR-Exchange from earlier this year. INFORMS Online has a brief definition on its home page (upper right) with links to delve a bit deeper (including, handily, separate definitions of OR and MS).

What struck me about Susan Albin's column was her interpretation of the NAE definition, and how it distinguishes OR from other engineering disciplines: "Each of the latter focus on physical processes ... In contrast, operations researchers focus on operations processes ..." So if we adopt the NAE definition (or perhaps a more compact version thereof? my fingers are still cramped from retyping it), all we need do now is figure out how to distinguish OR from operations management. I suspect the definition I use locally ("we do the same things, except we understand the math") won't fly globally.

## Sunday, August 29, 2010

## Friday, August 27, 2010

### The Sting of Rejection

In the latest issue of OR/MS Today, there's an interesting and provocative letter from Professor James Ignizio. You may well know his name from the literature on multicriterion decision making. Certainly his record (about 150 peer reviewed journal articles) speaks for itself, and (largely out of jealousy :-) ) I will not comment further on it. Pride of authorship aside, if Prof. Ignizio has a paper he thinks makes a publishable contribution, I'm inclined to respect his judgment.


His letter describes a similarity between his first submission to an OR journal (*Operations Research*, to be precise) and his most recent submission, 40 years later, both of which were rejected. The first rejection, of a paper that described the solution to an important real-world problem (if I can use "real-world" to describe space travel), specified insufficient advancement of OR *theory* (my emphasis added), while the most recent gives a similar "insufficient contribution" reason. (The rapid turnaround of the most recent paper guarantees that this was a "desk rejection" by the editor.)

I had a similar experience years ago, with a paper on quantity discounts co-authored with a colleague in operations management from another school. The problem was his, the methodology mine. The solution method mixed a bit of nonlinear programming, some Lagrangean relaxation and a dash of partial enumeration. The rejection criterion from the first journal to which we submitted was that the solution approach, while somewhat novel, used off-the-shelf methods and did not involve any brand new techniques. (The paper was subsequently accepted by a different OR journal of comparable stature.)

What my experience and Prof. Ignizio's have in common is that the rejection criterion seemed to focus on a lack of advancement of theory and/or lack of a brand new algorithm. I could understand this from *Math of OR*, but it seems to be an increasingly common attitude among top OR journals in general. There may be a lack of appreciation in some quarters for (a) whether a significant problem is being solved (and I'll stipulate freely that the problem in Prof. Ignizio's paper was more significant than the one in our paper) and (b) whether the solution approach, if not an algorithmic breakthrough in itself, might trigger a "hey, I can use that" response in a reader struggling with an entirely different problem.

In defense of journal editors, the pendulum can easily swing too far the other way. I've reviewed entirely too many "look, we can solve this with an (obvious) linear programming model" submissions (not to mention a student-written conference paper which posed a goal programming model with a single criterion). Clearly some contribution threshold has to be met to justify slaughtering more trees for journal pages.

All that said, though, the academic publication model in OR appears to be gravitating toward something along the following lines: theoretical and computational breakthroughs go into high level journals (where they are likely to be read by other people doing theoretical research and not so much by potential users); models and solution techniques for actual problems go to high level journals *if* there is something singularly clever (and not at all obvious) about either the model or the solution; and moderately clever models and solutions for realistic problems are relegated (if I dare use that word) to journals in application areas (where the readers may appreciate the result but not understand the model or methodology very well) or to *Interfaces* (where the details will typically be omitted). That may short-change operations research practitioners (who will miss a potentially useful contribution if it is published in a content-area journal far from their own domains) and academics trying to be relevant despite the odds. (As an example of the latter, I co-authored an application paper years ago in the *Canadian Journal of Forestry*. I'm told by people who should know that it's a well-respected journal in the forestry domain, but I'm in a business school, so I got about the same amount of credit I would have gotten for hitting *TV Guide*. Fortunately, I was too obtuse to be deterred.)

I'm not sure what the right answer is. I do think some journals are making a conscious effort to be more inclusive. Whether they're succeeding remains to be determined. Meanwhile, practitioners complain about the lack of relevance of academic "theorists", and we academics wonder why.

## Wednesday, August 25, 2010

### Absolute Value Inequalities

The following question arose today: given divisible variables $x,y\in(-\infty,+\infty)$,
how does one implement in a mathematical program the constraint $|x|\le|y|$?

To start, I do not think it can be done with unbounded variables. Then again, I'm not a big believer in unbounded variables in practice. (Physicists seem to believe the number of atoms in the universe is finite, which gives us a starting point at bounding all physical variables. They also believe the universe will eventually suffer heat death, which gives us a start at bounding time variables.)

Let us assume instead that $L_{x}\le x\le U_{x}$ and $L_{y}\le y\le U_{y}$, with no restrictions on which bounds are positive or negative. Now add a binary variable $z$, which will take value 0 if $y$ is positive and 1 if $y$ is negative. We enforce that with the constraints\begin{eqnarray*}y & \le & U_{y}\times(1-z)\qquad(1)\\y & \ge & L_{y}\times z.\qquad(2)\end{eqnarray*} Note that the value of $z$ is undetermined if $y=0$, which will not be a problem. Now add the constraints\begin{eqnarray*} x & \le & y+\left(U_{x}-L_{y}\right)\times z\qquad(3)\\ -x & \le & y-\left(L_{x}+L_{y}\right)\times z\qquad(4)\\ x & \le & -y+\left(U_{y}+U_{x}\right)\times\left(1-z\right)\qquad(5)\\ -x & \le & -y+\left(U_{y}-L_{x}\right)\times\left(1-z\right).\qquad(6)\end{eqnarray*} If $y>0$, then (1) forces $z=0$, in which case (5) and (6) become vacuous (since $x\le U_{x}\le U_{x}+(U_{y}-y)$ and $-x\le-L_{x}\le-L_{x}+(U_{y}-y)$ respectively). Meanwhile, (3) and (4) become $x\le y$ and $-x\le y$, which enforces $|x|\le y=|y|$. Similarly, if $y<0$, then (2) forces $z=1$, which makes (3) and (4) vacuous and turns (5) and (6) into $x\le-y$ and $-x\le-y$, so that $|x|\le-y=|y|$. Finally, if $y=0$, then it does not matter whether $z=0$ or $z=1$, since either (3) and (4) (if $z=0$) or (5) and (6) (if $z=1$) will force $x=0$.
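As a sanity check on the formulation, here is a brute-force Python sketch (the bound values below are my own illustrative choices, not part of the original question) confirming that some choice of the binary $z$ satisfies constraints (1)-(6) exactly when $|x|\le|y|$:

```python
# Brute-force check of constraints (1)-(6) on a grid of points.
# The bounds below are arbitrary illustrative values.
Lx, Ux = -3.0, 4.0
Ly, Uy = -2.0, 5.0

def feasible(x, y, z):
    """True if (x, y, z) satisfies all of constraints (1)-(6)."""
    return (y <= Uy * (1 - z)                      # (1)
            and y >= Ly * z                        # (2)
            and x <= y + (Ux - Ly) * z             # (3)
            and -x <= y - (Lx + Ly) * z            # (4)
            and x <= -y + (Uy + Ux) * (1 - z)      # (5)
            and -x <= -y + (Uy - Lx) * (1 - z))    # (6)

def admitted(x, y):
    """True if some choice of the binary z makes (x, y) feasible."""
    return feasible(x, y, 0) or feasible(x, y, 1)

# The formulation should admit (x, y) exactly when |x| <= |y|.
for xi in range(-3, 5):
    for yi in range(-2, 6):
        x, y = float(xi), float(yi)
        assert admitted(x, y) == (abs(x) <= abs(y)), (x, y)
print("formulation verified on the integer grid")
```

In an actual model, of course, the solver would pick $z$; the point of the sketch is only that the six inequalities carve out the right region.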


## Tuesday, August 24, 2010

### Dopey Defaults II

Like many programs, NetBeans IDE automatically checks for updates. Like many programs built to be extensible, NetBeans updates components (or plugins) individually. So it came as no shock when I cranked up NetBeans 6.9 on my home PC this morning and it announced that there were 23 or so updates available. I told it to download and install them. Normally I get out of the way while NB does this, but I was in a bit of a hurry today, so I told it to download/install in the background while I worked on a small Java app ... which may have led to my downfall.


After it was done updating itself, NB called for me to restart it. I held off until I'd finished and saved some changes, then gave it the go-ahead. When it tried to restart, I was presented with a *very* lengthy list of error messages, all of which seemed to have something to do with either missing plugins or unsatisfied dependencies. I clicked past that, and NB opened up -- sort of. I didn't get the usual start page. I did get the file I'd been editing (a Java main class) and had left open, but the corresponding project did not open. So I tried to open it the usual way, but NB would not open it -- or any other Java project -- and did not seem to recognize them as projects at all.

Grump, grump: I bit the bullet, downloaded the Java SE version (all I need, and the smallest at about 54MB), uninstalled and reinstalled, but keeping my old settings (which, again, may turn out to have been shooting myself in the foot). The installer ran successfully and when I started it I got no error messages ... but also no projects. Once again, the file I'd been editing was open but I could not open any projects. This time, though, I noticed that when I highlighted a project directory there was a curious message in the open-project dialog that said something about a possibly uninstalled plugin (without specifying which). So I checked in Tools > Plugins > Installed and, lo and behold, 23 plugins (the correct number) were installed but most were disabled -- including *all the Java development plugins*.

So there's the dopey part: either the borked upgrade of the original copy of NB 6.9 changed a setting somewhere telling NB that none of the Java development stuff was active, or the 6.9.1 installer, *designed for Java development*, installed with the Java stuff inactive (which it's never done before). Dopey default? Or bad handling of upgrade errors? At least the fix is easy, once you recognize the problem.

## Monday, August 16, 2010

### Iterating over a CPLEX Model: the Sequel

In a previous post I showed methods for iterating over the constraints of a CPLEX model. One can also iterate over the model as a whole, but it turns out there are some quirks. I'll enumerate a couple here. Whether they are also quirks in the C++ API I'm not sure.


The example model is the same small LP I used before, so I won't repeat it here. I will repeat the Java code to generate the model, though, so that I can tweak it later.

```
IloCplex cplex = new IloCplex();
IloNumVar[] vars = new IloNumVar[3];
vars[0] = cplex.numVar(0, Double.MAX_VALUE, "x");
vars[1] = cplex.numVar(0, Double.MAX_VALUE, "y");
vars[2] = cplex.numVar(0, Double.MAX_VALUE, "z");
cplex.addMaximize(
    cplex.scalProd(new double[]{1., 2., 3.}, vars),
    "Obj");
// first constraint: x + y + z <= 5
cplex.addLe(cplex.sum(vars), 5., "Con1");
// second constraint: y + 2z <= 3
cplex.addLe(
    cplex.scalProd(new double[]{0., 1., 2.}, vars),
    3., "Con2");
// third constraint: x = z
cplex.addEq(vars[0], vars[2], "Con3");
```

This is pretty vanilla stuff. If we print cplex.toString(), we get a nice, readable representation of the model:

    IloModel {
    IloMaximize : 1.0*x + 2.0*y + 3.0*z
    IloRange Con1 : -infinity <= 1.0*x + 1.0*y + 1.0*z <= 5.0
    IloRange Con2 : -infinity <= 1.0*y + 2.0*z <= 3.0
    IloRange Con3 : 0.0 <= 1.0*x - 1.0*z <= 0.0
    }

One (minor?) surprise so far: the objective function's somewhat unimaginative name ("Obj") did not print out. Now suppose that, for some reason, I want to disassemble the model into its components. CPLEX will give me an iterator (an instance of java.util.Iterator) that I can use to iterate over the model. Here comes the first surprise. Consider the following code, which should iterate over the model and print object names along with some context information:

```
Iterator it = cplex.iterator();
while (it.hasNext()) {
    Object thing = it.next();
    if (thing instanceof IloNumVar) {
        System.out.print("Variable ");
    }
    else if (thing instanceof IloRange) {
        System.out.print("Constraint ");
    }
    else if (thing instanceof IloObjective) {
        System.out.print("Objective ");
    }
    System.out.println("named " +
        ((IloAddable) thing).getName());
}
```

Here's the output:

    Objective named null
    Variable named Con1
    Variable named Con2
    Variable named Con3

Notice anything funny? Besides being consistent about denying that the objective function has a name, the iterator identifies the three constraints as variables (and does not identify any of the variables as anything). I'm told by an insider that listing the constraints as variables turns out to be a "feature" of CPLEX: for somewhat arcane reasons associated with using IloConstraint instances as arguments to the ifThen() method, IloRange implements the IloNumVar interface. At any rate, to use the current cliche, it is what it is.

So I have three things to fix: get CPLEX to acknowledge the name of the objective function; get CPLEX to distinguish constraints from variables; and get CPLEX to list the variables. There is an easy workaround for the first issue (objective name): I change the statement introducing the objective function to

```
cplex.addMaximize(
    cplex.scalProd(new double[]{1., 2., 3.}, vars)
).setName("Obj");
```

Applying the setName method to the instance of IloObjective returned by addMaximize gets the name installed correctly. Fixing the third issue (finding the variables) is also simple: I can just add them explicitly into the model. It's redundant, given that they already occur in the model, but seems harmless and gets them listed. I'll insert the following line at the end of model construction:

```
cplex.add(vars);
```

The model output changes to

    The model:
    IloModel {
    IloMaximize Obj : 1.0*x + 2.0*y + 3.0*z
    IloRange Con1 : -infinity <= 1.0*x + 1.0*y + 1.0*z <= 5.0
    IloRange Con2 : -infinity <= 1.0*y + 2.0*z <= 3.0
    IloRange Con3 : 0.0 <= 1.0*x - 1.0*z <= 0.0
    x
    y
    z
    }

(note that the variables are now listed at the end of the model). An alternative is to iterate over the expressions in the objective and constraint, using an IloLinearNumExprIterator, and pick out the variables. It's less efficient, and it will find every variable multiple times, but if someone else generated the model and did not explicitly (and redundantly) add the variables, it may be the only viable approach. (And, let's face it, if you wrote the model yourself, you probably don't need to iterate over it looking for variables.)

Finally, the middle issue (distinguishing variables from constraints) is also fairly easy to solve. I can use some sort of naming convention to let me tell them apart by names, but there's an even easier method. A constraint will appear as an instance of both IloNumVar and IloRange, so I can use the latter fact to screen out constraints. I'll change the first if statement to

```
if (thing instanceof IloNumVar &&
!(thing instanceof IloRange)) {
```

which skips over the constraints. With those changes, the output finally looks right:

    Objective named Obj
    Constraint named Con1
    Constraint named Con2
    Constraint named Con3
    Variable named x
    Variable named y
    Variable named z

## Saturday, August 7, 2010

### Ill-conditioned Bases and Numerical Instability

I spent a chunk of this afternoon playing with a MIP model sent to me by someone who could not figure out why a solution CPLEX certified as "optimal" had nontrivial constraint violations. That motivated today's blog entry, but first a reminiscence (it's my blog, and I grant myself permission).

Back when I started grad school, men were men and computers were mainframes (and had more or less completed the transition from vacuum tubes to transistors; integrated circuits were still pretty experimental). The first course in my doctoral program was numerical analysis (highly recommended for future OR practitioners), taught by a visiting professor who had not gotten the memo about our "easy grades for doctoral students" policy. (On the bright side, I spent the remaining 27 quarters of my doctoral program free from worry about maintaining a 4.0 GPA.) He told an anecdote about a nameless professor who, in the course of some research, inverted a matrix using some homebrewed FORTRAN code. In the inevitable journal article, he printed the coefficients of the inverse matrix (it apparently had a fairly low dimension) to some ridiculous number of decimal places ... all of which were wrong ... as were the first four digits to the left of the decimal point. The moral of the story was that "stiff" (ill-conditioned) matrices need to be handled carefully.

A bit later in my studies, I took a course in convex analysis that included some linear optimization. I'd also had a course in linear programming as an undergraduate. Neither the undergrad course (in which pivoting was done by hand) nor the graduate course (which emphasized theory) mentioned numerical precision at all. So I'm not surprised that a number of LP/MIP users today are either unaware of or have at best a vague understanding of scaling, conditioning and the perils of stiff matrices.

I probably should offer up some sort of definition of "ill-conditioned" at this point. Wikipedia has a useful definition of the condition number of a (square, nonsingular) matrix. Essentially, it's the product of the norm of the matrix and the norm of its inverse. Large condition numbers invite bad rounding errors when dealing with the inverse, and of course the simplex method explicitly or implicitly inverts the basis matrix at each iteration. What constitutes a "large" condition number is in the eye of the beholder, but I think it's safe to say that if your matrix has a condition number bigger than the US federal deficit (in dollars), you should start paying attention to it.

The related concept "numerical instability" generally refers to a situation where small, seemingly benign rounding errors can lead to strange results. One source of small errors is when you export a model to a text file and then import it again. Integers are retrieved at full precision, but decimals are rounded both during export and during import, so the imported model is not identical to the exported model. The differences are small, and if the problem is numerically stable the effect is probably negligible; but if the problem is unstable, those small differences can have significant consequences.
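The export/import round trip is easy to illustrate in a couple of lines of Python (the six-decimal format here is just an example; actual file formats vary in how many digits they keep):

```python
# Round-tripping a coefficient through a fixed-precision text format
# (six decimals here, purely for illustration) changes its value slightly.
coefficient = 1.0 / 3.0
exported = "%.6f" % coefficient       # what might land in the text file
reimported = float(exported)
print(coefficient == reimported)      # False
print(abs(coefficient - reimported))  # small but nonzero
```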

A word about scaling before I move on to stiffness. If your constraint matrix mixes big honking coefficients and itty-bitty coefficients (I refer here to magnitude, irrespective of sign), there's a pretty good chance that somewhere along the line you'll get some stiff basis matrices. Even if you don't, there's a good chance rounding error will cause problems somewhere. To a mathematician, a zillion minus epsilon is less than a zillion (epsilon being small and positive). To a computer, maybe it's less, maybe it's equal. So it is advisable to avoid scaling issues by converting the units of variables, where necessary, to bring coefficients into a tighter range of magnitudes.
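The "zillion minus epsilon" point is easy to demonstrate. In double precision, adjacent representable numbers near $10^{16}$ are two units apart, so subtracting 1 can literally be a no-op:

```python
# At 1e16 the spacing between adjacent doubles is 2, so subtracting 1
# is lost to rounding, while subtracting 2 is exact.
zillion = 1.0e16
print(zillion - 1.0 == zillion)   # True
print(zillion - 2.0 == zillion)   # False
```

This is exactly why a constraint mixing coefficients on the order of $10^{16}$ with coefficients on the order of 1 is asking for trouble.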

There is, however, a misconception in some quarters that poor scaling is the only source of ill-conditioned basis matrices. Sadly, that's not true; they can crop up for other, harder to predict, reasons. If you have software that computes condition numbers, pick a nice small value for $\epsilon\gt 0$ and compute the condition number of

\[ \left[\begin{array}{cc} 1 & 1\\ 1-\epsilon & 1 \end{array}\right]. \] Kind of depressing, isn't it?
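For a concrete feel, here is a small pure-Python sketch (no linear algebra library assumed) that computes the 1-norm condition number of that matrix explicitly; for this $2\times2$ matrix the exact value works out to $4/\epsilon$, so a perfectly innocuous-looking matrix can have an enormous condition number:

```python
# 1-norm condition number of a 2x2 matrix: kappa = ||A|| * ||A^{-1}||.

def one_norm(m):
    """Maximum absolute column sum of a 2x2 matrix."""
    return max(abs(m[0][0]) + abs(m[1][0]),
               abs(m[0][1]) + abs(m[1][1]))

def inverse_2x2(m):
    """Explicit inverse of a nonsingular 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def condition_number(m):
    return one_norm(m) * one_norm(inverse_2x2(m))

eps = 1e-8
a = [[1.0, 1.0], [1.0 - eps, 1.0]]
print(condition_number(a))  # roughly 4/eps, i.e. about 4e8
```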

So what happens if you encounter ill-conditioned basis matrices? It depends on a number of factors, not least of which is the care with which the authors of the software you are using dealt with the possibility. Some algorithms for inverting, factoring or otherwise screwing with matrices are more robust than others, but oftentimes the more robust algorithms are also slower. So it is quite possible (as is the case with CPLEX) that your LP/IP software is capable of compensating for ill-conditioned matrices, at least to some extent, but may not do so to its full capabilities by default because using the full capabilities would slow the software down, including on the majority of problems where that caution was unnecessary.

Optimization software typically will provide, on request, some measures of solution quality. At minimum, you would hope for the maximum violation of any integrality restriction (in a MIP), the maximum violation of any variable bounds, and the maximum violation of any functional constraint (which may be expressed as the maximum bounds violation of any slack variable, which is equivalent). You can, of course, compute these for yourself by substituting the solution values into the constraints. Rounding error being omnipresent, do not expect zero violations across the board; but if you have violations that exceed what you reasonably might expect from rounding, take a look at the condition numbers (sometimes called "kappa values") of the basis matrices. CPLEX will let you see the condition number of the final basis matrix when you solve an LP via one of the simplex variants. A "final" kappa is not available when you solve a MIP, but recent versions of CPLEX allow you, through a parameter setting, to tell it to record the distribution of basis kappa values for either a sample of the node LPs it solves or for all of them (the latter option obviously being slower than the former). It will then tell you what percentage of basis matrices struck it as "stable", "suspect", "unstable" or "ill-posed".
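As a minimal sketch of the do-it-yourself check (the constraint data and names here are purely illustrative), one can compute the worst violation of a set of $\le$ constraints directly from the solution vector:

```python
# Measure the worst violation of Ax <= b at a candidate solution x.
# The matrix, right-hand side and point below are illustrative only.

def max_violation(a_rows, b, x):
    """Largest amount by which any row of Ax <= b is violated."""
    worst = 0.0
    for row, rhs in zip(a_rows, b):
        lhs = sum(coef * val for coef, val in zip(row, x))
        worst = max(worst, lhs - rhs)
    return worst

a = [[1.0, 1.0, 1.0], [0.0, 1.0, 2.0]]
b = [5.0, 3.0]
x = [2.0, 0.0, 3.0]            # deliberately violates the second row
print(max_violation(a, b, x))  # prints 3.0 (row 2: 6.0 > 3.0)
```

If the number this reports is well above what rounding alone could explain, that is the cue to go look at the basis condition numbers.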

If you are getting suspicious results, there are a number of things you can try, starting with checking the scaling of your coefficients and proceeding to a variety of software settings. I'm familiar with the choices in CPLEX, but my guess is that other optimization software offers a similar palette. First, you can adjust parameter settings that declare the maximum allowable violation of any constraint (EpRHS in CPLEX) or integrality restriction (EpInt). Cranking those tolerances down may help, but it is not guaranteed. (Users sometimes set the tolerances at zero, which can have the unfortunate effect of making a feasible problem look infeasible to CPLEX.) Tighter tolerances may well slow the solution process. Another possibility is to find a setting that tells your solver to try to be more careful about rounding errors. CPLEX has a "Markowitz tolerance" (EpMrk) setting which, if increased, sometimes helps with ill-conditioned bases. The current version (CPLEX 12.2) also has a "numerical precision emphasis" setting (NumericalEmphasis) that tells CPLEX you are concerned about stability issues and want it to be extra careful.

Back when I started grad school, men were men and computers were mainframes (and had more or less completed the transition from vacuum tubes to transistors; integrated circuits were still pretty experimental). The first course in my doctoral program was numerical analysis (highly recommended for future OR practitioners), taught by a visiting professor who had not gotten the memo about our "easy grades for doctoral students" policy. (On the bright side, I spent the remaining 27 quarters of my doctoral program free from worry about maintaining a 4.0 GPA.) He told an anecdote about a nameless professor who, in the course of some research, inverted a matrix using some homebrewed FORTRAN code. In the inevitable journal article, he printed the coefficients of the inverse matrix (it apparently had a fairly low dimension) to some ridiculous number of decimal places ... all of which were wrong ... as were the first four digits to the left of the decimal point. The moral of the story was that "stiff" (ill-conditioned) matrices need to be handled carefully.

A bit later in my studies, I took a course in convex analysis that included some linear optimization. I'd also had a course in linear programming as an undergraduate. Neither the undergrad course (in which pivoting was done by hand) nor the graduate course (which emphasized theory) mentioned numerical precision at all. So I'm not surprised that a number of LP/MIP users today are either unaware of or have at best a vague understanding of scaling, conditioning and the perils of stiff matrices.

I probably should offer up some sort of definition of "ill-conditioned" at this point. Wikipedia has a useful definition of the condition number of a (square, nonsingular) matrix. Essentially, it's the product of the norm of the matrix and the norm of its inverse. Large condition numbers invite bad rounding errors when dealing with the inverse, and of course the simplex method explicitly or implicitly inverts the basis matrix at each iteration. What constitutes a "large" condition number is in the eye of the beholder, but I think it's safe to say that if your matrix has a condition number bigger than the US federal deficit (in dollars), you should start paying attention to it.

The related concept of "numerical instability" generally refers to a situation where small, seemingly benign rounding errors can lead to strange results. One source of small errors is exporting a model to a text file and then importing it again. Integers are retrieved at full precision, but decimals are rounded both during export and during import, so the imported model is not identical to the exported one. The differences are small, and if the problem is numerically stable the effect is probably negligible; but if the problem is unstable, those small differences can have significant consequences.
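To see the round-trip effect concretely, here is a small self-contained Java sketch (not CPLEX-specific; the ten-digit format is just an assumed export precision) of what happens when a decimal is written to text and read back:

```java
import java.util.Locale;

public class RoundTrip {
    public static void main(String[] args) {
        double original = 1.0 / 3.0;
        // Mimic a text export that writes only ten decimal places...
        String exported = String.format(Locale.US, "%.10f", original);
        // ...and an import that reads the rounded value back in.
        double imported = Double.parseDouble(exported);
        System.out.println(original == imported);  // false
        System.out.println(original - imported);   // small but nonzero
    }
}
```

The difference is on the order of 1e-11 here -- harmless in a stable model, but exactly the sort of perturbation an unstable one can amplify.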

A word about scaling before I move on to stiffness. If your constraint matrix mixes big honking coefficients and itty-bitty coefficients (I refer here to magnitude, irrespective of sign), there's a pretty good chance that somewhere along the line you'll get some stiff basis matrices. Even if you don't, there's a good chance rounding error will cause problems somewhere. To a mathematician, a zillion minus epsilon is less than a zillion (epsilon being small and positive). To a computer, maybe it's less, maybe it's equal. So it is advisable to avoid scaling issues by converting the units of variables, where necessary, to bring coefficients into a tighter range of magnitudes.
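The zillion-minus-epsilon point is easy to demonstrate in any language with double-precision arithmetic; the magnitudes below are chosen to straddle the roughly 16 significant digits a double carries:

```java
public class ScaleDemo {
    public static void main(String[] args) {
        // At 1e16, the spacing between consecutive representable
        // doubles exceeds 1, so subtracting 1 changes nothing.
        System.out.println(1.0e16 - 1.0 == 1.0e16);  // prints true
        // One order of magnitude smaller, the difference survives.
        System.out.println(1.0e15 - 1.0 == 1.0e15);  // prints false
    }
}
```

So if your model mixes coefficients on the order of 1e16 with coefficients on the order of 1, differences that matter mathematically can vanish in the arithmetic.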

There is, however, a misconception in some quarters that poor scaling is the only source of ill-conditioned basis matrices. Sadly, that's not true; they can crop up for other, harder to predict, reasons. If you have software that computes condition numbers, pick a nice small value for $\epsilon\gt 0$ and compute the condition number of

\[ \left[\begin{array}{cc} 1 & 1\\ 1-\epsilon & 1 \end{array}\right]. \] Kind of depressing, isn't it?
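If you don't have condition-number software handy, a back-of-the-envelope computation suffices for a 2x2 matrix. The sketch below (plain Java, using the infinity norm and the closed-form 2x2 inverse) evaluates the matrix above with epsilon = 1e-8:

```java
public class Kappa {
    // Infinity norm (maximum absolute row sum) of a 2x2 matrix.
    static double norm(double[][] m) {
        return Math.max(Math.abs(m[0][0]) + Math.abs(m[0][1]),
                        Math.abs(m[1][0]) + Math.abs(m[1][1]));
    }

    public static void main(String[] args) {
        double eps = 1.0e-8;
        double[][] a = {{1.0, 1.0}, {1.0 - eps, 1.0}};
        // Closed-form 2x2 inverse: (1/det) * [[d, -b], [-c, a]].
        double det = a[0][0] * a[1][1] - a[0][1] * a[1][0];  // = eps
        double[][] inv = {{ a[1][1] / det, -a[0][1] / det},
                          {-a[1][0] / det,  a[0][0] / det}};
        System.out.println(norm(a) * norm(inv));  // about 4e8
    }
}
```

A matrix whose entries all have magnitude near 1 -- perfectly scaled -- still has a condition number around 4/epsilon.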

So what happens if you encounter ill-conditioned basis matrices? It depends on a number of factors, not least of which is the care with which the authors of the software you are using dealt with the possibility. Some algorithms for inverting, factoring or otherwise screwing with matrices are more robust than others, but oftentimes the more robust algorithms are also slower. So it is quite possible (as is the case with CPLEX) that your LP/IP software is capable of compensating for ill-conditioned matrices, at least to some extent, but may not do so to its full capabilities by default because using the full capabilities would slow the software down, including on the majority of problems where that caution was unnecessary.

Optimization software typically will provide, on request, some measures of solution quality. At minimum, you would hope for the maximum violation of any integrality restriction (in a MIP), the maximum violation of any variable bound, and the maximum violation of any functional constraint (equivalently, the maximum bound violation of any slack variable). You can, of course, compute these for yourself by substituting the solution values into the constraints. Rounding error being omnipresent, do not expect zero violations across the board; but if you have violations that exceed what you reasonably might expect from rounding, take a look at the condition numbers (sometimes called "kappa values") of the basis matrices. CPLEX will let you see the condition number of the final basis matrix when you solve an LP via one of the simplex variants. A "final" kappa is not available when you solve a MIP, but recent versions of CPLEX let you, through a parameter setting, record the distribution of basis kappa values for either a sample of the node LPs or for all of them (the latter obviously being slower than the former). CPLEX will then report what percentage of the basis matrices struck it as "stable", "suspect", "unstable" or "ill-posed".
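The by-hand check is simple enough to sketch. Assuming you have pulled the constraint matrix, right-hand sides and solution values into plain arrays (the numbers below are made up, with a deliberately tiny infeasibility planted), it is just a loop of dot products:

```java
public class QualityCheck {
    public static void main(String[] args) {
        // Hypothetical data for a system Ax <= b.
        double[][] a = {{1.0, 1.0, 1.0},
                        {0.0, 1.0, 2.0}};
        double[] b = {5.0, 3.0};
        // A "solution" as a solver might report it, feasible only
        // to within about 1e-7.
        double[] x = {2.0, 3.0000001, 0.0};
        double maxViol = 0.0;
        for (int i = 0; i < a.length; i++) {
            double lhs = 0.0;
            for (int j = 0; j < x.length; j++) {
                lhs += a[i][j] * x[j];
            }
            // A positive excess means the constraint is violated.
            maxViol = Math.max(maxViol, lhs - b[i]);
        }
        System.out.println("max constraint violation = " + maxViol);
    }
}
```

If the printed violation comfortably exceeds your feasibility tolerance, it is time to start looking at basis condition numbers.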

If you are getting suspicious results, there are a number of things you can try, starting with checking the scaling of your coefficients and proceeding to a variety of software settings. I'm familiar with the choices in CPLEX, but my guess is that other optimization software offers a similar palette. First, you can adjust parameter settings that declare the maximum allowable violation of any constraint (EpRHS in CPLEX) or integrality restriction (EpInt). Cranking those tolerances down may help, but it is not guaranteed. (Users sometimes set the tolerances at zero, which can have the unfortunate effect of making a feasible problem look infeasible to CPLEX.) Tighter tolerances may well slow the solution process. Another possibility is to find a setting that tells your solver to try to be more careful about rounding errors. CPLEX has a "Markowitz tolerance" (EpMrk) setting which, if increased, sometimes helps with ill-conditioned bases. The current version (CPLEX 12.2) also has a "numerical precision emphasis" setting (NumericalEmphasis) that tells CPLEX you are concerned about stability issues and want it to be extra careful.

## Tuesday, August 3, 2010

### Iterating over a CPLEX Model

Periodically a question arises about how to parse an LP (or QP or MIP) in CPLEX after it has been constructed. CPLEX provides iterators in at least the C++ and Java APIs, and probably in the other APIs as well. In C++ the iterators are classes; in Java they implement the standard Java Iterator interface. I'll give a small Java example here (using CPLEX 12; there may be some differences with earlier versions).

To illustrate the process, I'll create a CPLEX model for the following linear program:\begin{eqnarray*}\textrm{maximize} & x+2y+3z\\\textrm{s.t.} & x+y+z & \le 5\\ & y+2z & \le 3\\ & x-z & =0\\ & x,y,z & \ge 0.\end{eqnarray*} Ignoring, as usual, any need to deal with exceptions, here is the code to create the model:

```java
IloNumVar[] vars = new IloNumVar[3];
vars[0] = cplex.numVar(0, Double.MAX_VALUE, "x");
vars[1] = cplex.numVar(0, Double.MAX_VALUE, "y");
vars[2] = cplex.numVar(0, Double.MAX_VALUE, "z");
cplex.addMaximize(
  cplex.scalProd(new double[] {1., 2., 3.}, vars), "Obj");
// first constraint: x + y + z <= 5
cplex.addLe(cplex.sum(vars), 5., "Con1");
// second constraint: y + 2z <= 3
cplex.addLe(
  cplex.scalProd(new double[] {0., 1., 2.}, vars), 3., "Con2");
// third constraint: x = z
cplex.addEq(vars[0], vars[2], "Con3");
```


There are several choices for how I iterate over the model. I'll focus specifically on constraints by using a rangeIterator. (With a more general iterator, I might need to rely on instanceOf comparisons to interpret what the iterator was giving me.) Here is the rest of the code:

```java
Iterator it = cplex.rangeIterator();
while (it.hasNext()) {
  IloRange r = (IloRange) it.next();
  System.out.println("Constraint: " + r.getName());
  IloLinearNumExprIterator it2 =
    ((IloLinearNumExpr) r.getExpr()).linearIterator();
  while (it2.hasNext()) {
    System.out.println("\tVariable "
                       + it2.nextNumVar().getName()
                       + " has coefficient "
                       + it2.getValue());
  }
  // get range bounds, checking for +/- infinity
  // (allowing for some rounding; note that Double.MIN_VALUE is the
  // smallest *positive* double, so -Double.MAX_VALUE is the sentinel
  // for "no lower bound")
  String lb = (r.getLB() <= -0.5 * Double.MAX_VALUE) ?
    "-infinity" : Double.toString(r.getLB());
  String ub = (r.getUB() >= 0.5 * Double.MAX_VALUE) ?
    "+infinity" : Double.toString(r.getUB());
  System.out.println("\t" + lb + " <= LHS <= " + ub);
}
```


The output is as follows:

```
Constraint: Con1
    Variable x has coefficient 1.0
    Variable y has coefficient 1.0
    Variable z has coefficient 1.0
    -infinity <= LHS <= 5.0
Constraint: Con2
    Variable y has coefficient 1.0
    Variable z has coefficient 2.0
    -infinity <= LHS <= 3.0
Constraint: Con3
    Variable x has coefficient 1.0
    Variable z has coefficient -1.0
    0.0 <= LHS <= 0.0
```

The outer iterator it (an instance of rangeIterator) iterates over all ranges (constraints) in the model, but its next() method returns the generic type Object, which needs to be cast to IloRange. The inner iterator it2 iterates over the expression portion of a constraint, returning a variable (via nextNumVar()) and its coefficient (via getValue()) at each step. The iterator exploits any sparsity in the model: only terms with nonzero coefficients are returned.

Suppose that, for some reason, I need to be able to associate the coefficients with their corresponding column indices in the original constraint matrix. I'll add a bit of code near the top to map each variable to its column index. (Add the following after the variable declarations and immediately before the declaration of the objective function.)

```java
HashMap<IloNumVar, Integer> map =
  new HashMap<IloNumVar, Integer>();
for (int i = 0; i < 3; i++) {
  map.put(vars[i], i);
}
```


I replace the contents of the inner while loop with the following:

```java
IloNumVar v = it2.nextNumVar();
System.out.println("\tVariable " + v.getName()
                   + " (column " + map.get(v) + ")"
                   + " has coefficient "
                   + it2.getValue());
```

The relevant portion of the output now reads as follows:

```
Constraint: Con1
    Variable x (column 0) has coefficient 1.0
    Variable y (column 1) has coefficient 1.0
    Variable z (column 2) has coefficient 1.0
    -infinity <= LHS <= 5.0
Constraint: Con2
    Variable y (column 1) has coefficient 1.0
    Variable z (column 2) has coefficient 2.0
    -infinity <= LHS <= 3.0
Constraint: Con3
    Variable x (column 0) has coefficient 1.0
    Variable z (column 2) has coefficient -1.0
    0.0 <= LHS <= 0.0
```

Note that I have to save the value of nextNumVar() in a temporary variable (v) because I need it in two places, and calling it twice would advance the iterator too often.

## Monday, August 2, 2010

### Dopey Defaults

Most software these days is configurable, and so comes with default settings. Occasionally even the best crafted software has one or more defaults that just leave me shaking my head.

I'm in the process of upgrading my laptop from Linux Mint Helena to Mint Isadora, having previously upgraded my office machine. I held off on the laptop for a while, partly because I was using it heavily and partly because I wanted to make sure the office PC didn't suddenly turn into a doorstop, but I decided that today was the day to upgrade the laptop. So I did (well, technically I'm still doing it) ... and suddenly minimized windows started disappearing. Normally they turn into buttons in the task bar (much like Windows, pardon my language), but post-upgrade they just vanished. Googling around, I discovered I was not the first person to notice this, and one of the answers I found involved deleting all hidden files in my home partition (left over from Helena) and rebooting. I was a bit disinclined to try that, partly because I did the office and laptop upgrades the same way (overwrote the system partition but kept my home partition intact), and the office PC didn't seem to have the same problem. But I was on the verge of doing the dirty deed when I found a posted response that solved the problem easily.

The Mint (and Ubuntu) task bar (more properly known as the panel, or Gnome panel, assuming that you are using the Gnome desktop) is configurable. You can right click in an open area in it, click Add to Panel ..., and select various applets to display there. Two of them are Window List and Window Selector. The former is what displays a button for each window. (The latter displays a button that, when clicked, gives a list of windows, from which you can select the one you want to restore.) The dopey thing (to me) is that neither of these is enabled by default.

Now Mint (and Ubuntu) also have a window selection method similar to "Cool Switch" (alt-tab) in Windows, and in fact the default key combo is alt-tab. So if (a) you realize Mint has this and (b) you know the key combination, you can cycle through those invisible (but still open) windows. In fact, if you go to the CompizConfig Settings Manager in the Control Center, you can find some even slicker ways to cycle through windows. (I'm playing with the ring arrangement right now.) Who knows, if I get sufficiently accustomed to tabbing through windows, maybe I'll drop the Window List from the panel -- although I doubt it, as I like to have a visual reminder of which programs are running. But I just can't get past the idea of a default configuration that makes running programs disappear from sight.

