Monday, February 17, 2020

Reversing Differences

Fellow blogger Håkan Kjellerstrand posted an interesting question on OR Stack Exchange recently. Starting from a list of integers, it is trivial to compute the list of all pairwise absolute differences, but what about going in the other direction? Given the pairwise (absolute) differences, with duplicates removed, can you recover the source list (or a source list)?

We can view the source "list" as a vector $x\in\mathbb{Z}^n$ for some dimension $n$ (equal to the length of the list). With duplicates removed, we can view the differences as a set $D\subset \mathbb{Z}_+$. So the question has to do with recovering $x$ from $D$. Our first observation kills any chance of recovering the original list with certainty:
If $x$ produces difference set $D$, then for any $t\in\mathbb{Z}$ the vector $x+t\cdot (1,\dots,1)'$ produces the same set $D$ of differences.
Translating all components of $x$ by a constant amount has no effect on the differences. So there will be an infinite number of solutions for a given difference set $D$. A reasonable approach (proposed by Håkan in his question) is to look for the shortest possible list, i.e., minimize $n$.

Next, observe that $0\in D$ if and only if two components of $x$ are identical. If $0\notin D$, we can assume that all components of $x$ are distinct. If $0\in D$, we can solve the problem for $D\backslash\{0\}$ and then duplicate any one component of the resulting vector $x$ to get a minimum dimension solution to the original problem.

Combining the assumption that $0\notin D$ and the observation about adding a constant having no effect, we can assume that the minimum element of $x$ is 1. That in turn implies that the maximum element of $x$ is $1+m$ where $m=\max(D)$.

From there, Håkan went on to solve a test problem using constraint programming (CP). Although I'm inclined to suspect that CP will be more efficient in general than an integer programming (IP) approach, I went ahead and solved his test problem via an IP model (coded in Java and solved using CPLEX 12.10). CPLEX's solution pool feature found the same four solutions to Håkan's example that he did, in under 100 ms. How well the IP method scales is an open question, but it certainly works for modest size problems.

The IP model uses binary variables $z_1, \dots, z_{m+1}$ to decide which of the integers $1,\dots,m+1$ are included in the solution $x$. It also uses variables $w_{ij}\in [0,1]$ for all $i,j\in \{1,\dots,m+1\}$ such that $i \lt j$. The intent is that $w_{ij}=1$ if both $i$ and $j$ are included in the solution, and $w_{ij} = 0$ otherwise. We could declare the $w_{ij}$ to be binary, but we do not need to; constraints will force them to be $0$ or $1$.

The full IP model is as follows:

\[ \begin{array}{lrlrc} \min & \sum_{i=1}^{m+1}z_{i} & & & (1)\\ \textrm{s.t.} & w_{i,j} & \le z_{i} & \forall i,j\in\left\{ 1,\dots,m+1\right\} ,i\lt j & (2)\\ & w_{i,j} & \le z_{j} & \forall i,j\in\left\{ 1,\dots,m+1\right\} ,i\lt j & (3)\\ & w_{i,j} & \ge z_{i}+z_{j}-1 & \forall i,j\in\left\{ 1,\dots,m+1\right\} ,i\lt j & (4)\\ & w_{i,j} & =0 & \forall i,j\in\left\{ 1,\dots,m+1\right\} \textrm{ s.t. }(j-i)\notin D & (5)\\ & \sum_{i,j\in\left\{ 1,\dots,m+1\right\} |j-i=d}w_{i,j} & \ge 1 & \forall d\in D & (6)\\ & z_{1} & = 1 & & (7) \end{array} \]

The objective (1) minimizes the number of integers used. Constraints (2) through (4) enforce the rule that $w_{ij}=1$ if and only if both $z_i$ and $z_j$ are $1$ (i.e., if and only if both $i$ and $j$ are included in the solution). Constraint (5) precludes the inclusion of any pair $i < j$ whose difference $j - i$ is not in $D$ (in combination with (4), it forces $z_i + z_j \le 1$ for any such pair), while constraint (6) says that for each difference $d \in D$ we must include at least one pair $i < j$ that produces that difference ($j - i = d$). Finally, since we assumed that our solution starts with minimum value $1$, constraint (7) ensures that $1$ is in the solution. (This constraint is redundant, but appears to help the solver a little, although I can't be sure given the short run times.)

My Java code is available from my repository (bring your own CPLEX).
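If you just want the gist without digging through the repository, here is a stripped-down sketch of how the model above can be assembled with the CPLEX Java API. It is a reconstruction for this post, not the repository code (which also does the solution pool runs mentioned above), so treat the names and details as illustrative.

    import ilog.concert.*;
    import ilog.cplex.*;
    import java.util.Set;

    public class ReverseDifferences {
        // Build and solve the IP for a difference set D (with 0 already removed).
        public static void solve(Set<Integer> D) throws IloException {
            int m = D.stream().mapToInt(Integer::intValue).max().getAsInt();
            int n = m + 1;                             // candidate values are 1..m+1
            IloCplex cplex = new IloCplex();
            IloNumVar[] z = cplex.boolVarArray(n);     // z[i] = 1 <=> value i+1 used
            IloNumVar[][] w = new IloNumVar[n][n];
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    if (D.contains(j - i)) {
                        w[i][j] = cplex.numVar(0, 1);  // continuous is enough
                        cplex.addLe(w[i][j], z[i]);                              // (2)
                        cplex.addLe(w[i][j], z[j]);                              // (3)
                        cplex.addGe(cplex.diff(w[i][j], cplex.sum(z[i], z[j])),
                                    -1.0);                                       // (4)
                    } else {
                        // (4) and (5) together just say this pair can't coexist
                        cplex.addLe(cplex.sum(z[i], z[j]), 1.0);
                    }
                }
            }
            for (int d : D) {                                                    // (6)
                IloLinearNumExpr cover = cplex.linearNumExpr();
                for (int i = 0; i + d < n; i++) cover.addTerm(1.0, w[i][i + d]);
                cplex.addGe(cover, 1.0);
            }
            cplex.addEq(z[0], 1.0);                                              // (7)
            cplex.addMinimize(cplex.sum(z));                                     // (1)
            if (cplex.solve()) {
                StringBuilder sb = new StringBuilder("x =");
                for (int i = 0; i < n; i++)
                    if (cplex.getValue(z[i]) > 0.5) sb.append(' ').append(i + 1);
                System.out.println(sb);
            }
            cplex.end();
        }
    }

Note that for pairs whose difference is not in $D$ the sketch never creates $w_{ij}$ at all; adding $z_i + z_j \le 1$ directly is equivalent to imposing (4) and (5) on a variable fixed at zero, and it keeps the model a bit smaller.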

Tuesday, February 11, 2020

Collections of CPLEX Variables

Recently, someone asked for help online regarding an optimization model they were building using the CPLEX Java API. The underlying problem had some sort of network structure with $N$ nodes, and a dynamic aspect (something going on in each of $T$ periods, relating to arc flows I think). Forget about solving the problem: the program was running out of memory and dying while building the model.

A major issue was that they allocated two $N\times N\times T$ arrays of variables, and $N$ and $T$ were big enough that $2N^2T$ was, to use a technical term, ginormous. Fortunately, the network was fairly sparse, and possibly not every time period was relevant for every arc. So by creating only the IloNumVar instances they needed (meaning only for arcs that actually exist, in time periods that were actually relevant), they were able to get the model to build.

That's the motivation for today's post. We have a tendency to write mathematical models using vectors or matrices of variables. So, for instance, $x_i \, (i=1,\dots,n)$ might be an inventory level at each of $n$ locations, or $y_{i,j} \, (i=1,\dots,m; j=1,\dots,n)$ might be the inventory of item $i$ at location $j$. It's a natural way of expressing things mathematically. Not coincidentally, I think, CPLEX APIs provide structures for storing vectors or matrices of variables and for passing them into or out of functions. That makes it easy to fall into the trap of thinking that variables must be organized into vectors or matrices.

Last year I did a post ("Using Java Collections with CPLEX") about using what Java calls "collections" to manage CPLEX variables. This is not unique to Java. I know that C++ has similar memory structures, and I think they exist in other languages you might use with CPLEX.

The solution to the memory issue I mentioned at the start was to create a Java container class for each combination of an arc that actually exists and a time epoch for which it would have a variable, and then associate instances of that class with CPLEX variables. So if we call the new class AT (my shorthand for "arc-time"), I suggested the model owner use a Map<AT, IloNumVar> to associate each arc-time combination with the variable representing it and a Map<IloNumVar, AT> to hold the reverse association. The particular type of map is mostly a matter of taste. (I generally use HashMap.) During model building, they would create only the AT instances they actually need, then create a variable for each and pair them up in the first map. When getting a solution from CPLEX, they would get a value for each variable and then use the second map to figure out to which arc and time that value applied.

(As a side note, if you use maps and then need the variables in vector form, you can apply the values() method to the first map (or the keySet() method to the second one), and then apply the toArray() method to that collection.)
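Here is a minimal sketch of that bookkeeping. The AT record and every other name in it are hypothetical; the two-map pattern is the point.

    import ilog.concert.*;
    import ilog.cplex.*;
    import java.util.*;

    public class ArcTimeModel {
        // Hypothetical arc-time key. A record supplies the equals()/hashCode()
        // that HashMap keys need; a plain class with both methods also works.
        record AT(int from, int to, int time) {}

        public static void build(IloCplex cplex, List<AT> needed) throws IloException {
            Map<AT, IloNumVar> varOf = new HashMap<>();   // arc-time -> variable
            Map<IloNumVar, AT> keyOf = new HashMap<>();   // variable -> arc-time
            for (AT at : needed) {                        // only combos that exist
                IloNumVar x = cplex.numVar(0.0, Double.MAX_VALUE,
                        "flow_" + at.from() + "_" + at.to() + "_t" + at.time());
                varOf.put(at, x);
                keyOf.put(x, at);
            }
            // ... build constraints and the objective via varOf.get(...) ...

            // If some CPLEX method insists on a plain array, convert:
            IloNumVar[] asArray = varOf.values().toArray(new IloNumVar[0]);
        }
    }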

Now you can certainly get a valid model using just arrays of variables, which was all that was available to me back in the Dark Ages when I used FORTRAN, but I think there are some benefits to using collections. Using arrays requires you to develop an indexing scheme for your variables. The indexing scheme tells you that the flow from node 3 to node 7 at time 4 will occupy slot 17 in the master variable vector. Here are my reasons for avoiding that.
  • Even when done correctly, the indexing scheme is, in my opinion, a pain in the butt to manage. Finding the index for a particular variable while writing the code is time-consuming and has been known to kill brain cells.
  • It is easy to make mistakes while programming (calculate an index incorrectly).
  • Indexing invites the error of declaring an array or vector with one entry for each combination of component indices (that $N\times N\times T$ matrix above), without regard to whether you need all those slots. Doing so wastes time and space, and the space, as we saw, may be precious.
  • Creating slots that you do not need can lead to execution errors. Suppose that I allocate a vector IloNumVar[] x = new IloNumVar[20] and use 18 slots, omitting slots 0 and 13. If I solve the model and then call getValues(x), CPLEX will throw an exception, because I am asking for values of two variables (x[0] and x[13]) that do not exist. Even if I create variables for those two slots, the exception will occur, because those two variables will not belong to the model being solved. (There is a way to force CPLEX to include those variables in the model without using them, but it's one more pain in the butt to deal with.) I've lost count of how many times I've seen messages on the CPLEX help forums about exceptions that boiled down to "unextracted variables". (There's a small illustration after this list.)
So my advice is to embrace collections when building models whose variables lack an obvious, gap-free indexing scheme.
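To make the "unextracted variables" bullet concrete, here is a small toy illustration (my own naming, bounds and objective):

    import ilog.concert.*;
    import ilog.cplex.*;

    public class UnextractedDemo {
        public static void main(String[] args) throws IloException {
            IloCplex cplex = new IloCplex();
            IloNumVar[] x = new IloNumVar[20];
            IloLinearNumExpr total = cplex.linearNumExpr();
            for (int i = 0; i < 20; i++) {
                if (i == 0 || i == 13) continue;  // slots 0 and 13 never filled
                x[i] = cplex.numVar(0, 10, "x" + i);
                total.addTerm(1.0, x[i]);
            }
            cplex.addMaximize(total);
            if (cplex.solve()) {
                // double[] vals = cplex.getValues(x); // throws: x[0], x[13] null
                for (IloNumVar v : x)                  // safe: skip empty slots
                    if (v != null)
                        System.out.println(v.getName() + " = " + cplex.getValue(v));
            }
            cplex.end();
        }
    }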

Thursday, January 30, 2020

Generic Callback Changes in CPLEX 12.10

CPLEX 12.10 is out, and there have been a few changes to the new(ish) generic callbacks. Rather than go into them in detail (and likely screw something up), I'll just point you to the slides for a presentation by Daniel Junglas of IBM at the 2019 INFORMS Annual Meeting.

I've written about half a dozen posts about generic callbacks since IBM introduced them (which you can find by typing "generic callback" in the search widget on the blog). A couple of things have been added recently, and I thought I would mention them. The generic callback approach uses a single callback function that can be called from a variety of contexts, including when CPLEX solves a node relaxation ("RELAXATION" context), when it finds a candidate solution ("CANDIDATE" context) and, now, when it is ready to split a node into children ("BRANCHING" context).

The branching context is one of the new features. It brings back most of the functionality of the branch callback in the legacy callback system. Unfortunately, it does not seem to have the ability to attach user information to the child nodes, which was a feature that was occasionally useful in the legacy system. You can get more or less equivalent functionality by creating a data store (array, map, whatever) in your global memory and storing the node information keyed by the unique index number of each child node. The catch is that you are now responsible for memory management (freeing up space when a node is pruned and the associated information is no longer needed), and for dealing with thread synchronization issues.
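Here is a rough sketch of that do-it-yourself bookkeeping, using a ConcurrentHashMap to cover the simple synchronization cases. I have not verified every name against the 12.10 reference manual, so double-check before relying on it.

    import ilog.concert.IloException;
    import ilog.cplex.IloCplex;
    import java.util.concurrent.ConcurrentHashMap;

    public class BranchBookkeeping implements IloCplex.Callback.Function {
        // Hypothetical payload: whatever you would have attached to the node.
        private static class NodeData { }

        // ConcurrentHashMap handles synchronization for simple get/put/remove
        // usage; anything fancier needs explicit locking.
        private final ConcurrentHashMap<Long, NodeData> nodeInfo = new ConcurrentHashMap<>();

        @Override
        public void invoke(IloCplex.Callback.Context context) throws IloException {
            long uid = context.getLongInfo(IloCplex.Callback.Context.Info.NodeUID);
            NodeData data = nodeInfo.computeIfAbsent(uid, k -> new NodeData());
            // ... use or update data here, and remember to remove entries when
            // a node is pruned: CPLEX will not clean this map up for you ...
        }
    }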

Another new feature is that you can now inject a heuristic solution (if you have one) from all three of the contexts I mentioned above. CPLEX gives you a variety of options for how it will handle the injected solution: "NoCheck" (CPLEX will trust you that it is feasible); "CheckFeasible" (CPLEX will check feasibility and ignore the solution if it is not feasible); "Propagate" (Daniel's explanation: CPLEX will "propagate fixed variables and accept if feasible"); and "Solve" (CPLEX will solve a MIP problem with fixed variables and accept the result if feasible). I assume the latter two mean that you provide a partial solution, fixing some variables but not others. (Unfortunately I was unable to make it to Daniel's talk, so I'm speculating here.)
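Based on my reading of the slides (again, I missed the talk), posting a solution from the relaxation context might look roughly like the following. Treat the exact postHeuristicSolution signature and the rounding "heuristic" as assumptions on my part rather than verified code.

    import ilog.concert.*;
    import ilog.cplex.IloCplex;

    public class Injector implements IloCplex.Callback.Function {
        private final IloNumVar[] vars;
        public Injector(IloNumVar[] vars) { this.vars = vars; }

        @Override
        public void invoke(IloCplex.Callback.Context context) throws IloException {
            if (!context.inRelaxation()) return;
            double[] relax = context.getRelaxationPoint(vars); // node LP values
            double[] vals = new double[relax.length];
            for (int i = 0; i < vals.length; i++)              // naive rounding
                vals[i] = Math.round(relax[i]);
            double obj = 0.0; // should be the objective value of vals
            context.postHeuristicSolution(vars, vals, obj,
                    IloCplex.Callback.Context.SolutionStrategy.CheckFeasible);
        }
    }

Since the rounded point may well be infeasible, "CheckFeasible" (or one of the repair strategies) is the sensible choice here; "NoCheck" is only for solutions you have verified yourself.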

I'm not sure if those are the only new features, but they are the ones that are most relevant to me. I invite you to read through Daniel's slides to get a more complete picture, including both the reasons for switching from legacy callbacks to generic callbacks and some of the technical issues in using them.

Tuesday, January 7, 2020

Greedy Methods Can Be Exact

We generally sort optimization algorithms (as opposed to models) into two or three categories, based on how certain we are that solutions will be either optimal or at least "good". An answer by Michael Feldmeier to a question I posted on OR Stack Exchange neatly summarizes the categories:
  • exact methods eventually cough up provably optimal solutions;
  • approximate methods eventually cough up solutions with some (meaningful) guarantee regarding how far from optimal they might be; and
  • heuristics provide no worst-case guarantees (but generally are either easy to implement, fast to execute or both).
I should explain my use of "meaningful" (which is not part of Michael's answer). A common way to estimate the "gap" between a solution and the optimum is to take $|z - \tilde{z}|/|z|$, where $z$ is the objective value of the solution produced by the algorithm and $\tilde{z}$ is some bound (lower bound in a minimization, upper bound in a maximization) on the optimal value. Now suppose that we are minimizing a function known to be nonnegative. If we set $\tilde{z}=0$, we know that any method, no matter how stupid, will have a gap no worse than 100%. To me, that is not a meaningful guarantee. So I'll leave the definition of "meaningful" to the reader.

What brings all this to mind is a question posted on Mathematics Stack Exchange. The author of the question was trying to solve a nonlinear integer program. He approached it by applying a "greedy algorithm". Greedy algorithms are generally assumed to be heuristics, since it seldom is possible to provide useful guarantees on performance. In his case, though, the greedy algorithm is provably optimal, mainly due to the objective function being concave and separable. I'll state the problem and show a proof of optimality below (changing the original notation a bit). Brace yourself: the proof is a bit long-winded.

You start with $N$ workers to be assigned to $M$ work stations. The output of workstation $m$, as a function of the number of workers $x$ assigned to it, is given by $$f_{m}(x)=a_{m}x+b_{m}-\frac{c_{m}}{x},$$
where $a_{m},b_{m},c_{m}$ are all positive constants. Since $f_{m}(0)=-\infty$, we can assume that each work station gets at least one worker (and, consequently, that $N>M$). Since $f_{m}'(x)=a_{m}+c_{m}/x^{2}>0$, each $f_{m}()$ is monotonically increasing. Thus, we can safely assume that all $N$ workers will be assigned somewhere. $f_{m}''(x)=-2c_{m}/x^{3}<0$, so $f_{m}()$ is strictly concave (which we will need later). We also note, for future reference, that the impact of adding one worker to a current staff of $x$ at station $m$ is $$\Delta f_{m}(x)=a_{m}+\frac{c_{m}}{x(x+1)}>0.$$
Similarly, the impact of removing one worker at station $m$ is $$\delta f_{m}(x)=-a_{m}-\frac{c_{m}}{x(x-1)}<0.$$We see that $\delta f_{m}(x)$ is an increasing function of $x$ (i.e., it gets less negative as $x$ gets bigger). We also note that $\Delta f_{m}(x)=-\delta f_{m}(x+1)$.
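To see where the first of those expressions comes from, just expand the difference of consecutive values (the formula for $\delta f_{m}$ follows the same way): $$\Delta f_{m}(x)=f_{m}(x+1)-f_{m}(x)=a_{m}+c_{m}\left(\frac{1}{x}-\frac{1}{x+1}\right)=a_{m}+\frac{c_{m}}{x(x+1)}.$$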

The IP model is easy to state. Let $x_{m}$ be the number of workers assigned to work station $m$. The model is
$$\max\sum_{m=1}^{M}f_{m}(x_{m})$$
subject to
$$\sum_{m=1}^{M}x_{m}\le N$$
with
$$x\in\mathbb{Z}_{+}^{M}.$$

The greedy algorithm starts with a single worker at each station ($x=(1,\dots,1)$) and, at each step, adds one worker to the workstation where that worker produces the greatest increase in objective value (breaking ties arbitrarily). It stops when all $N$ workers are assigned. To prove that it actually finds an optimal solution, I'll use proof by contradiction.
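For the curious, here is a quick Java sketch of the greedy algorithm (my code, not the question author's), using a max-heap so that each step grabs the station with the largest marginal gain:

    import java.util.Arrays;
    import java.util.PriorityQueue;

    public class GreedyStaffing {
        // Marginal gain from adding one worker to a station currently holding x.
        private static double gain(double a, double c, int x) {
            return a + c / ((double) x * (x + 1));
        }

        // Assign nWorkers (> a.length) workers greedily. Note that b_m never
        // appears: it cancels out of every marginal gain.
        public static int[] assign(double[] a, double[] c, int nWorkers) {
            int m = a.length;
            int[] x = new int[m];
            Arrays.fill(x, 1);                        // one worker per station
            PriorityQueue<double[]> heap =            // entries {gain, station}
                    new PriorityQueue<>((p, q) -> Double.compare(q[0], p[0]));
            for (int i = 0; i < m; i++) heap.add(new double[]{gain(a[i], c[i], 1), i});
            for (int k = m; k < nWorkers; k++) {      // place remaining workers
                int s = (int) heap.poll()[1];
                x[s]++;
                heap.add(new double[]{gain(a[s], c[s], x[s]), s});
            }
            return x;
        }
    }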

Let $x^{(0)},x^{(1)},\dots,x^{(N-M)}$ be the sequence of solutions constructed by the greedy algorithm, with $x^{(0)}=(1,\dots,1)$, and let $x^{(k)}$ be the last solution in the sequence for which an optimal solution $x^{*}$ exists such that $x^{(k)}\le x^{*}$. The significance of the inequality is that if $x\le x^{*}$, it is possible to extend the partial solution $x$ to the optimal solution $x^{*}$ by adding unassigned workers to work stations. We know that $k$ is well defined because $x^{(0)}\le x^{*}$ for any optimal $x^{*}$. Since we are assuming that the greedy algorithm does not find an optimum, it must be that $k<N-M$ (were $k=N-M$, the fact that both $x^{(N-M)}$ and $x^{*}$ assign all $N$ workers would turn $x^{(N-M)}\le x^{*}$ into $x^{(N-M)}=x^{*}$, making the greedy solution optimal).

Now identify the work station $j$ to which the greedy algorithm added a worker at step $k$, meaning that $x_{j}^{(k+1)}=x_{j}^{(k)}+1$ and $x_{i}^{(k+1)}=x_{i}^{(k)}$ for $i\neq j$. Since, by assumption, $x^{(k)}\le x^{*}$ but $x^{(k+1)}\not\le x^{*}$, it must be that $x_{j}^{(k)}=x_{j}^{*}$.

Next, since $x^{(k)}\le x^{*}$ and $x^{(k)}\neq x^{*}$ (else $x^{(k)}$ would be optimal), there is some work station $h\neq j$ such that $x_{h}^{(k)}<x_{h}^{*}$. Let $\tilde{x}$ be the solution obtained from $x^{(k)}$ by adding a worker to station $h$: $\tilde{x}_{h}=x_{h}^{(k)}+1$ and $\tilde{x}_{i}=x_{i}^{(k)}$ for $i\neq h$. Observe that $\tilde{x}\le x^{*}$. The greedy algorithm chose work station $j$ over work station $h$ at $x^{(k)}$, so it must be that
$$\Delta f_{j}(x_{j}^{(k)})\ge\Delta f_{h}(x_{h}^{(k)}). \quad (1)$$

Finally, let $\hat{x}$ be the result of starting from optimal solution $x^{*}$ and shifting one worker from station $h$ to station $j$. Since
$$x_{j}^{(k+1)}=x_{j}^{(k)}+1=x_{j}^{*}+1=\hat{x}_{j},$$ $$x_{h}^{(k+1)}=x_{h}^{(k)}<x_{h}^{*}\implies x_{h}^{(k+1)}\le\hat{x}_{h}$$
and
$$x_{i}^{(k+1)}=x_{i}^{(k)}\le x_{i}^{*}=\hat{x}_{i}\,\forall i\notin\{h,j\},$$
we have $x^{(k+1)}\le\hat{x}$. Under the assumption that $x^{(k)}$ was the last solution in the greedy sequence that could be extended to an optimal solution, it must be that $\hat{x}$ is not optimal. Thus the net change to the objective function at $x^{*}$ when shifting one worker from station $h$ to station $j$ must be negative, i.e.,
$$\Delta f_{j}(x_{j}^{*})+\delta f_{h}(x_{h}^{*})<0.\quad (2)$$

We showed previously that, under our assumptions, $x_{j}^{(k)}=x_{j}^{*}$, from which it follows that
$$\Delta f_{j}(x_{j}^{*})=\Delta f_{j}(x_{j}^{(k)}). \quad (3)$$
We also showed that $\delta f_{h}()$ is an increasing function. Since $\tilde{x}_{h}\le x_{h}^{*}$,
$$\delta f_{h}(x_{h}^{*})\ge\delta f_{h}(\tilde{x}_{h})=-\Delta f_{h}(x_{h}^{(k)}). \quad (4)$$
Combining (4) with (2), we have
$$\Delta f_{j}(x_{j}^{*})-\Delta f_{h}(x_{h}^{(k)})<0,$$
i.e.,
$$\Delta f_{j}(x_{j}^{*})<\Delta f_{h}(x_{h}^{(k)}). \quad (5)$$

Combining (3) with (5) yields
$$\Delta f_{j}(x_{j}^{(k)})<\Delta f_{h}(x_{h}^{(k)})$$
which contradicts (1). This contradiction means the assumption that the greedy algorithm falls short of an optimum is untenable, so the final greedy solution $x^{(N-M)}$ is in fact optimal.