Monday, July 20, 2020

Longest Increasing Subsequence

In a recent blog post (whose title I have shamelessly appropriated), Erwin Kalvelagen discusses a mixed-integer nonlinear programming formulation (along with possible linearizations) for a simple problem from a coding challenge: "Given an unsorted array of integers, find the length of longest increasing subsequence." The challenge stipulates at worst $O(n^2)$ complexity, where $n$ is the length of the original sequence. Erwin suggests the intent of the original question was to use dynamic programming, which makes sense and meets the complexity requirement.

I've been meaning for a while to start fiddling around with binary decision diagrams (BDDs), and this seemed like a good test problem. Decision diagrams originated in computer science, where the application was evaluation of possibly complicated logical expressions, but recently they have made their way into the discrete optimization arena. If you are looking to familiarize yourself with decision diagrams, I can recommend a book by Bergman et al. [1].

Solving this problem with a binary decision diagram is equivalent to solving it with dynamic programming. Let $[x_1, \dots, x_n]$ be the original sequence. Consistent with Erwin, I'll assume that the $x_i$ are nonnegative and that the subsequence extracted must be strictly increasing.

We create a layered digraph in which each node represents the value of the largest (and hence most recent) element in a partial subsequence, and has at most two children. Within a layer, no two nodes have the same state, but nodes in different layers can have the same state. We have $n+2$ layers, where in layer $j\in\lbrace 1,\dots,n \rbrace$ you are deciding whether or not to include $x_j$ in your subsequence. One child, if it exists, represents the state after adding $x_j$ to the subsequence. This child exists only if $x_j$ is greater than the state of the parent node (because the subsequence must be strictly increasing). The other child, which always exists, represents the state when $x_j$ is omitted (which will be the same as the state of the parent node). Layer 1 contains a root node (with state 0), layer $n+1$ contains nodes corresponding to completed subsequences, and layer $n+2$ contains a terminal node (whose state will be the largest element of the chosen subsequence). Actually, you could skip layer $n+1$ and follow layer $n$ with the terminal layer; in my code, I included the extra layer mainly for demonstration purposes (and debugging).

In the previous paragraph, I dealt a card off the bottom of the deck. The state of a node in layer $j$ is the largest element of a partial subsequence based on including or excluding $x_1,\dots,x_{j-1}$. The sneaky part is that more than one subsequence may be represented at that node (since more than one subsequence of $[x_1,\dots,x_{j-1}]$ may contain the same largest element). In addition to the state of a node, we also keep track at each node of the longest path from the root node to that node and the predecessor node along the longest path, where length is defined as the number of yes decisions from the root to that node. So although multiple subsequences may lead to the same node, we only care about one (the longest path, breaking ties arbitrarily). Note that by keeping track of the longest path from root to each node as we build the diagram, we actually solve the underlying problem during the construction of the BDD.
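For concreteness, here is a minimal Java sketch of the layer-by-layer construction. It is my condensed rendition, not the code from my repository, and it tracks only path lengths; recovering the subsequence itself requires also storing predecessors, as described above.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class LisBdd {
    // Returns the length of the longest strictly increasing subsequence.
    public static int solve(int[] x) {
        // Each layer maps a node's state (largest element used so far) to
        // the longest-path label (number of "yes" decisions) at that node.
        Map<Integer, Integer> layer = new HashMap<>();
        layer.put(-1, 0); // root (using -1 lets a 0 entry start a subsequence)
        for (int xj : x) {
            // "no" children carry every state forward unchanged ...
            Map<Integer, Integer> next = new HashMap<>(layer);
            // ... and a "yes" child exists when xj exceeds the parent state;
            // merging on state keeps layers small and labels longest
            for (Map.Entry<Integer, Integer> node : layer.entrySet()) {
                if (xj > node.getKey()) {
                    next.merge(xj, node.getValue() + 1, Math::max);
                }
            }
            layer = next;
        }
        return Collections.max(layer.values()); // longest path to the terminus
    }

    public static void main(String[] args) {
        System.out.println(solve(new int[] {9, 2, 5, 3})); // prints 2
    }
}
```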

The diagram for the original example ($n=8$) is too big to fit here, so I'll illustrate this using a smaller initial vector: $x=[9, 2, 5, 3]$. The BDD is shown below (as a PDF file, so that you can zoom in or out while maintaining legibility).

The first four layers correspond to decisions on whether to use a sequence entry or not. (The corresponding entries are shown in the right margin.) Nodes "r" and "t" are root and terminus, respectively. The remaining nodes are numbered from 1 to 14. Solid arrows represent decisions to use a value, so for instance the solid arrow from node 4 to node 8 means that 5 ($x_3$) has been added to the subsequence. Dashed arrows represent decisions not to use a value, so the dashed arrow from node 4 to node 7 means that 5 ($x_3$) is not being added to the subsequence. Dotted arrows (from the fifth layer to the sixth) do not represent decisions, they just connect the "leaf" nodes to the terminus.

The green(ish) number to the lower left of a node is the state of the node, which is the largest element included so far in the subsequence. The subsequence at node 4 is just $[2]$ and the state is 2. At node 7, since we skipped the next element, the subsequence and state remain the same. At node 8, the subsequence is now $[2, 5]$ and the state changes to 5.

The red numbers $d_i:p_i$ to the lower right of a node $i$ are the distance (number of solid arcs) from the root to node $i$ along the longest path ($d_i$) and the predecessor of node $i$ on the longest path ($p_i$). Two paths converge at $i=13$: a path $r \dashrightarrow 2 \rightarrow 4 \dashrightarrow 7 \rightarrow 13$ of length 2 and a path $r \dashrightarrow 2 \dashrightarrow 5 \dashrightarrow 9 \rightarrow 13$ of length 1. So the longest path to node 13 has length 2 and predecessor node 7. Backtracking from the terminus (distance 2, predecessor either 12 or 13), we get optimal paths $r \dashrightarrow 2 \rightarrow 4 \rightarrow 8 \dashrightarrow 12 \dashrightarrow t$ (subsequence $[2, 5]$) and $r \dashrightarrow 2 \rightarrow 4 \dashrightarrow 7 \rightarrow 13 \dashrightarrow t$ (subsequence $[2, 3]$), the latter shown in blue.

In addition to the original example from the coding challenge ($n=8$), Erwin included an example with $n=100$ and longest increasing subsequence length 15. (There are multiple optimal solutions to both the original example and the larger one.) Gurobi solved the larger example to proven optimality in one second (probably less, since the output likely rounded up the time). My highly non-optimized Java code solved the $n=100$ example in 6 ms. on my PC (not including the time to print the results).

BDDs can get large in practice, with layers growing combinatorially. In this case, however, that is not a problem. Since the state of a node is the largest value of a subsequence, there can be at most $n$ different states. Given the stipulation that no two nodes in a layer have the same state, that means at most $n$ states in a layer. For Erwin's example with $n=100$, the largest layer in fact contained 66 nodes.

As I said earlier, using the BDD here is equivalent to using dynamic programming. With $n+2$ layers, at most $n$ nodes in a layer, and two operations on each node (figuring out the state and path length of the "yes" child and the "no" child), the solution process is clearly $O(n^2)$.

[1] D. Bergman, A. A. Cire, W.-J. van Hoeve and J. Hooker. Decision Diagrams for Optimization (B. O’Sullivan and M. Wooldridge, eds.).  Springer International Publishing AG, 2016.

Sunday, July 12, 2020

Mint 20 Upgrade Hiccup

Okay, "hiccup" might be an understatement. Something beginning with "cluster" might be more appropriate.

I tried to upgrade my MythTV backend box from Linux Mint 19.3 to Mint 20, using the Mint upgrade tool. Even on a fairly fast machine with a fast Internet connection and not much installed on it (MythTV plus the applications that come with Mint), this takes hours. A seemingly endless series of commands scrolls by in a terminal, and I don't dare walk away for too long, lest the process stall waiting for some input from me (it periodically needs my password) or die due to a glitch.

Speaking of glitches, I noticed that the scrolling stopped and the process seemed to freeze just after a couple of lines about installing symlinks for MySQL and MariaDB, two database programs. MariaDB, which I've never had installed before, is apparently a fork of MySQL. MythTV uses MySQL as its database manager. Before upgrading, I had killed the MythTV back end, but I noticed that the MySQL server was still running. On a hunch, I opened a separate terminal and shut down the MySQL server. Sure enough, the upgrade process resumed, with a message about a cancelled job or something, which I think referred to MariaDB. Whether this contributed to the unfolding disaster I do not know.

After a reboot, the good news was that everything that should start did start, and the frontend was able to see and play the recorded TV shows. The bad news was that (a) the backend got very busy doing a lot of (alleged) transcoding and scanning for commercials that should not have been necessary (having already been done on all recorded shows) and (b) I could not shut down, because the backend thought it was in a "shutdown/wakeup period", meaning (I think) that it thought it needed to start recording soon -- even though the next scheduled recording was not for a couple of days, and the front end was showing the correct date and time for the next recording. So I think the switch from MySQL to MariaDB somehow screwed up something in the database.

From there, things got worse. I had backed up the database, so I tried to restore the backup (using a MythTV script for just that purpose). The script failed because the database already contained data. Following suggestions online, I dropped the relevant table from the database and tried to run an SQL script (mc.sql) to restore a blank version of the table. No joy -- I needed the root MySQL password, and no password I tried would work. There is allegedly a way to reset the root password in a MySQL database, but that didn't work either, and in fact trying to shut the server down using "sudo service mysql stop" did not work (!). The only way to get rid of the service was to use "sudo pkill mysqld".

Fortunately, timeshift was able to restore the system to its pre-upgrade state (with a little help from a backup of the MythTV database and recordings folder). For reasons I do not understand (which describes pretty much everything discussed here), restoring the database backup did not cause MythTV to remember this week's schedule of recordings, but as soon as I reentered one (using MythWeb) it remembered the rest. And I thought my memory was acting a bit quirky ...

Monday, June 8, 2020

Zoom on Linux

Thanks to the pandemic, I've been spending a lot of time on Zoom lately, and I'm grateful to have it. The Zoom Linux client seems to be almost as good as the other clients. The only thing that I know is missing is virtual backgrounds, which I do not particularly miss.

That said, I did run into one minor bug (I think). It has to do with what I think is called the "panel". (I've found it strangely hard to confirm this, despite a good bit of searching.) What I'm referring to is a widget that sits off to one side (and can be moved by me) when Zoom is running full screen and a presenter is holding the "spotlight" (owning the bulk of the window). The panel has four buttons at the top that let me choose its configuration. Three of them will show just my video (small, medium or large). The fourth one will show a stack of four videos, each a participant (excluding the presenter), with mine first and the other three selected by some rule I cannot fathom. (Empirical evidence suggests it is not selecting the three best looking participants.) Showing my camera image isn't exactly critical, but it's somewhat reassuring (meaning I know my camera is still working, and I'm in its field of view).

I'm running Zoom on both a desktop and a laptop, the latter exclusively for online taekwondo classes. On my desktop, the panel behaves as one would expect. On my laptop, however, the panel window assigned to my camera was intermittently blanking out. Randomly moving the cursor around would bring the image back (temporarily). This happened regardless of what panel configuration or size I chose.

On a hunch, I disabled the screen lock option on the laptop (which would normally blank the screen or show a lock screen if the laptop sat idle for too long). To be clear, even with no keyboard/mouse input from me, the laptop was not showing the lock screen or sleeping -- the main presenter was never interrupted. It was just my camera feed that seemed to be napping. That said, disabling the lock screen seems to have helped somewhat. If the panel is showing only my camera, it still blanks after some amount of "idle" time; but if the panel is set to show a stack of four cameras (including mine), mine does not seem to blank out any more.

It's still a mystery to me why mine blanks when it's the only one in the panel, although it's clear there's a connection to my not providing any keyboard or mouse input for a while. The blanking never happens on my desktop. They're both running Linux Mint (the laptop having a somewhat newer version), and they're both running the latest version of the Zoom client. The laptop has a built-in camera whereas the desktop has a USB webcam. The desktop, unsurprisingly, has a faster processor, and probably better graphics. My typical desktop Zoom usage does not involve extended periods of inactivity on my part (if I'm not doing something useful as part of the call, I'm surreptitiously checking email or playing Minesweeper), so the lack of blanking on the laptop may just be lack of opportunity. It might be a matter of the desktop having better hardware. It might just be some minor computer deity figuring it's more entertaining to annoy me during a workout than during a meeting. Anyway, turning off the screensaver got rid of at least part of the problem. If anyone knows the real reason and/or the right fix, please leave a comment.

Monday, June 1, 2020

An Idea for an Agent-Based Simulation

I don't do agent-based simulations (or any other kind of simulations these days), so this is a suggested research topic for someone who does.

A number of supermarkets and other large stores have instituted one-way lanes, presumably thinking this will improve physical distancing of customers. I just returned from my local Kroger supermarket, where the narrower aisles have been marked one-way, alternating directions, for a few weeks now. The wider aisles remain bidirectional (or multidirectional, the way some people roll). Despite having been fairly clearly marked for weeks, I would say that close to half of all shoppers (possibly more than half) are either unaware of the direction limits or disregard them. Kroger offers a service where you order online, their employees grab and pack the food (using rather large, multilevel rolling carts), and then bring it out to your waiting car. Kroger refers to this as "Pickup" (formerly "Clicklist"). Interestingly, somewhere between 70% and 90% of the employees doing "Pickup" shopping that I encountered today were going the wrong direction on the directional aisles.

My perhaps naive thought is that unidirectional aisles are somewhere between useless and counterproductive, even if people obey the rules. That's based on two observations:
  1. the number of people per hour needing stuff from aisle 13 is unaffected by any directional restrictions on the aisle; and
  2. obeying the rules means running up extra miles on the cart, as the shopper zips down aisle 12 (which contains nothing he wants) in order to get to the other end, so that he can cruise aisle 13 in the designated direction.
Of course, OR types could mitigate item 2 by solving TSPs on the (partially directional) supermarket network, charitably (and in my case incorrectly) assuming that they knew which aisle held each item on their shopping list (and, for that matter, charitably assuming that they had a shopping list). I doubt any of us do have supermarket TSPs lying around, and that's beyond the skill set of most other people. So we can assume that shoppers arrive with a list, (mostly) pick up all items from the same aisle in one pass through it, and generally visit aisles in a vaguely ordered way (with occasional doubling back).

If I'm right, item 1 means that time spent stationary near other shoppers is not influenced by the one-way rules, and item 2 means that time spent passing shoppers increases (because shoppers have to log extra wasted miles just getting to the correct ends of aisles). So if any of you simulators out there would care to prove my point (er, investigate this), knock yourselves out, and please let me know what you find.

Addendum: I heard an interview with Dr. Samuel Stanley, the current president of Michigan State University, regarding plans for reopening in Fall 2020. During the interview, he mentioned something about creating one-way pedestrian flows on campus. (Good luck with that -- herding undergrads makes herding cats look trivial.) The logic he expressed was that it would reduce face-to-face encounters among pedestrians. Dr. Stanley's academic field is infectious diseases, so presumably he knows whereof he speaks. On the other hand, my impression from various articles and interviews is that droplets emitted by COVID-infected people can linger in the air for a while. So there is a trade-off with one-way routing: an infected person passes fewer people face-to-face, but presumably spreads the virus over a greater area due to longer routes. Has anyone actually studied the trade-off?

Sunday, May 31, 2020

A Simple Constrained Optimization

A question posted to OR Stack Exchange, "Linear optimization problem with user-defined cost function", caught my eye. The question has gone through multiple edits, and the title is a bit misleading, in that the objective function is in at least some cases nonlinear. The constraints are both linear and very simple. The user is looking for weights to assign to $n$ vectors, and the weights $x_i$ satisfy $$\sum_{i=1}^n x_i = 1\\x \ge 0.$$ Emma, the original poster, put a working example (in Python) on GitHub. The simplified version of her cost function includes division of one linear expression by another, with an adjustment to deal with division by zero errors (converting the resulting NaN to 0).

The feasible region of the problem is a simplex, which triggered a memory of the Nelder-Mead algorithm (which was known as the "Nelder-Mead simplex algorithm" when I learned it, despite the potential for confusion with Dantzig's simplex algorithm for linear programs). The Nelder-Mead algorithm, published in 1965, attempts to optimize a nonlinear function (with no guarantee of convergence to the optimum in general), using only function evaluations (no derivatives). It is based on an earlier algorithm (by Spendley, Hext and Himsworth, in 1962), and I'm pretty sure there have been tweaks to Nelder-Mead over the subsequent years.

The Nelder-Mead algorithm is designed for unconstrained problems. That said, my somewhat fuzzy recollection was that Nelder-Mead starts with a simplex (hopefully containing an optimal solution) and progressively shrinks the uncertainty region, each time getting a simplex that is a subset of the previous simplex. So if we start with the unit simplex $\lbrace (1,0,0,\dots,0,0), (0,1,0,\dots,0,0),\dots,(0,0,0,\dots,0,1)\rbrace$, which is the full feasible region, every subsequent simplex should consist of feasible points. It turns out I was not quite right. Depending on the parameter values you use, there is one step (expansion) that can leave the current simplex and thus possibly violate the sign restrictions. That's easily fixed, though, by checking the step size and shrinking it if necessary, as in the sketch below.
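The fix might look like the following sketch (in Java here, although my experiments below were in R; the names and the coefficient convention are mine): shrink the expansion coefficient just enough to keep the expanded point nonnegative.

```java
public class ExpansionGuard {
    // xo = centroid, xr = reflected point, gamma > 1 = expansion coefficient.
    // Returns xo + t (xr - xo), with t <= gamma chosen so the result is >= 0.
    static double[] clampedExpansion(double[] xo, double[] xr, double gamma) {
        double t = gamma;
        for (int i = 0; i < xo.length; i++) {
            double d = xr[i] - xo[i];
            if (d < 0) t = Math.min(t, -xo[i] / d); // keep xo[i] + t*d >= 0
        }
        double[] xe = new double[xo.length];
        for (int i = 0; i < xo.length; i++) {
            xe[i] = xo[i] + t * (xr[i] - xo[i]);
        }
        return xe;
    }

    public static void main(String[] args) {
        double[] xe = clampedExpansion(new double[] {0.5, 0.3, 0.2},
                                       new double[] {0.9, -0.1, 0.2}, 2.0);
        System.out.println(java.util.Arrays.toString(xe)); // approx [0.8, 0.0, 0.2]
    }
}
```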

There are several R packages containing a Nelder-Mead function, but most of them look like they are designed for univariate optimization, and the one I could find that was multivariate and allowed specification of the initial simplex would not work for me. So I coded my own, based on the Wikipedia page, which was easy enough. I used what that page describes as typical values for the four step size parameters. It hit my convergence limit (too small a change in the simplex) after 29 iterations, producing a solution that appears to be not quite optimal but close.

Just for comparison purposes, I thought I would try a genetic algorithm (GA). GAs are generally not designed for constrained problems, although there are exceptions. (Search "random key genetic algorithm" to find one.) That's easy to finesse in our case. Getting a GA to produce only nonnegative values is easy: you just have to require the code that generates new solutions (used to seed the initial population, and possibly for immigration) and the code that mutates existing solutions to use only nonnegative numbers. That might actually be the default in a lot of GA libraries. "Crossover" (their term for solutions having children) takes care of itself. So we just need to enforce the lone equation constraint, which we can do by redefining the objective function. We allow the GA to produce solutions without regard to the sum of their components, and instead optimize the function $$\hat{f}(x)=f\left(\frac{x}{\sum_{i=1}^n x_i}\right)$$where $f()$ is the original objective function.
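In code, the wrapper around $f$ is a one-liner plus some bookkeeping. Here is an illustrative Java sketch (my actual experiments, below, were in R, and the names here are mine); it assumes the GA never produces an all-zero vector, which would make the normalization divide by zero.

```java
import java.util.function.ToDoubleFunction;

public class SimplexFitness {
    // Evaluate f on a copy of x rescaled to sum to 1, so the GA can evolve
    // arbitrary nonnegative vectors with no constraint handling at all.
    static double fHat(double[] x, ToDoubleFunction<double[]> f) {
        double total = 0.0;
        for (double v : x) total += v;
        double[] y = new double[x.length];
        for (int i = 0; i < x.length; i++) y[i] = x[i] / total;
        return f.applyAsDouble(y);
    }
}
```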

R has multiple GA packages. I used the `genalg` package in my experiments. Running 100 generations with a population of size 200 took several seconds (so longer than what Nelder-Mead took), but it produced a somewhat better solution. Since the GA is a random algorithm, running it repeatedly will produce different results, some worse, possibly some better. You could also try restarting Nelder-Mead when the polytope gets too small, starting from a new polytope centered around the current optimum, which might possibly improve on the solution obtained.

This was all mostly just to satisfy my curiosity. My R code for both the Nelder-Mead and GA approaches is in an R notebook you are free to download.


Saturday, May 23, 2020

Of ICUs and Simulations

I'm a fan of the INFORMS "Resoundingly Human" podcasts, particularly since they changed the format to shorter (~15 minute) installments. I just listened to a longer entry (40+ minutes) about the use of OR (and specifically simulation models) to help with hospital planning during the pandemic. (Grrrr. I'd hoped to keep the word "pandemic" out of my blog. Oh well.) The title is "The dangers of overcrowding: Helping ICUs preserve essential bed space", and the guest is Frances Sneddon, CTO of Simul8 Corporation. I thought the content was interesting, and Frances was very enthusiastic presenting it, so if you have any interest in simulation and/or how OR can help during the (here it comes again) pandemic, I do recommend giving it a listen.

One thing that definitely got my attention was Frances's emphasis on building simulation models in a rapid / interactive / iterative / agile way. ("Rapid" was very much her word, and she used "agile" toward the end of the podcast. "Interactive" and "iterative" are my words for the process she described.) Basically (again with my paraphrasing), she said that the best outcomes occur when simulations are born from discussions among users and modelers where the users ask questions, followed by fairly rapid building and running of a new/revised model, followed by discussions with the users and more of the same. Frances at one point drew an analogy to detective work, where running one simulation lets you ferret out clues that lead to questions that lead to the next model.

To some extent, I think the same likely holds true of other applications of OR in the real world, including optimization. Having one conversation with the end users, wandering off into a cave to build a model, and then presenting it as a fait accompli is probably not a good way to build trust in the model results and may well leave the user with a model that fundamentally does not get the job done. As a small example, I once worked on a model for assigning school-age children to recreational league athletic teams. The first version of the model satisfied all stated constraints, but the user told me it would not work. Some parents have multiple children enrolled in the league, and it is unworkable to expect them to ferry their kids to different teams playing or practicing in different places. So siblings must go on the same team. (There were other constraints that emerged after the initial specification, but I won't bore you with the details.)

So on the one hand, I'm predisposed to agree at least somewhat with what Frances said. Here comes the cognitive dissonance (as my erstwhile management colleagues would say). Once upon a time I actually taught simulation modeling. (I won't say exactly when, but in the podcast Frances mentions having been in the OR field for 20 years, and how saying that makes her feel old. The last time I taught simulation was before she entered the field.) Two significant issues, at least back then, were verifying and validating simulation models. I suspect that verification (essentially checking the correctness of the code, given the model) is a lot easier now, particularly if you are using GUI-based model design tools, where the coding looks a lot like drawing a flow chart from a palette of blocks. The model likely was also presented as a flow chart, so comparing code to model should be straightforward (put the two flow charts side by side). Validation, the process of confirming that the model adequately represents the real system, may or may not be easier than in the past. To some extent you can achieve "face validity" by talking through the assumptions of the model with the users during those interactive sessions, helped by a flow chart.

Back in my day, we also talked about historical validation (running the model with historical inputs and seeing if the results reasonably tracked with historical outputs). When you are trying to answer "what if" questions (what if we reconfigure the ICU this way, or change admissions this way, or ...?), you likely don't have historical data for the alternate configurations, but you can at least validate that the model adequately captures the "base case", whatever that is. Also, "what if" questions are likely to lead you down paths for which you lack hard data for parameter estimates. What if we build a tent hospital in Central Park (which has never been done before)? What do we use for the rate at which patients experience allergy attacks (from plant life in the park that simply does not exist inside the hospital)? My guess is that your only recourse is to run the simulation for multiple values of the mystery parameter, which leads us to a geometric explosion of scenarios as we pile on uncertain parameters. So my question is this: in an interactive loop (meet with users - hack model - collect runs / synthesize output - repeat), can we take reasonable care to preserve validity without exhausting the parties involved, or overloading them with possibilities to the point that there is no actual take-away?

Informed opinions are welcome in the comments section. (It's an election year here, so I'm already maxed out on uninformed opinions.)

Friday, April 24, 2020

Generating Random Digraphs

In a recent post, OR consultant and blogger Erwin Kalvelagen discussed generating a random sparse network in GAMS. More specifically, he starts with a fixed set of nodes and a desired number of arcs, and randomly generates approximately that number of arcs. His best results, in terms of execution time, came from exporting the dimensions to R, running a script there, writing out the arcs and importing them back into GAMS.

There are three possible issues with the script. Erwin acknowledged the first, which applies to all his approaches: after removing duplicates, he wound up with fewer than the targeted number of arcs. In many applications this would not be a problem, since you would be looking for "about 1% density" rather than "exactly 1% density". Still, there might be times when you need a specific number of arcs, period. You could supplement Erwin's method with a loop that would test the number of arcs and, if short, would generate more arcs, remove duplicates, add the survivors to the network and repeat.

The second possible issue is the occurrence of self-loops (arcs with head and tail the same, such as (5, 5)). Again, this may or may not be a problem in practice, depending on the application. I rarely encounter network applications where self-loops are expected, or even tolerated. Again, you could modify Erwin's code easily to remove self-loops, and it would not increase execution time much.

The third possible issue is that some nodes may be "orphans" (no arcs in or out), and others may be accessible only one way (either inward degree 0 or outward degree 0). Once again, the application will dictate whether this is a matter of concern.

I decided to test a somewhat different approach to generating the network (using R). It has the advantage of guaranteeing the targeted number of arcs, with no self-loops. (It does not address the issue of orphan nodes.) It has the disadvantage of being slower than Erwin's algorithm (but by what I would call a tolerable amount). My approach is based on assigning an integer index to every possible arc. Assume there are $n$ nodes, indexed $0, \dots, n-1$. (Erwin uses 1-based indexing, but it is trivial to adjust from 0-based to 1-based after the network has been generated.) There are $n^2$ arcs, including self-loops, indexed from $0,\dots,n^2-1$. The arc with index $k$ is given by $$f(k) = (k \div n, k \mod n),$$where $\div$ denotes the integer quotient (so that $7 \div 3 = 2$). A self-loop is an arc whose index $k$ satisfies $k\div n = k \mod n$; those are precisely the arcs with indices $k=m(n + 1)$ for $m=0,\dots,n-1$. So my version of the algorithm is to start with the index set $\lbrace 0,\dots,n^2-1\rbrace$, remove the indices $0, n+1, 2n+2,\dots, n^2-1$, take a random subset of the survivors and apply $f()$ to them.
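Here is a Java sketch of the index method (my actual comparison was done in R). One liberty taken: rather than building the full index set and taking a random subset, this version samples indices with rejection, which produces the same thing, a uniform random set of $m$ distinct non-self-loop arcs.

```java
import java.util.LinkedHashSet;
import java.util.Random;
import java.util.Set;

public class RandomDigraph {
    // Generate m distinct arcs (tail, head) on nodes 0..n-1, no self-loops.
    public static int[][] arcs(int n, int m, long seed) {
        Random rng = new Random(seed);
        Set<Long> chosen = new LinkedHashSet<>(); // silently rejects duplicates
        long nn = (long) n * n;
        while (chosen.size() < m) {
            long k = (long) (rng.nextDouble() * nn); // arc index in [0, n^2)
            if (k / n != k % n) chosen.add(k);       // skip self-loops
        }
        int[][] result = new int[m][2];
        int a = 0;
        for (long k : chosen) {
            result[a][0] = (int) (k / n); // tail = k div n
            result[a][1] = (int) (k % n); // head = k mod n
            a++;
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(arcs(5000, 250_000, 123).length + " arcs generated");
    }
}
```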

I have an R Notebook that compares the two algorithms, using the same dimensions Erwin had: $n=5000$ nodes, with a target density of 1% (250,000 arcs). Timing is somewhat random, even though I set a fixed random number seed for each algorithm. The notebook includes both code and output. As expected, my code gets the targeted number of arcs, with no self-loops. Erwin's code, as expected, comes up a bit short on the number of arcs, and contains a few (easily removed) self-loops. Somewhat interestingly, in my test runs every node had both inward and outward degree at least 1 for both algorithms. I think that is a combination of a fairly large arc count and a bit of luck (the required amount of luck decreasing as the network density increases). If orphans, or nodes with either no outbound or no inbound arcs, turn out to be problems, there is a fairly easy fix for both methods. First, randomly generate either one or two arcs incident on each node (depending on whether you need both inward and outward arcs everywhere). Then generate the necessary number of additional arcs by adjusting the sample size. As before, you might come up a few arcs short with Erwin's algorithm (unless you include a loop to add arcs until the target is reached). In my algorithm, you can easily calculate the indices of the initial set of arcs (the index of arc $(i,j)$ is $n\times i + j$) and then just remove those indices at the same time that you remove the indices of the self-loops, before generating the remaining arcs.

Tuesday, April 21, 2020

A CP Model for Toasting Bread

A question on Mathematics Stack Exchange deals with a problem (apparently from the book "Thinking Mathematically") about toasting bread on a grill. The grill can hold two slices at a time and can only toast one side of each at a time. You have three slices to toast, and the issue is to figure out how to do it in the minimum possible amount of time (what operations management people refer to as the makespan).

The questioner had a solution that I was able to prove is optimal, using a constraint programming (CP) model that I coded using the Java API to IBM's CPOptimizer (part of the CPLEX Studio product). I won't swear my model is elegant or efficient, since I'm pretty new to CPO, but I think it is correct. If anyone getting started with CPO and the Java API wants to see the source code, it is available in my repository. I'll describe a few key aspects below.

I assumed in my model that there is a single cook. The fundamental components of the model are CPO "interval variables" (instances of IloIntervalVar) for each task (inserting a slice, toasting one side, removing a slice, flipping a slice) along with a dummy task for being done and a placeholder task I called "reversing". Interval variables represent time spans during which tasks are done.

In the problem, there are two ways to get from toasting the front of a slice to toasting the back: you can leave it on the grill and flip it; or you can remove it and (later) replace it with the other side up. Since I didn't know a priori which slices will be handled which way, I created interval variables for removing each slice after the first side, replacing each slice with the second side up, and flipping each slice. Those variables are declared optional, meaning each interval may or may not show up in the solution. For each slice, there is an interval variable for the "reversing" task that is not optional. Each slice has to be reversed, one way or the other. The tasks for replacing a slice (after removing it) and for flipping the slice are listed as alternatives for the reversing task for that slice, which means exactly one of flipping or replacing must be in the solution. Separate constraints ensure that a slice is reinserted if and only if it was removed after the first side toasted. Those constraints use the IloCP.presenceOf function to test whether the remove and reinsert intervals are present, and set the results equal (so both present or neither present).
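Here is a stripped-down sketch of that logic for a single slice, using placeholder durations and names (this is not the code from my repository):

```java
import ilog.concert.IloException;
import ilog.concert.IloIntervalVar;
import ilog.cp.IloCP;

public class ReverseSketch {
    public static void main(String[] args) throws IloException {
        IloCP cp = new IloCP();
        IloIntervalVar reverse = cp.intervalVar(1, "reverse1"); // must occur
        IloIntervalVar flip = cp.intervalVar(1, "flip1");
        IloIntervalVar remove = cp.intervalVar(1, "remove1");
        IloIntervalVar reinsert = cp.intervalVar(1, "reinsert1");
        flip.setOptional();
        remove.setOptional();
        reinsert.setOptional();
        // Exactly one of {flip, reinsert} implements the reversal.
        cp.add(cp.alternative(reverse, new IloIntervalVar[] {flip, reinsert}));
        // The slice is reinserted if and only if it was removed first.
        cp.add(cp.eq(cp.presenceOf(remove), cp.presenceOf(reinsert)));
        cp.add(cp.endBeforeStart(remove, reinsert));
    }
}
```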

The sequencing of operations (insert, toast first side, reverse, toast second side, remove) is enforced through a bunch of constraints that use IloCP.endBeforeStart (which says the first argument has to end before the second argument starts). The dummy "done" task is sequenced to start only after each slice has been removed for the final time. I'm pretty sure the objective value (the time everything is done) could be handled other ways, but I tend to think of completion as a zero-length task.

The cook can only do one thing at a time. This is handled using the IloCP.noOverlap function. It is passed a list of every interval that requires the cook's attention (everything but the actual toasting tasks), and prevents any of those intervals from overlapping with any other.

Finally, I need to prevent more than two slices from occupying the grill at any one time. The noOverlap function is no help here. Instead, I use an instance of IloCumulFunctionExpr, which represents a function over time that changes as intervals begin and end. In this case, the function measures occupancy. This is handled by treating the usage as a combination of step functions (IloCP.stepAtStart and IloCP.stepAtEnd). Usage steps up by one at the start of the task of inserting a slice and steps down by one at the end of the task of removing a slice. (Toasting and flipping have no effect on occupancy.) The Javadoc for the relevant functions is a bit, um, sparse, essentially saying nothing about the step height argument. Thus I discovered the hard way (through error messages) that adding a step with height -1 when a slice was removed was not acceptable. Instead, I had to subtract a step of height +1.
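The occupancy bookkeeping then looks something like the following sketch (placeholder names and durations again):

```java
import ilog.concert.IloCumulFunctionExpr;
import ilog.concert.IloException;
import ilog.concert.IloIntervalVar;
import ilog.cp.IloCP;

public class GrillCapacity {
    public static void main(String[] args) throws IloException {
        IloCP cp = new IloCP();
        int slices = 3;
        IloCumulFunctionExpr occupancy = cp.cumulFunctionExpr();
        for (int s = 0; s < slices; s++) {
            IloIntervalVar insert = cp.intervalVar(1, "insert" + s);
            IloIntervalVar removeLast = cp.intervalVar(1, "removeLast" + s);
            // Occupancy rises when a slice goes onto the grill ...
            occupancy = cp.sum(occupancy, cp.stepAtStart(insert, 1));
            // ... and falls when it comes off; a step of height -1 is
            // rejected, so subtract a step of height +1 instead.
            occupancy = cp.diff(occupancy, cp.stepAtEnd(removeLast, 1));
        }
        cp.add(cp.le(occupancy, 2)); // at most two slices at any time
    }
}
```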

Although it was not really necessary on a problem this small, I removed some symmetry resulting from the arbitrary ordering of the slices by setting precedence constraints saying that slice 1 is started before slice 2, which in turn is started before slice 3.

It is possible to model the problem as an integer program, and in fact I initially leaned that direction. The IP model, however, would be bulkier and less expressive (which would make it more prone to logic errors), and quite possibly would be slower to solve. CPOptimizer is designed specifically with scheduling problems in mind, so it is the better tool for this particular job.

Friday, April 17, 2020

Objective Constraints (Again)

Long ago, I did a couple of posts [1, 2] about constraints designed to bound objective functions. We are referring here to constraints that explicitly bound the value of the objective function in an integer or mixed-integer linear program. The typical application is when the user is minimizing $f(x)$ subject to $x\in X$ and specifies a bound $f(x) \le d$. (If maximizing, the inequality becomes $f(x)\ge d$.) The reason for doing so is to help the solver prune inferior nodes (nodes where $f(x) > d$ when minimizing) faster.

One way to accomplish the goal is to set a feasible starting solution $x^0 \in X$ for which $f(x^0)\le d$. This of course requires you to know such a solution. Also, setting a starting solution, even a good one, will likely steer the solver in a different direction than what it would have taken without the starting solution (meaning it will build a different tree), and this can wind up either faster or slower than not having the start, depending on where you sit on Santa's naughty/nice list and assorted random factors. (Asserting the bound by any of the other methods listed below can also have unintended consequences. Pretty much anything you do with a MIP can have unintended consequences.)

Assuming you have a bound in mind but not a starting solution, you have a few options. The main takeaways from those two posts were basically the following.
  1. If your solver has the capability, your best bet is probably to specify the bound via a parameter. (CPLEX has the "upper cutoff" parameter for min problems and the "lower cutoff" parameter for max problems to do just this.)
  2. Failing that, you can introduce a variable $z$ to represent your objective function, add a defining constraint $z = f(x)$, minimize $z$ and then specify $d$ as an upper bound for $z$. This may slow the solver some (for reasons explained in the prior posts) but is likely not as bad as the last option.
  3. The last option, which is the most obvious (and thus one users gravitate to), is to add the constraint $f(x) \le d$ to the model. This can slow the solver down noticeably.
The short version of why the last option is undesirable is that if the last constraint is not binding  (which will happen if $d$ is not the optimal value and the solver has found an optimal or near optimal solution), it is just excess baggage. If it is binding, it can cause dual degeneracy.
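For the record, here is what the first option looks like with CPLEX's Java API. This is just a sketch (the model-building code is elided), using the bound $d=-6$ from the example below.

```java
import ilog.concert.IloException;
import ilog.cplex.IloCplex;

public class CutoffExample {
    public static void main(String[] args) throws IloException {
        IloCplex cplex = new IloCplex();
        // ... build the minimization model here ...
        // Option 1: solutions with objective value above the cutoff are
        // pruned by the solver, with no extra constraint in the model.
        cplex.setParam(IloCplex.Param.MIP.Tolerances.UpperCutoff, -6.0);
        cplex.solve();
        cplex.end();
    }
}
```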

Someone recently asked about this, and I waved my hand and invoked "dual degeneracy", but I'm not sure how clear I was. So I thought I would augment the two previous posts with a small example.

Suppose that we are solving a MIP model, and at some node we are staring at the following LP relaxation:$$\begin{alignat*}{5} \min & {}-{}5x_{1} & {}+{}40x_{2} & {}-{}5x_{3} & {}+{}5x_{4}\\ \textrm{s.t.} & \phantom{\{\}-}x_{1} & {}-{}\phantom{4}6x_{2} & & {}-{}3x_{4} & {}+{}s_{1} & & & =-3\\ & \phantom{\{\}-}x_{1} & {}-{}\phantom{4}2x_{2} & {}+\phantom{5}{}x_{3} & {}+{}\phantom{4}x_{4} & & {}+{}s_{2} & & =\phantom{-}0\\ & {}-{}5x_{1} & {}+{}40x_{2} & {}-{}5x_{3} & {}+{}5x_{4} & & & {}+{}s_{3} & =-6 &\quad (*)\end{alignat*}$$where the variables are nonnegative, the $s$ variables are slacks, and the constraint (*) is our way of imposing an upper bound of -6 on the objective function. In matrix terms the problem is\begin{align} \min\quad & \bar{c}'\bar{x}\\ \textrm{s.t.}\quad & \bar{A}\bar{x}=\bar{b}\\ & \bar{x}\ge0 \end{align} with $\bar{x}=(x_1,\dots,x_4,s_1,\dots,s_3)'$, $\bar{c}=(-5,40,-5,5,0,0,0)'$, $\bar{b}=(-3,0,-6)'$ and $$\bar{A}=\left[\begin{array}{rrrrrrr} 1 & -6 & 0 & -3 & 1 & 0 & 0\\ 1 & -2 & 1 & 1 & 0 & 1 & 0\\ -5 & 40 & -5 & 5 & 0 & 0 & 1 \end{array}\right].$$The initial basis would be the slack variables, giving us an infeasible solution $x=0$, $s=(-3,0,-6)$ with reduced costs $r = \bar{c}$. The negative values of $s_1$ and $s_3$ cause the infeasibility.

MIP solvers commonly use the dual simplex method to eliminate infeasibility in a node LP. Dual simplex would pivot in the row $i$ with the most negative right-hand side value $\bar{b}_i$, and in the column $j$ for which the ratio $r_j/\bar{a}_{ij}$ is minimal among those where $\bar{a}_{ij}\lt 0$. Here $i=3$ and $j$ is either 1 or 3 (the ratio in both column 1 and column 3 being $-5/-5=1$). Suppose that the solver chooses column 1, making the new basis (in row order) $(s_1, s_2, x_1).$ After the pivot, the reduced cost vector becomes $\hat{r}=(0,0,0,0,0,0,-1)$, the new right-hand side vector is $\hat{b}=(-4.2, -1.2, 1.2)'$, and the new constraint matrix is $$\hat{A} = \left[\begin{array}{rrrrrrr} 0 & 2 & -1 & -2 & 1 & 0 & 0.2\\ 0 & 6 & 0 & 2 & 0 & 1 & 0.2\\ 1 & -8 & 1 & -1 & 0 & 0 & -0.2 \end{array}\right].$$The solution is still infeasible, and dual simplex will look to pivot in row 1 (where $\hat{b}$ is most negative). There are two possible pivot columns, columns 3 and 4, but the ratio used to distinguish them is zero in both cases because the reduced cost vector is all zeros (except for $s_3$, the slack in the objective constraint).

The same thing happens if we pivot in column 3 rather than column 1, and in fact it is possible to show that the reduced cost vector will be all zeros except for the entry of the slack variable in the objective constraint, as long as that slack is nonbasic. Since that slack variable will typically be nonbasic so long as the constraint is binding, and the constraint is useful only when binding, we can expect to see a lot of LPs where this occurs. The tie is survivable (we've already seen one tie for pivot column), but picture this occurring where there are many dual pivots required, with perhaps many eligible columns (negative coefficients) for each pivot, and they all have ratio 0. The solver will be flying somewhat blind when it picks pivot columns, which could plausibly slow things down.

References


[1] "Objective Functions Make Poor Constraints"
[2] "Objective Constraints: The Sequel"

Saturday, April 4, 2020

Tangents v. Secants Part II

This is a continuation of a recent post ("Approximating Nonlinear Functions: Tangents v. Secants") on how to work a nonlinear function into a mixed-integer linear programming model. As before, I'm sticking to functions of one variable. To recap the take-aways from that post, there are basically four ways I know to approximate a nonlinear function:
  1. use a piecewise-linear function based on secants;
  2. use a piecewise-linear function based on tangents;
  3. use a surrogate variable that is bounded (below if the function is convex, above if the function is concave) by a collection of linear functions derived from tangents; or 
  4. use the third technique plus a callback that adds additional linear tangent functions whenever a candidate solution underestimates (convex case) or overestimates (concave case) the true value of the function.
The first two methods apply generally, meaning that the function need not be convex or concave, and the constraint it appears in need not be a particular type ($\ge$, $\le$, $=$). The third and fourth methods only work when the function is convex or concave. For convex functions, tangents will underestimate the true value and secants will overestimate it; for concave functions, the opposite is true. For specific cases (convex function in a $\le$ constraint, concave function in a $\ge$ constraint), one of secants or tangents will produce feasible but potentially suboptimal solutions while the other will produce superoptimal but potentially infeasible solutions.

Before going on, I want to make two observations that I should have made in the first post. First, for the specific cases I just mentioned, if you solve the problem twice, once using secants and once using tangents, the true optimal objective value will fall between the values of the two solutions, so you will have an idea of how close to optimal the possibly suboptimal solution is. (I'll illustrate this below.) For nonlinear functions in equality constraints, or for functions that are neither convex nor concave, this would not work. Second, if the argument to the nonlinear function is integer-valued, it makes sense to construct a piecewise-linear function (first two options) with integer-valued break points or to construct tangents (third option) at integer-valued points. That way, you are guaranteed correct function values at least at some points in the domain of the function. This is easy to do with secants but considerably more work with tangents.

I have one more observation to make before getting to an example. In the fourth method I listed, with a convex function in a $\le$ constraint or a concave function in a $\ge$ constraint, if the solver finishes the search with an "optimal" solution, the solution will really be optimal. These are the cases where we would normally risk superoptimality, but the callback will prevent that from happening.

At this point, I'm going to present an example of one possible scenario. The problem is to select repeating order quantities for a collection of products. In the model to follow, capital letters will be parameters and lower case letters will be indices or variables. We start with $N$ products. For each product $i$, we know the annual demand ($D_i$), the unit price ($P_i$), the unit annual holding cost ($H_i$), the cost to place an order for the product ($S_i$, regardless of how much is being ordered), and the unit volume ($V_i$, the amount of storage space one unit occupies). In addition, we know the total storage capacity $C$ of the warehouse where the products will be stored. We will somewhat laughably assume that everything is deterministic.

Let $q_i$ denote the quantity of product $i$ ordered each time an order is placed, and $f_i$ the frequency (number of orders per year) with which product $i$ is replenished. The total annual cost, to be minimized, is $$\sum_{i=1}^N \left[P_iD_i + H_i\frac{q_i}{2} + S_i f_i \right],\tag{1}$$where the first term is the total cost of purchasing products (which is constant), the second term is the total cost of storing them (based on the average inventory level, which is half the order size), and the last term is the total cost for placing orders.

The nonlinear function in this problem is the one relating order quantity to order frequency:$$f_i = \frac{D_i}{q_i}\quad \forall i=1,\dots,N.\tag{2}$$For a single product, it would be easy to substitute out $f_i$ from the objective, leaving a function of just $q_i$, and then differentiate. The first order optimality condition leads to the well known economic order quantity (EOQ) formula$$q_i^* = \sqrt{\frac{2D_iS_i}{H_i}}.$$The catch here is that ordering the EOQ for every item might exceed our storage space. So we resort to a mixed-integer program, minimizing (1) subject to $$\frac{1}{2}\sum_{i=1}^n V_i q_i \le C \tag{3}$$with $$q_i\in \lbrace 1,\dots,D_i\rbrace\quad \forall i\in\lbrace 1,\dots,N\rbrace$$ and $$f_i \in\left[ 1,D_i\right]\quad \forall i\in\lbrace 1,\dots,N\rbrace ,$$plus some constraint(s) to connect $f_i$ to $q_i$. It's worth pausing here to note that at most one of $f_i$ and $q_i$ needs to be discrete. Here I am assuming that order quantities must be integers (we are ordering something like appliances, where a third of a washing machine is not a meaningful concept) but that order frequencies need not be integers (2.5 orders per year just means two orders one year, three the next, and so on).

What is left is to pick one of the methods for approximating the relationship between quantity and frequency, equation (2). For methods 1 and 2, we can add to (1) and (3) the constraint $$f_i = \ell_i(q_i)\quad\forall i\in\lbrace 1,\dots,N\rbrace,\tag{4}$$where $\ell_i()$ is a piecewise linear function derived from either tangents or secants to the reciprocal function $g(x)=1/x$. For method 3, we can instead compute $M_i$ tangent functions $\ell_{ij}()$ for each $i$ and add the constraints $$f_i \ge \ell_{ij}(q_i) \quad\forall i\in\lbrace 1,\dots,N\rbrace,\,\forall j \in\lbrace 1,\dots,M_i\rbrace.\tag{4'}$$Note that this may underestimate $f_i$ (and thus the cost of a solution) but will not affect feasibility (since the space constraint (3) does not contain $f_i$). Finally, for the fourth method, we can minimize (1) subject to (3) and (4') and also use a callback to add more constraints like (4') on the fly.
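For method 3, generating the constraints (4') is mechanical. Here is a hedged Java/CPLEX sketch for one product; the variable names are mine, and the tangent to $D/q$ at $q_0$ works out to $2D/q_0 - (D/q_0^2)\,q$.

```java
import ilog.concert.IloException;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

public class TangentCuts {
    // Add "count" tangent cuts f >= 2 D / q0 - (D / q0^2) q, with the
    // tangency points q0 equally spaced in (0, D]. fVar and qVar stand
    // for the model variables f_i and q_i.
    static void addTangents(IloCplex cplex, IloNumVar fVar, IloNumVar qVar,
                            double D, int count) throws IloException {
        for (int t = 1; t <= count; t++) {
            double q0 = t * D / count;
            cplex.addGe(fVar,
                cplex.sum(cplex.prod(-D / (q0 * q0), qVar), 2.0 * D / q0));
        }
    }
}
```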

I tried all four methods, using CPLEX 12.10, on a small test problem ($N=10$ products). I used a constant holding cost rate ($H_i = 0.2$) for all products, and set the space limit to $C=2136.41$. (Yes, it looks goofy, but I used random numbers to generate the problem.) The product level data was as follows:

Product   Unit Cost   Order Cost   Demand   Size
0         4.04        2.54         1706     1.57
1         2.37        2.55         1203     4.68
2         3.82        2.01         1206     1.56
3         2.92        2.52         1373     2.00
4         1.37        3.38         1029     4.79
5         2.56        2.52         1700     2.91
6         4.55        3.12         1314     4.28
7         1.07        3.18         1223     4.06
8         3.64        3.74         1916     3.32
9         1.97        2.13          630     2.52

For method 1, I used 10 evenly spaced breakpoints (so nine chords). For the other methods, I computed tangents at 10 evenly spaced points. Of course, more breakpoints or tangents would make for a more accurate approximation. The objective value for each of the four methods is as follows:

Method   Nominal   Actual
1        672.48    670.05
2        490.27    28946.48
3        490.27    28946.48
4        633.40    633.53

Here "nominal" is the objective value reported by CPLEX and "actual" is the actual cost, using the order quantities from the CPLEX solution but recalculating the order frequencies according to (2). Method 1, based on secants, overestimates frequency (and thus cost) slightly. Methods 2 and 3 massively underestimate some frequencies, and thus the overall cost. The reason is apparent from the next table, which shows the nominal and actual order frequencies for each product:

Product   Demand   Quantity   Actual Freq   Nominal Freq
0         1706     1          1706.00       19.89
1         1203     1          1203.00       19.80
2         1206     1          1206.00       19.85
3         1373     1          1373.00       19.83
4         1029     137        7.51          6.69
5         1700     1          1700.00       19.82
6         1314     1          1314.00       19.83
7         1223     164        7.46          6.64
8         1916     1          1916.00       19.91
9         630      85         7.41          6.61

For products with nontrivial order quantities, frequencies are underestimated a bit, but for products with order quantity 1 frequencies are massively underestimated (which is what attracts the solver to using such a small order quantity). Basically, the piecewise-linear approximation of the reciprocal relation (2) stinks at the low end of the quantity range, because the curve is steep and none of the tangents are close enough to that end of the quantity domain. This could be improved by forcing the piecewise-linear functions in method 2 to have a breakpoint at $q_i=1$ or by including a tangent function $\ell_i()$ calculated at $q_i=1$ in method 3. Still, to get a reasonable approximation you might need to add a bunch of tangents at that end of the curve.

Method 4 produces a solution that is actually optimal (to within convergence tolerances). The callback ignored discrepancies between nominal and actual order frequency below 0.01. Cost is very slightly underestimated, which I think could be fixed by setting that 0.01 tolerance even smaller. That might cause the program to generate a huge number of tangents in the callback, slowing it down considerably. As it was, the callback added 392 tangents to the initial set of 10 tangents during the solution run.
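For the curious, the heart of the method 4 callback is roughly the following (a sketch using CPLEX's generic callback mechanism, with my own placeholder names; it is not the code from the repository linked below). It would be attached to the solver with something like cplex.use(new TangentCallback(...), IloCplex.Callback.Context.Id.Candidate).

```java
import ilog.concert.IloException;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

// Rejects any candidate solution that underestimates f = D/q by more than
// a tolerance, adding the tangent cut at the candidate's value of q.
public class TangentCallback implements IloCplex.Callback.Function {
    private final IloCplex cplex;
    private final IloNumVar qVar, fVar; // the model's q_i and f_i
    private final double D;             // the product's annual demand
    private static final double TOL = 0.01;

    public TangentCallback(IloCplex cplex, IloNumVar qVar, IloNumVar fVar,
                           double D) {
        this.cplex = cplex; this.qVar = qVar; this.fVar = fVar; this.D = D;
    }

    @Override
    public void invoke(IloCplex.Callback.Context context) throws IloException {
        if (!context.inCandidate()) return;
        double q = context.getCandidatePoint(qVar);
        double f = context.getCandidatePoint(fVar);
        if (D / q - f > TOL) {
            // Cut: f + (D / q^2) qVar >= 2 D / q (the tangent at q).
            context.rejectCandidate(cplex.ge(
                cplex.sum(fVar, cplex.prod(D / (q * q), qVar)), 2.0 * D / q));
        }
    }
}
```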

I mentioned earlier that using both tangents and secants (in separate runs) would bracket the true optimal value in cases of convex (concave) functions in less than (greater than) constraints. Here the true optimal cost (around 633.5) is indeed below the nominal cost of the secant approach (672.48) and above the nominal cost of the tangent approach (490.27). Note that we have to use the nominal costs, which in the case of the tangent approach is at once horribly inaccurate for the solution produced but still a valid lower bound on the actual optimal cost.

If you would like to look at or play with my code (add breakpoints or tangents, add products, make the frequency rather than the order quantity discrete), you can find it at https://gitlab.msu.edu/orobworld/secants.

Friday, March 27, 2020

A Minimum Variance Partition Problem

Someone posed a question on Mathematics Stack Exchange that I answered there. This post is to explain the model I used (and generalize a bit).

In general terms, you start with a collection $\Omega=[\omega_1, \dots, \omega_n]$ of positive integers. I choose to think of the integers as weights. The mission is to partition $\Omega$ into subcollections $S_1, \dots, S_m$ such that (a) you use as few subcollections as possible (i.e., minimize $m$), (b) the variance of their weight totals is minimal, and (c) no subcollection in the partition has total weight greater than some specified constant $L$. (If, for example, $S_1 = [\omega_4, \omega_9, \omega_{10}]$ then the weight total for $S_1$ is $\omega_4 + \omega_9 + \omega_{10}$.)  Why am I saying "collections" rather than sets? This would be a set partitioning problem were it not for the fact that $\Omega$ may contain duplicate entries and so is technically a multiset.

The term "variance" is slightly ambiguous (population variance or sample variance), but happily that will not matter here. Let $$s_j =\sum_{i : \omega_i \in S_j} \omega_i$$be the weight of subcollection $S_j$. Since we are talking about a partition here (every item in exactly one subcollection), $$\sum_{j=1}^m s_j = \sum_{i=1}^n \omega_i = n \bar{\omega}$$where $\bar{\omega}$ is the (constant) mean of all the item weights, and so the mean of the subcollection weights is $$\bar{s} = \frac{n\bar{\omega}}{m},$$which for a given number $m$ of subcollections is constant. The variance of the subcollection weights is $$\frac{1}{m}\sum_{j=1}^m (s_j - \bar{s})^2=\frac{1}{m}\sum_{j=1}^m s_j^2-\bar{s}^2.$$To minimize that, we can just minimize $\sum_j s_j^2$ in the optimization model, and then do the arithmetic to get the variance afterward.

As originally articulated, the problem has two objectives: minimize the variance of the subcollection weights and minimize the number of subcollections used. Normally, a multiobjective problem involves setting priorities or assigning weights to objectives or something along those lines. For the original question, where there are only $n=12$ elements in $\Omega$ (and thus at most $m=12$ sets in the partition), a plausible approach is to find a minimum variance solution for each possible value of $m$ and present the user with the efficient frontier of the solution space, letting the user determine the trade-off between partition size and variance that most appeals to them. For the particular example in the SE question, I was a bit surprised to find that the variance increased monotonically with the number of sets in the partition, as seen in the following table.

# of Subcollections   Minimum Variance
 1                    infeasible
 2                    infeasible
 3                    infeasible
 4                    infeasible
 5                    672
 6                    1048.22
 7                    1717.39
 8                    3824.75
 9                    7473.73
10                    9025.80
11                    9404.81
12                    11667.06

So the five subcollection partition is minimal both in number of subcollections and weight variance. I would be surprised if that generalized.

The QP formulation uses two sets of variables. For $i=1,\dots,n$ and $j=1,\dots,m$, $x_{ij}\in \lbrace 0, 1\rbrace$ determines whether $\omega_i$ belongs to $S_j$ ($x_{ij}=1$) or not ($x_{ij}=0$). For $j=1,\dots,m$, $s_j \ge 0$ is the total weight of subcollection $j$. The objective function is simply $$\min_{x,s} \sum_{j=1}^m s_j^2$$which happily is convex. The first constraint enforces the requirement that the subcollections form a partition (every element being used exactly once): $$\sum_{j=1}^m x_{ij} = 1 \quad\forall i\in \lbrace 1,\dots,n\rbrace.$$For a partition to be valid, we do not want any empty sets, which leads to the following constraint:$$\sum_{i=1}^n x_{ij} \ge 1\quad\forall j\in\lbrace 1,\dots,m\rbrace.$$ The next constraint defines the weight variables: $$s_j = \sum_{i=1}^n \omega_i x_{ij}\quad\forall j\in\lbrace 1,\dots,m\rbrace.$$What about the limit $L$ on the weight of any subcollection? We can enforce that by limiting the domain of the $s$ variables to $s_j\in [0,L]$ for all $j$.

Finally, there is symmetry in the model. One source is duplicate items in $\Omega$ ($\omega_i = \omega_{i'}$ for some $i\neq i'$). That happens in the data of the SE question, but it's probably not worth dealing with in an example that small. Another source of symmetry is that, given any partition, you get another partition with identical variance by permuting the indices of the subcollections. That one is easily fixed by requiring the subcollections to be indexed in ascending (or at least nondecreasing) weight order:$$s_j \le s_{j+1}\quad\forall j\in\lbrace 1,\dots,m-1\rbrace.$$Is it worth bothering with? With the anti-symmetry constraint included, total run time for all 12 models was between one and two seconds on my PC. Without the anti-symmetry constraint, the same solutions for the same 12 models were found in a bit over 100 minutes.
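A condensed CPLEX sketch of the whole model, on toy data of my own (this is not the repository code linked below):

```java
import ilog.concert.IloException;
import ilog.concert.IloNumExpr;
import ilog.concert.IloNumVar;
import ilog.cplex.IloCplex;

public class MinVarPartition {
    public static void main(String[] args) throws IloException {
        int[] w = {7, 3, 5, 4, 9, 2}; // toy weights
        int m = 2;                    // number of subcollections
        double L = 20;                // weight cap per subcollection
        int n = w.length;
        IloCplex cplex = new IloCplex();
        IloNumVar[][] x = new IloNumVar[n][];
        for (int i = 0; i < n; i++) x[i] = cplex.boolVarArray(m);
        IloNumVar[] s = cplex.numVarArray(m, 0.0, L); // cap via upper bounds
        for (int i = 0; i < n; i++) {
            cplex.addEq(cplex.sum(x[i]), 1.0); // each item used exactly once
        }
        for (int j = 0; j < m; j++) {
            IloNumExpr weight = cplex.constant(0.0);
            IloNumExpr count = cplex.constant(0.0);
            for (int i = 0; i < n; i++) {
                weight = cplex.sum(weight, cplex.prod(w[i], x[i][j]));
                count = cplex.sum(count, x[i][j]);
            }
            cplex.addEq(s[j], weight);              // define s_j
            cplex.addGe(count, 1.0);                // no empty subcollections
            if (j > 0) cplex.addLe(s[j - 1], s[j]); // break symmetry
        }
        IloNumExpr obj = cplex.constant(0.0);
        for (int j = 0; j < m; j++) {
            obj = cplex.sum(obj, cplex.prod(s[j], s[j])); // sum of s_j^2
        }
        cplex.addMinimize(obj);
        if (cplex.solve()) {
            System.out.println("sum of squares = " + cplex.getObjValue());
        }
        cplex.end();
    }
}
```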

If you would like to see my Java code (which requires CPLEX), you can find it at https://gitlab.msu.edu/orobworld/mvpartition.

Tuesday, March 24, 2020

Approximating Nonlinear Functions: Tangents v. Secants


In a (much) earlier post [1], I discussed using SOS2 constraints to approximate a nonlinear function in a mixed-integer linear program (MILP). In this post, and probably the next one or two, I'm going to expand on that theme a bit. (By "a bit" I mean "endlessly".)

Summary


To save you some reading, in case you end up not interested, I'll summarize the take-aways here.
  1. We can approximate nonlinear functions in a MILP model using piecewise-linear functions based on either tangents or secants. Piecewise-linear functions implicitly add either binary variables or SOS2 constraints (or maybe both).
  2. For convex functions only appearing in $\le$ constraints or concave functions only appearing in $\ge$ constraints, we can say that using tangents may produce superoptimal solutions (look optimal but are actually infeasible) but will not produce suboptimal solutions, while using secants may produce suboptimal solutions but will not produce infeasible solutions. In any other case, super- and sub-optimality are both risks.
  3. For the same two combinations in the previous paragraph, we have a third option: using a set of linear inequalities based on tangents, rather than a piecewise-linear function. Besides being somewhat easier to set up, this has a potential advantage: using a callback, we can refine the approximation on the fly (by adding additional constraints based on new tangents as they are needed).

Gory details


Suppose that we are working on a MILP, and we need to incorporate a nonlinear function of some of the variables. To keep things simple, I'll illustrate with functions $f:\mathbb{R}\rightarrow\mathbb{R}$ of a single variable, but the concepts presented will extend (not necessarily easily) to functions of multiple variables. In what follows, I will refer to the set of points $(x,f(x))$ as the surface of $f$. A tangent to the surface at $x$ is a line (more generally, hyperplane) that touches the surface at $x$ and has the same slope as the surface at that one point. A secant to the surface is a line (more generally, hyperplane) that intersects the surface at multiple points. For a function of one variable, a chord is a segment of a secant between two consecutive intersection points. ("Consecutive" means that there are no other intersections of secant and surface between the endpoints of the chord.) I'm not sure whether the term "chord" is used in higher dimensions, although the concept clearly extends.
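To make the definitions concrete, here is a small illustrative Java sketch (the function and all names are mine) that computes tangent and secant lines for a smooth function of one variable, each as a pair (slope, intercept):

    // Illustration only: tangent and secant lines for a smooth univariate f.
    public class LineDemo {
        static double f(double x)      { return x * x; }   // an arbitrary convex function
        static double fPrime(double x) { return 2.0 * x; } // its derivative

        // Tangent at x = a: matches f in both value and slope at a.
        static double[] tangentAt(double a) {
            double slope = fPrime(a);
            return new double[] { slope, f(a) - slope * a };
        }

        // Secant through (a, f(a)) and (b, f(b)); the chord is the segment between them.
        static double[] secantThrough(double a, double b) {
            double slope = (f(b) - f(a)) / (b - a);
            return new double[] { slope, f(a) - slope * a };
        }
    }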

Figure 1 shows a convex function together with a few tangents (in blue).

Figure 1: Tangents to convex function

Figure 2 shows the same function with a few secants (in green). One of the secants intersects the surface at $(-5,15)$ and $(0,0)$. The dashed line segment between those points is a chord.

Figure 2: Secants to convex function

Figures 3 and 4 do the same for a concave function.

Figure 3: Tangents to concave function

Figure 4: Secants to concave function

A useful property of convex and concave functions is that tangents of convex functions are always less than or equal to the functions and tangents of concave functions are always greater than or equal to the functions. Similar claims cannot be made for secants. A chord of a convex function will always be greater than or equal to the function, but other points on its secant can be less than the function. For instance, in Figure 2 the chord from $x=-5$ to $x=0$ lies above the function surface, but the secant to which it belongs falls below the function surface for $x<-5$ or $x>0$. It is worth noting that tangents to nonconvex functions do not stay on one side of the surface. Figure 5 shows a nonconvex function along with one tangent (in blue) and one secant (in green). Both the tangent and the secant are above the function surface some places and below it other places.
Figure 5: Nonconvex function

In what follows, when I talk about convexity of the feasible region, I will be referring to the feasible region of the continuous relaxation (all integrality restrictions relaxed). Convexity of that region is critical to algorithms using LP bounds. There are two cases of initial interest, because they preserve that convexity. If the function $f()$ is convex and appears only in constraints of the form $f(x)+\dots\le b$ (where $\dots$ is empty or contains only convex terms), or is concave and appears only in constraints of the form $f(x)+\dots\ge b$ (where $\dots$ is empty or contains only concave terms), and if the other constraints do nothing to violate convexity, then the feasible region remains convex when $f()$ is introduced. Under any other scenario, you lose convexity of the feasible region when you introduce $f()$. So for now we will only consider convex $f()$ in $\le$ constraints or concave $f()$ in $\ge$ constraints. If $f()$ is convex or concave but appears in an inequality constraint with the wrong direction, or appears in an equation constraint, or if $f()$ is neither convex nor concave, we will deal with it later (or never, whichever comes second).

For simplicity, suppose that $f()$ is convex. We introduce a new variable $z$ that is a surrogate for $f(x)$, changing each constraint of the form $f(x)+\dots\le b$ to $z+\dots\le b$. Now we have to tie $z$ to $f(x)$, but without using the explicit constraint $z=f(x)$ (due to that whole nonlinearity thing). We can approximate the relationship either with tangents or with secants. With secants, we pick an arbitrary collection of values of $x$, calculate the corresponding values of $f(x)$, define a piecewise-linear function $\hat{f}(x)$ based on those values, and set $z=\hat{f}(x)$. In Figure 2, $\hat{f}(x)$ consists of the four chords shown (with endpoints at $x\in\left\{ -10,-5,0,5,10\right\} $).
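In CPLEX, the chord approach can be entered via the piecewiseLinear() method discussed in [2]. A sketch follows, inside a class with the usual ilog imports. The breakpoint values match those quoted above, which are consistent with $f(x)=x^2+2x$; that function is my inference from the figures, not something stated explicitly. Check the Javadoc of IloMPModeler.piecewiseLinear() for the exact overload in your version.

    // Sketch: chord (secant) approximation of f via a piecewise-linear expression.
    static IloNumVar addSecantSurrogate(IloCplex cplex) throws IloException {
        double[] breakX = { -10.0, -5.0, 0.0, 5.0, 10.0 };
        double[] breakY = {  80.0, 15.0, 0.0, 35.0, 120.0 }; // f at the breakpoints
        IloNumVar x = cplex.numVar(-10.0, 10.0, "x");
        IloNumVar z = cplex.numVar(-10.0, 150.0, "z");       // surrogate for f(x)
        // Pre/post slopes are irrelevant here because x is bounded; 0 is a placeholder.
        cplex.addEq(z, cplex.piecewiseLinear(x, 0.0, breakX, breakY, 0.0));
        return z; // use z wherever f(x) appeared in constraints
    }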

With tangents, we have a choice. Both options involve computing tangents at an arbitrary collection of values of $x$. In one version, we find where those tangents intersect and use the intersection points to define a piecewise-linear function. In Figure 1, the five tangents shown intersect at $\left\{ (-7.5,35),(-2.5,-5),(2.5,5),(7.5,65)\right\} $. We can add endpoints for where the first and last tangents exit the interval of interest (say $(-10,80)$ and $(10,120)$ for Figure 1). In the other version, we add the constraints $z\ge\ell_{j}(x)$ for $j=1,\dots,t$, where $t$ is the number of tangents computed and $\ell_{1}(x),\dots,\ell_{t}(x)$ are the linear functions corresponding to the tangents. In Figure 1, $t=5$ and for any $x$ we are constraining $z$ to be greater than or equal to the ordinates of all five dotted lines at that $x$. If $f()$ is concave and appears in the constraint $f(x)\ge b$, the only change is that the added constraints are $z\le\ell_{j}(x)$.
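In code, the second version is just a loop. Continuing the sketch above (same assumed $f(x)=x^2+2x$, tangent points of your choosing):

    // Sketch: tangent-based outer approximation, z >= f(a) + f'(a)(x - a),
    // added once per tangent point a. For a concave f in a >= constraint,
    // flip addGe to addLe.
    static void addTangentCuts(IloCplex cplex, IloNumVar x, IloNumVar z,
                               double[] tangentPts) throws IloException {
        for (double a : tangentPts) {
            double fa = a * a + 2.0 * a;   // f(a)
            double slope = 2.0 * a + 2.0;  // f'(a)
            cplex.addGe(z, cplex.sum(cplex.prod(slope, x),
                                     cplex.constant(fa - slope * a)));
        }
    }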

At this point, there are several things we can note.
  1. Calculating chords only requires the evaluation of $f(x)$ for various values of $x$ (plus some simple algebra). Calculating tangents requires computing the first derivative of $f(x)$ when $f()$ is smooth. (More generally, it involves calculating the subgradient of $f()$ at $x$.) It then requires calculating where the tangents intersect, if we are going to make a piecewise-linear function from the tangents. So if $f()$ is painful to work with, or if sorting out a lot of intersections does not appeal to you, you might prefer chords.
  2. Chords involve embedding a piecewise-linear function in a model. As discussed in [1], this can be done using type two special ordered sets (SOS2), which turns an LP into a MIP and turns a MIP into a somewhat more complicated MIP. In another post [2], I discussed a handy feature in the CPLEX APIs that allows for easy entry of a piecewise-linear function. (In the Java API, the method is IloCplexModeler.piecewiseLinear(), which in turn implements a method from the IloMPModeler interface.) The API method still introduces either binary variables or SOS2 constraints "under the hood", but at least the user is spared the tedious business of computing them. With tangents, the first option described does the same thing, but the second option just involves adding a bunch of linear constraints. (In CPLEX, if you calculate a lot of tangents, you might consider adding the constraints as "lazy constraints".)
  3. The way the approximations affect the solution differs between tangents and secants.
    1. In Figures 1 and 3, it is easy to see that if we replace $f(x)$ with a finite set of tangents, every feasible point remains feasible, but some infeasible points appear feasible. For instance, take the constraint $f(x)\le38$ (which becomes $z\le38$) with the function in Figure 1, redrawn below in Figure 6. The horizontal dotted line (in red) shows the limit (38) imposed by the constraint. The vertical dotted line (in cyan) shows what happens when $x=-7.5$. The black point, $(-7.5,f(-7.5))$ is above the red dotted line, indicating that $x=-7.5$ is infeasible. The blue point, $(-7.5,35)$, is however feasible in the relaxed problem, since it is on or above all the selected tangents and below the constraint limit. So the solver might accept a solution containing $x=-7.5$.
    2. In Figures 2 and 4, it is likewise easy to see that if we replace $f(x)$ with a piecewise-linear function consisting of chords, solutions that are feasible in the relaxed problem will also be feasible in the original problem, but "optimal" solutions to the relaxed problem may actually be suboptimal. The secant approximation to our convex function in Figure 2 is redrawn below in Figure 7. Staying with the constraint $z\le38$, consider what happens at $x=-7$ (the dotted vertical line). The actual function value $f(-7)=35$ is below the constraint limit, so $x=-7$ satisfies the constraint. (This is the black point on the graph.) Barring exclusion due to other constraints, it would be feasible and might be part of an optimal solution. The green point $(-7,41)$, on the piecewise-linear function defined by the secants, is not feasible, however, and so the solver will reject any solution containing $x=-7$, including possibly the true optimum.
Figure 6: Infeasible point

Figure 7: Feasible point excluded

Since tangents may produce infeasible "solutions" and secants may produce suboptimal solutions, we may need to solve the problem, get a sense of where the optimal solution might be located, refine the approximation near there and repeat. Note that if we use tangents for convex or concave functions (which produces an "outer approximation" to the original problem) and the final solution is really feasible (using the actual $f()$), there is no need to refine further. Unfortunately, with secants, I see no obvious way to know that our approximation is "good enough". With functions that are neither convex nor concave, or with convex/concave functions in equations or inequalities going the "wrong" direction, tangents no longer provide an outer approximation (Figure 5), and we may need to refine and repeat.

This brings me to my last point, which pertains to the cases where tangents do provide an outer approximation and where we add all the tangents as constraints rather than turning them into a piecewise-linear function. With solvers, like CPLEX, that allow users to post new constraints during the branch-and-bound process, we can refine on the fly. Referring back to Figure 6, suppose that I am solving the problem using CPLEX, with a lazy constraint callback attached, and CPLEX identifies a solution containing $x=-7.5$. Inside the callback, I can compute $f(-7.5)$, observe that it is greater than the constraint cutoff of $38$ (and also greater than $z=35$), calculate a new tangent at the black point, and add it as a lazy constraint. This will make the proposed new incumbent infeasible. Figure 8 shows this, with the orange dashed line being the tangent at $x=-7.5$. After adding it to our set of tangents (and adding the constraint $z\ge\ell(x)$, where $\ell(x)$ is the linear function given by the new tangent), the blue point no longer gives a possible value for $z$. At $x=-7.5$, the maximum value of any tangent will be the value of $\ell(-7.5)$, and that will equal $f(-7.5)$ since we calculated the tangent there. (It's hard to see in the figure, but trust me, the orange line goes through the black point.) So the solver will update $z$ to the correct value and, in this case, reject the solution as violating the constraint $z\le38$.
Figure 8: Adding a tangent
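For concreteness, here is roughly what that looks like with the current generic callback API in Java (the legacy lazy constraint callback works similarly). This is my reconstruction, using the same assumed $f(x)=x^2+2x$ as above, not production code; verify the method signatures against your CPLEX version.

    import ilog.concert.*;
    import ilog.cplex.*;

    // Candidate-context callback that adds a violated tangent cut on the fly.
    public class TangentCallback implements IloCplex.Callback.Function {
        private final IloCplex cplex;  // used to build the cut
        private final IloNumVar x, z;  // z is the surrogate for f(x)

        public TangentCallback(IloCplex cplex, IloNumVar x, IloNumVar z) {
            this.cplex = cplex; this.x = x; this.z = z;
        }

        @Override
        public void invoke(IloCplex.Callback.Context context) throws IloException {
            if (context.getId() == IloCplex.Callback.Context.Id.Candidate) {
                // (Assuming a bounded model; otherwise test isCandidatePoint() first.)
                double xv = context.getCandidatePoint(x);
                double zv = context.getCandidatePoint(z);
                double fx = xv * xv + 2.0 * xv;    // true f at the candidate (assumed f)
                if (zv < fx - 1e-6) {              // z understates f: cut this point off
                    double slope = 2.0 * xv + 2.0; // f'(xv)
                    // New tangent: z >= slope*x + (fx - slope*xv)
                    IloRange cut = cplex.ge(cplex.diff(z,
                        cplex.sum(cplex.prod(slope, x),
                                  cplex.constant(fx - slope * xv))), 0.0);
                    context.rejectCandidate(cut);
                }
            }
        }
    }

    // Attach with: cplex.use(new TangentCallback(cplex, x, z),
    //                        IloCplex.Callback.Context.Id.Candidate);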

References

[1] "Linearize That!"
[2] "Piecewise Linear Functions in CPLEX"

Monday, February 17, 2020

Reversing Differences

Fellow blogger Håkan Kjellerstrand posted an interesting question on OR Stack Exchange recently. Starting from a list of integers, it is trivial to compute the list of all pairwise absolute differences, but what about going in the other direction? Given the pairwise (absolute) differences, with duplicates removed, can you recover the source list (or a source list)?

We can view the source "list" as a vector $x\in\mathbb{Z}^n$ for some dimension $n$ (equal to the length of the list). With duplicates removed, we can view the differences as a set $D\subset \mathbb{Z}_+$. So the question has to do with recovering $x$ from $D$. Our first observation kills any chance of recovering the original list with certainty:
If $x$ produces difference set $D$, then for any $t\in\mathbb{Z}$ the vector $x+t\cdot (1,\dots,1)'$ produces the same set $D$ of differences.
Translating all components of $x$ by a constant amount has no effect on the differences. So there will be an infinite number of solutions for a given difference set $D$. A reasonable approach (proposed by Håkan in his question) is to look for the shortest possible list, i.e., minimize $n$.

Next, observe that $0\in D$ if and only if two components of $x$ are identical. If $0\notin D$, we can assume that all components of $x$ are distinct. If $0\in D$, we can solve the problem for $D\backslash\{0\}$ and then duplicate any one component of the resulting vector $x$ to get a minimum dimension solution to the original problem.

Combining the assumption that $0\notin D$ and the observation about adding a constant having no effect, we can assume that the minimum element of $x$ is 1. That in turn implies that the maximum element of $x$ is $1+m$ where $m=\max(D)$.

From there, Håkan went on to solve a test problem using constraint programming (CP). Although I'm inclined to suspect that CP will be more efficient in general than an integer programming (IP) model, I went ahead and solved his test problem via an IP model (coded in Java and solved using CPLEX 12.10). CPLEX's solution pool feature found the same four solutions to Håkan's example that he did, in under 100 ms. How well the IP method scales is an open question, but it certainly works for modest size problems.

The IP model uses binary variables $z_1, \dots, z_{m+1}$ to decide which of the integers $1,\dots,m+1$ are included in the solution $x$. It also uses variables $w_{ij}\in [0,1]$ for all $i,j\in \{1,\dots,m+1\}$ such that $i \lt j$. The intent is that $w_{ij}=1$ if both $i$ and $j$ are included in the solution, and $w_{ij} = 0$ otherwise. We could declare the $w_{ij}$ to be binary, but we do not need to; constraints will force them to be $0$ or $1$.

The full IP model is as follows:

\[ \begin{array}{lrlrc} \min & \sum_{i=1}^{m+1}z_{i} & & & (1)\\ \textrm{s.t.} & w_{i,j} & \le z_{i} & \forall i,j\in\left\{ 1,\dots,m+1\right\} ,i\lt j & (2)\\ & w_{i,j} & \le z_{j} & \forall i,j\in\left\{ 1,\dots,m+1\right\} ,i\lt j & (3)\\ & w_{i,j} & \ge z_{i}+z_{j}-1 & \forall i,j\in\left\{ 1,\dots,m+1\right\} ,i\lt j & (4)\\ & w_{i,j} & =0 & \forall i,j\in\left\{ 1,\dots,m+1\right\} \textrm{ s.t. }(j-i)\notin D & (5)\\ & \sum_{i,j\in\left\{ 1,\dots,m+1\right\} |j-i=d}w_{i,j} & \ge 1 & \forall d\in D & (6)\\ & z_{1} & = 1 & & (7) \end{array} \]

The objective (1) minimizes the number of integers used. Constraints (2) through (4) enforce the rule that $w_{ij}=1$ if and only if both $z_i$ and $z_j$ are $1$ (i.e., if and only if both $i$ and $j$ are included in the solution). Constraint (5) precludes the inclusion of any pair $i < j$ whose difference $j - i$ is not in $D$, while constraint (6) says that for each difference $d \in D$ we must include at least one pair $i < j$ that produces that difference ($j - i = d$). Finally, since we assumed that our solution starts with minimum value $1$, constraint (7) ensures that $1$ is in the solution. (This constraint is redundant, but appears to help the solver a little, although I can't be sure given the short run times.)
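Here is a condensed sketch of the model-building logic (my condensation, not the repository code). One implementation detail: when $(j-i)\notin D$, fixing $w_{ij}=0$ in (5) combines with (4) to forbid using both $i$ and $j$, so in code it is simpler to skip creating $w_{ij}$ and add that restriction directly.

    import java.util.*;
    import ilog.concert.*;
    import ilog.cplex.*;

    public class ReverseDiffs {
        // Build the IP for a difference set D (0 excluded), with m = max(D).
        public static IloCplex build(Set<Integer> D) throws IloException {
            int m = Collections.max(D);
            IloCplex cplex = new IloCplex();
            IloNumVar[] z = cplex.boolVarArray(m + 1);   // z[i-1] = 1 iff integer i is used
            IloNumVar[][] w = new IloNumVar[m + 2][m + 2];
            cplex.addMinimize(cplex.sum(z));             // (1)
            for (int i = 1; i <= m; i++) {
                for (int j = i + 1; j <= m + 1; j++) {
                    if (D.contains(j - i)) {
                        w[i][j] = cplex.numVar(0.0, 1.0);
                        cplex.addLe(w[i][j], z[i - 1]);  // (2)
                        cplex.addLe(w[i][j], z[j - 1]);  // (3)
                        cplex.addGe(w[i][j],             // (4)
                            cplex.diff(cplex.sum(z[i - 1], z[j - 1]), 1.0));
                    } else {
                        // (4) + (5) combined: i and j cannot both be used
                        cplex.addLe(cplex.sum(z[i - 1], z[j - 1]), 1.0);
                    }
                }
            }
            for (int d : D) {                            // (6): each difference is covered
                IloLinearNumExpr covered = cplex.linearNumExpr();
                for (int i = 1; i + d <= m + 1; i++) covered.addTerm(1.0, w[i][i + d]);
                cplex.addGe(covered, 1.0);
            }
            cplex.addEq(z[0], 1.0);                      // (7): the value 1 is used
            return cplex;
        }
    }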

My Java code is available from my repository (bring your own CPLEX).

Tuesday, February 11, 2020

Collections of CPLEX Variables

Recently, someone asked for help online regarding an optimization model they were building using the CPLEX Java API. The underlying problem had some sort of network structure with $N$ nodes, and a dynamic aspect (something going on in each of $T$ periods, relating to arc flows I think). Forget about solving the problem: the program was running out of memory and dying while building the model.

A major issue was that they allocated two $N\times N\times T$ arrays of variables, and $N$ and $T$ were big enough that $2N^2T$ was, to use a technical term, ginormous. Fortunately, the network was fairly sparse, and possibly not every time period was relevant for every arc. So by creating only the IloNumVar instances they needed (meaning only for arcs that actually exist, in time periods that were actually relevant), they were able to get the model to build.

That's the motivation for today's post. We have a tendency to write mathematical models using vectors or matrices of variables. So, for instance, $x_i \, (i=1,\dots,n)$ might be an inventory level at each of $n$ locations, or $y_{i,j} \, (i=1,\dots,m; j=1,\dots,n)$ might be the inventory of item $i$ at location $j$. It's a natural way of expressing things mathematically. Not coincidentally, I think, CPLEX APIs provide structures for storing vectors or matrices of variables and for passing them into or out of functions. That makes it easy to fall into the trap of thinking that variables must be organized into vectors or matrices.

Last year I did a post ("Using Java Collections with CPLEX") about using what Java calls "collections" to manage CPLEX variables. This is not unique to Java. I know that C++ has similar memory structures, and I think they exist in other languages you might use with CPLEX. The solution to the memory issue I mentioned at the start was to create a Java container class for each combination of an arc that actually exists and a time epoch for which it would have a variable, and then associate instances of that class with CPLEX variables. So if we call the new class AT (my shorthand for "arc-time"), I suggested the model owner use a Map<AT, IloNumVar> to associate each arc-time combination with the variable representing it and a Map<IloNumVar, AT> to hold the reverse association. The particular type of map is mostly a matter of taste. (I generally use HashMap.) During model building, they would create only the AT instances they actually need, then create a variable for each and pair them up in the first map. When getting a solution from CPLEX, they would get a value for each variable and then use the second map to figure out for which arc and time that value applied. (As a side note, if you use maps and then need the variables in vector form, you can apply the values() method to the first map (or the keySet() method to the second one), and then apply the toArray() method to that collection.)
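In code, the pattern looks something like this (all names mine; a sketch of the idea, not the asker's model):

    import java.util.*;
    import ilog.concert.*;
    import ilog.cplex.*;

    // Container for one arc-time combination. Because instances serve as
    // HashMap keys, equals() and hashCode() must be implemented (omitted here).
    final class AT {
        final int from, to, time;
        AT(int from, int to, int time) { this.from = from; this.to = to; this.time = time; }
    }

    class ModelSketch {
        void buildAndSolve(IloCplex cplex, Collection<AT> neededCombos) throws IloException {
            Map<AT, IloNumVar> varOf = new HashMap<>();   // arc-time -> variable
            Map<IloNumVar, AT> comboOf = new HashMap<>(); // variable -> arc-time
            for (AT at : neededCombos) {                  // only combinations that occur
                IloNumVar v = cplex.numVar(0.0, Double.MAX_VALUE,
                    "flow_" + at.from + "_" + at.to + "_" + at.time);
                varOf.put(at, v);
                comboOf.put(v, at);
            }
            // ... build constraints and objective using varOf.get(...), then:
            if (cplex.solve()) {
                IloNumVar[] vars = varOf.values().toArray(new IloNumVar[0]);
                double[] vals = cplex.getValues(vars);
                for (int k = 0; k < vars.length; k++) {
                    AT at = comboOf.get(vars[k]);
                    // use at.from, at.to, at.time and vals[k] as needed
                }
            }
        }
    }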

Now you can certainly get a valid model using just arrays of variables, which was all that was available to me back in the Dark Ages when I used FORTRAN, but I think there are some benefits to using collections. Using arrays requires you to develop an indexing scheme for your variables. The indexing scheme tells you that the flow from node 3 to node 7 at time 4 will occupy slot 17 in the master variable vector. Here are my reasons for avoiding that.
  • Done correctly, the indexing scheme is, in my opinion, a pain in the butt to manage. Finding the index for a particular variable while writing the code is time-consuming and has been known to kill brain cells.
  • It is easy to make mistakes while programming (calculate an index incorrectly).
  • Indexing invites the error of declaring an array or vector with one entry for each combination of component indices (that $N\times N\times T$ matrix above), without regard to whether you need all those slots. Doing so wastes time and space, and the space, as we saw, may be precious.
  • Creating slots that you do not need can lead to execution errors. Suppose that I allocate a vector IloNumVar[] x = new IloNumVar[20] and use 18 of its slots, omitting slots 0 and 13. If I solve the model and then call getValues(x), CPLEX will throw an exception, because I am asking for values of two variables (x[0] and x[13]) that do not exist. Even if I create variables for those two slots, the exception will occur, because those two variables will not belong to the model being solved. (There is a way to force CPLEX to include those variables in the model without using them, but it's one more pain in the butt to deal with.) I've lost count of how many times I've seen messages on the CPLEX help forums about exceptions that boiled down to "unextracted variables".
So my advice is to embrace collections when building models where the variables do not have an obvious, gap-free indexing scheme.

Thursday, January 30, 2020

Generic Callback Changes in CPLEX 12.10

CPLEX 12.10 is out, and there have been a few changes to the new(ish) generic callbacks. Rather than go into them in detail (and likely screw something up), I'll just point you to the slides for a presentation by Daniel Junglas of IBM at the 2019 INFORMS Annual Meeting.

I've written about half a dozen posts about generic callbacks since IBM introduced them (which you can find by typing "generic callback" in the search widget on the blog). A couple of things have been added recently, and I thought I would mention them. The generic callback approach uses a single callback function that can be called from a variety of contexts, including when CPLEX solves a node relaxation ("RELAXATION" context), when it finds a candidate solution ("CANDIDATE" context) and, now, when it is ready to split a node into children ("BRANCHING" context).

The branching context is one of the new features. It brings back most of the functionality of the branch callback in the legacy callback system. Unfortunately, it does not seem to have the ability to attach user information to the child nodes, which was a feature that was occasionally useful in the legacy system. You can get more or less equivalent functionality by creating a data store (array, map, whatever) in your global memory and storing the node information keyed by the unique index number of each child node. The catch is that you are now responsible for memory management (freeing up space when a node is pruned and the associated information is no longer needed), and for dealing with thread synchronization issues.
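If you go that route, the bookkeeping might look roughly like the following. This is very much a sketch under assumptions: the per-node data class is hypothetical, and I have not verified the exact makeBranch() overload, so treat that call as shorthand for "create a child and capture its unique index".

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import ilog.concert.*;
    import ilog.cplex.*;

    // Sketch: user-managed replacement for legacy per-node data.
    public class BranchBookkeeping {
        // Shared across threads, hence a concurrent map.
        private final Map<Long, Object> nodeInfo = new ConcurrentHashMap<>();

        // Called from invoke() in the BRANCHING context. The makeBranch() call
        // is schematic; check the signature in your CPLEX version.
        void branchUp(IloCplex.Callback.Context context, IloNumVar var,
                      double val, double estimate, Object childData) throws IloException {
            long child = context.makeBranch(var, val, IloCplex.BranchDirection.Up, estimate);
            nodeInfo.put(child, childData);
            // When a node is pruned, remove its entry, or the map will leak memory.
        }
    }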

Another new feature is that you can now inject a heuristic solution (if you have one) from all three of the contexts I mentioned above. CPLEX gives you a variety of options for how it will handle the injected solution: "NoCheck" (CPLEX will trust you that it is feasible); "CheckFeasible" (CPLEX will check feasibility and ignore the solution if it is not feasible); "Propagate" (Daniel's explanation: CPLEX will "propagate fixed variables and accept if feasible"); and "Solve" (CPLEX will solve a MIP problem with fixed variables and accept the result if feasible). I assume the latter two mean that you provide a partial solution, fixing some variables but not others. (Unfortunately I was unable to make it to Daniel's talk, so I'm speculating here.)
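In the Java API, posting looks approximately like this (a sketch; the method and enum names match my reading of the 12.10 documentation, but verify them against your version):

    // Post a heuristic solution from inside invoke(); vars, vals and obj come
    // from whatever heuristic you ran. CheckFeasible asks CPLEX to verify it.
    void post(IloCplex.Callback.Context context, IloNumVar[] vars,
              double[] vals, double obj) throws IloException {
        context.postHeuristicSolution(vars, vals, obj,
            IloCplex.Callback.Context.SolutionStrategy.CheckFeasible);
    }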

I'm not sure if those are the only new features, but they are the ones that are most relevant to me. I invite you to read through Daniel's slides to get a more complete picture, including both the reasons for switching from legacy callbacks to generic callbacks and some of the technical issues in using them.