Friday, October 16, 2020

Multilogit Fit via LP

 A recent question on OR Stack Exchange has to do with getting an $L_1$ regression fit to some data. (I'm changing notation from the original post very slightly to avoid mixing sub- and super-scripts.) The author starts with $K$ observations $y_1, \dots, y_K$ of the dependent variable and seeks to find $x_{i,k} \ge 0$ ($i=1,\dots,N$, $k=1,\dots,K$) so as to minimize the $L_1$ error $$\sum_{k=1}^K \left|y_k - \sum_{i=1}^N \frac{e^{x_{i,k}}}{\sum_{j=1}^K e^{x_{i,j}}}\right|.$$ The author was looking for a way to linearize the objective function.

The solution I proposed there begins with a change of variables: $$z_{i,k}=\frac{e^{x_{i,k}}}{\sum_{j=1}^K e^{x_{i,j}}}.$$ The $z$ variables are nonnegative and must obey the constraint $$\sum_{k=1}^{K}z_{i, k}=1\quad\forall i=1,\dots,N.$$ With this change of variables, the objective becomes $$\sum_{k=1}^K \left|y_k - \sum_{i=1}^N z_{i,k} \right|.$$ Add nonnegative variables $w_k$ ($k=1,\dots, K$) and the constraints $$-w_k \le y_k - \sum_{i=1}^N z_{i,k} \le w_k \quad \forall k=1,\dots,K,$$ and the objective simplifies to minimizing $\sum_{k=1}^K w_k$, leaving us with an easy linear program to solve.
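For concreteness, here is a minimal sketch of the LP in R using the lpSolve package (an illustration with made-up dimensions, not the code from the notebook linked below). The $z$ variables are stored row by row, followed by the $w$ variables.

    library(lpSolve)  # assumed available; any LP solver would do

    set.seed(123)
    N <- 3; K <- 5                      # placeholder dimensions
    y <- runif(K)                       # placeholder observations
    nz <- N * K                         # z variables, row-major: z[i,k] at (i-1)*K + k
    nvar <- nz + K                      # z variables followed by w variables

    obj <- c(rep(0, nz), rep(1, K))     # minimize sum of w_k

    # sum_k z[i,k] = 1 for each i
    rows1 <- t(sapply(1:N, function(i) {
      v <- rep(0, nvar); v[((i - 1) * K + 1):(i * K)] <- 1; v
    }))
    # -w_k <= y_k - sum_i z[i,k] <= w_k, rewritten as
    #   sum_i z[i,k] + w_k >= y_k   and   sum_i z[i,k] - w_k <= y_k
    rows2 <- t(sapply(1:K, function(k) {
      v <- rep(0, nvar); v[(0:(N - 1)) * K + k] <- 1; v[nz + k] <- 1; v
    }))
    rows3 <- t(sapply(1:K, function(k) {
      v <- rep(0, nvar); v[(0:(N - 1)) * K + k] <- 1; v[nz + k] <- -1; v
    }))

    result <- lp(direction = "min", objective.in = obj,
                 const.mat = rbind(rows1, rows2, rows3),
                 const.dir = c(rep("=", N), rep(">=", K), rep("<=", K)),
                 const.rhs = c(rep(1, N), y, y))
    z <- matrix(result$solution[1:nz], nrow = N, byrow = TRUE)
    result$objval                       # the minimized L1 error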

That leaves us with the problem of getting from the LP solution $z$ back to the original variables $x$. It turns out the transformation from $x$ to $z$ is invariant with respect to the addition of constant offsets. More precisely, for any constants $\lambda_i$ ($i=1,\dots,N$), if we set $$\hat{x}_{i,k}=x_{i,k} + \lambda_i \quad \forall i,k$$ and perform the $x\rightarrow z$ transformation on $\hat{x}$, we get $$\hat{z}_{i,k}=\frac{e^{\lambda_{i}}e^{x_{i,k}}}{\sum_{j=1}^{K}e^{\lambda_{i}}e^{x_{i,j}}}=z_{i,k}\quad\forall i,k.$$ This allows us to convert from $z$ back to $x$ as follows. For each $i$, set $j_0=\textrm{argmin}_j z_{i,j}$ and note that $$\log\left(\frac{z_{i,k}}{z_{i,j_0}}\right) = x_{i,k} - x_{i, j_0}.$$ Given the invariance to constant offsets, we can set $x_{i, j_0} = 0$ and use the log equation to find $x_{i,k}$ for $k \neq j_0$. Choosing $j_0$ to be the index of the smallest $z_{i,\cdot}$ makes every log ratio nonnegative, so the recovered $x$ satisfies the original sign restriction $x \ge 0$.

Well, almost. I dealt one card off the bottom of the deck. There is nothing stopping the LP solution $z$ from containing zeros, which will automatically be the smallest elements since $z \ge 0$. That means the log equation involves dividing by zero, which has been known to cause black holes to erupt in awkward places. We can fix that with a slight fudge: in the LP model, change $z \ge 0$ to $z \ge \epsilon$ for some small positive $\epsilon$ and hope that the result is not far from optimal.
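With that fudge in place, the recovery step is just a couple of lines of R (a sketch, assuming every entry of $z$ is at least $\epsilon$ so the logs are defined):

    # Recover one row of x from the corresponding row of z.
    z_to_x <- function(zrow) {
      j0 <- which.min(zrow)
      log(zrow / zrow[j0])    # x[j0] = 0; offsets are irrelevant by the invariance above
    }
    x <- t(apply(z, 1, z_to_x))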

I tested this with an R notebook. In it, I generated values for $y$ uniformly over $[0, 1]$, fit $x$ using the approach described above, and also fit it using a genetic algorithm for comparison purposes. In my experiment (with dimensions $K=100$, $N=10$), the GA was able to match the LP solution if I gave it enough time. Interestingly, the GA solution was dense (all $x_{i,j} > 0$) while the LP solution was quite sparse (34 of 1,000 values of $x_{i,j}$ were nonzero). As shown in the notebook (which you can download here), the LP solution could be made dense by adding positive amounts $\lambda_i$ as described above, while maintaining the same objective value. I tried to make the GA solution sparse by subtracting $\lambda_i = \min_k x_{i,k}$ from the $i$-th row of $x$. It preserved nonnegativity of $x$ and maintained the same objective value, but reduced density only from 1 to 0.99.

Wednesday, September 30, 2020

A Greedy Heuristic Wins

A problem posted on OR Stack Exchange starts as follows: "I need to find two distinct values to allocate, and how to allocate them in a network of stores." There are $n$ stores (where, according to the poster, $n$ can be close to 1,000). The two values (let's call them $x_1$ and $x_2$) must be integer, with $x_1 \in \lbrace 1, \dots, k_1 \rbrace$ and $x_2 \in \lbrace k_1, \dots, k_2 \rbrace$ for given parameters $k_1 < k_2$. There is also a set of parameters $s_{i3}$ and a balance constraint saying $$0.95\, g(k_1 e) \le g(y) \le 1.05\, g(k_1 e)$$ where $y$ is the chosen allocation (defined in the next paragraph), $$g(y) = \sum_{i=1}^n \frac{s_{i3}}{y_i}$$ for any allocation $y$, and $e = (1,\dots, 1).$

The cost function (to be minimized) has the form $$f(x_1, x_2) = a\sum_{i=1}^n \left[ s_{i1}\cdot \left( \frac{s_{i2}}{y_i} \right)^b \right]$$ with $a$, $s_{i1}$, $s_{i2}$ and $b$ all parameters and $y_i \in \lbrace x_1, x_2 \rbrace$ the allocation to store $i$. There are two things to note about $f$. First, the leading coefficient $a\ (> 0)$ can be ignored when looking for an optimum. Second, given choices $x_1$ and $x_2>x_1$, the cheaper choice at all stores will be $x_1$ if $b < 0$ and $x_2$ if $b > 0$, since $(s_{i2}/y)^b$ is increasing in $y$ when $b < 0$ and decreasing in $y$ when $b > 0$ (assuming positive parameters).

It's possible that a nonlinear solver might handle this, but I jumped straight to metaheuristics and, in particular, my go-to choice among metaheuristics -- a genetic algorithm. Originally, genetic algorithms were intended for unconstrained problems, and were tricky to use with constrained problems. (You could bake a penalty for constraint violations into the fitness function, or just reject offspring that violated any constraints, but neither of those approaches was entirely satisfactory.) Then came a breakthrough, the random key genetic algorithm [1]. A random key GA uses a numeric vector $v$ (perhaps integer, perhaps byte, perhaps double precision) as the "chromosome". The user is required to supply a function that translates any such chromosome into a feasible solution to the original problem.

I did some experiments in R, using the GA package to implement a random key genetic algorithm. The package requires all "genes" (think "variables") to be the same type, so I used a double-precision vector of dimension $n+2$ for chromosomes. The last two genes have domains $(1, k_1 + 1)$ and $(k_1, k_2 + 1)$; the rest have domain $(0, 1)$. Decoding a chromosome $v$ proceeds as follows. First, $x_1 = \left\lfloor v_{n+1}\right\rfloor $ and $x_2 = \left\lfloor v_{n+2}\right\rfloor $, where $\left\lfloor z \right\rfloor$ denotes the "floor" (largest integer not exceeding) of $z$. The remaining values $v_1, \dots, v_{n}$ are sorted into ascending order, and their sort order is applied to the stores. So, for instance, if $v_7$ is the smallest of those genes and $v_{36}$ is the largest, then store $7$ will be first in the sorted list of stores and store $36$ will be last. (The significance of this sorting will come out in a second.)

 

Armed with this, my decoder initially assigns every store the cheaper choice between $x_1$ and $x_2$ and computes the value of $g()$. If $g()$ does not fall within the given limits, the decoder runs through the stores in their sorted order, switching the allocation to the more expensive choice and updating $g()$, until $g()$ meets the balance constraint. As soon as it does, we have the decoded solution. This cheats a little on the supposed guarantee of feasibility in a decoded solution, since there is a (small?) (nearly zero?) chance that the decoding process will fail with $g()$ jumping from below the lower bound to above the upper bound (or vice versa) after some swap. If it does, my code discards the solution. This did not seem to happen in my testing.
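Here is a rough sketch of that decoder in R (not the notebook code; s3 is the vector of balance parameters and lo, hi are the balance limits):

    # Decode a chromosome v of length n + 2 into an allocation (or NULL on failure).
    decode <- function(v, s3, b, lo, hi) {
      n <- length(v) - 2
      x1 <- floor(v[n + 1]); x2 <- floor(v[n + 2])
      cheaper <- if (b < 0) x1 else x2
      pricier <- if (b < 0) x2 else x1
      y <- rep(cheaper, n)                 # start every store at the cheaper choice
      g <- sum(s3 / y)
      for (i in order(v[1:n])) {           # stores in the chromosome's sort order
        if (g >= lo && g <= hi) break      # balance constraint met
        g <- g + s3[i] / pricier - s3[i] / cheaper
        y[i] <- pricier                    # switch store i to the pricier choice
      }
      if (g < lo || g > hi) NULL else list(x1 = x1, x2 = x2, y = y)
    }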

 

The GA seemed to work okay, but it occurred to me that I might be over-engineering the solution a bit. (This would not be the first time I did that.) So I also tried a simple greedy heuristic. Since $k_1$ and $k_2$ seem likely to be relatively small in the original poster's problem (whereas $n$ is not), my greedy heuristic loops through all valid combinations of $x_1$ and $x_2$. For each combination, it sets $v_1$ equal to the cheaper choice and $v_2$ equal to the more expensive choice, assigns the cheaper quantity $v_1$ to every store and computes $g()$. It also computes, for each store, the ratio \[ \frac{\left|\frac{s_{i3}}{v_{2}}-\frac{s_{i3}}{v_{1}}\right|}{s_{i1}\left(\left(\frac{s_{i2}}{v_{2}}\right)^{b}-\left(\frac{s_{i2}}{v_{1}}\right)^{b}\right)} \] in which the numerator is the absolute change in balance at store $i$ when switching from the cheaper allocation $v_1$ to the more expensive allocation $v_2$, and the denominator is the corresponding change in cost. The heuristic uses these ratios to select stores in descending "bang for the buck" order, switching each store to the more expensive allocation until the balance constraint is met.
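And a similarly rough sketch of the greedy heuristic (again with placeholder names for the problem data, not the notebook code):

    greedy <- function(s1, s2, s3, a, b, k1, k2) {
      n <- length(s1)
      gfun <- function(y) sum(s3 / y)                # balance function g()
      cost <- function(y) a * sum(s1 * (s2 / y)^b)   # cost function f()
      lo <- 0.95 * gfun(rep(k1, n)); hi <- 1.05 * gfun(rep(k1, n))
      best <- list(cost = Inf)
      for (x1 in 1:k1) for (x2 in k1:k2) {
        if (x1 == x2) next                           # the two values must be distinct
        v1 <- if (b < 0) x1 else x2                  # cheaper choice
        v2 <- if (b < 0) x2 else x1                  # more expensive choice
        y <- rep(v1, n)
        # bang for the buck: |change in balance| / change in cost for switching store i
        ratio <- abs(s3 / v2 - s3 / v1) / (s1 * ((s2 / v2)^b - (s2 / v1)^b))
        for (i in order(ratio, decreasing = TRUE)) {
          if (gfun(y) >= lo && gfun(y) <= hi) break
          y[i] <- v2
        }
        if (gfun(y) >= lo && gfun(y) <= hi && cost(y) < best$cost)
          best <- list(x1 = x1, x2 = x2, y = y, cost = cost(y))
      }
      best
    }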


Both the GA decoder and the greedy heuristic share the approach of initially allocating every store the cheaper choice and then switching stores to the more expensive choice until balance is attained. My R notebook generates a random problem instance with $n=1,000$ and then solves it twice, first with the GA and then with the greedy heuristic. The greedy heuristic stops when all combinations of $x_1$ and $x_2$ have been tried. Stopping criteria for the GA are more arbitrary. I limited it to at most 1,000 generations (with a population of size 100) or 20 consecutive generations with no improvement, whichever came first.

 

The results on a typical instance were as follows. The GA ran for 49 seconds and got a solution with cost 1065.945. The greedy heuristic needed only 0.176 seconds to get a solution with cost 1051.735. This pattern (greedy heuristic getting a better solution in much less time) repeated across a range of random number seeds and input parameters, including switching between positive and negative values of $b$.


If you are interested, you can browse my R notebook (which includes both code and results).

 

[1] Bean, J. C. (1994). Genetic Algorithms and Random Keys for Sequencing and Optimization. ORSA Journal on Computing, 6, 154-160.

Thursday, September 3, 2020

Installing Rcplex and cplexAPI

I've previously mentioned solving MIP models in R, using CPLEX. In one post [1], I used the OMPR package, which provides a domain specific language for model construction. OMPR uses the ROI package, and in particular the ROI.plugin.cplex package, to communicate with CPLEX. That, in turn, uses the Rcplex package. In another post [2], I used Rcplex directly. Meanwhile, there is still another package, cplexAPI, that provides a low-level API to CPLEX.

Both Rcplex and cplexAPI will install against CPLEX Studio 12.8 and earlier, but neither one installs with CPLEX Studio 12.9 or 12.10. Fortunately, IBM's Daniel Junglas was able to hack solutions for both of them. I'll spell out the steps I used to get Rcplex working with CPLEX 12.10. You can find the solutions for both in the responses to this question on the IBM Decision Optimization community site. Version information for what follows is: Linux Mint 19.3; CPLEX Studio 12.10; R 3.6.3; and Rcplex 0.3-3. Hopefully essentially the same hack works with Windows.

  1. Download Rcplex_0.3-3.tar.gz, put it someplace harmless (the Downloads folder in my case, but /tmp would be fine) and expand it, producing a folder named Rcplex.
  2. Go to the Rcplex folder and open the 'configure' file in a text editor (one you would use for plain text files).
  3. Line 1548 should read as follows:
    CPLEX_LIBS="-L${CPLEXLIBDIR} `${AWK} 'BEGIN {FS = " = "} /^CLNFLAGS/ {print $2}' ${CPLEX_MAKEFILE}`"
    Replace it with
    CPLEX_LIBS="-L${CPLEXLIBDIR} `${AWK} 'BEGIN {FS = " = "} /^CLNFLAGS/ {print $2}' ${CPLEX_MAKEFILE} | sed -e 's,\$(CPLEXLIB),cplex,'`"
    and save the modified file.
  4. Open a terminal in the parent directory of the Rcplex folder and run the following command:
    R CMD INSTALL --configure-args="--with-cplex-dir=.../CPLEX_Studio1210/cplex/" ./Rcplex
    Adjust the file path (particularly the ...) so that it points to the 'cplex' directory in your CPLEX Studio installation (the one that has subdirectories named "bin", "examples", "include" etc.).
  5. Assuming there were no error messages during installation, you should be good to go.

[1] https://orinanobworld.blogspot.com/2016/11/mip-models-in-r-with-ompr.html

[2] https://orinanobworld.blogspot.com/2020/08/a-group-selection-problem.html

Update: Version 1.4.0 of cplexAPI, released on 2020-09-21, installs correctly against CPLEX 12.10 (and presumably 12.9), at least on my system (Linux Mint).

Saturday, August 29, 2020

A Group Selection Problem

Someone posted an interesting question about nonlinear integer programming with grouped binary variables on Stack Overflow, and it drew multiple responses. The problem is simple to state. You have 52 binary variables $x_i$ partitioned into 13 groups of four each, with a requirement that exactly one variable in each group take the value 1. So the constraints are quite simple:

\begin{align*} x_{1}+\dots+x_{4} & =1\\ x_{5}+\dots+x_{8} & =1\\ \vdots\\ x_{49}+\dots+x_{52} & =1. \end{align*}

The objective function is a cubic function of the form

\[ \left(\alpha\sum_{i}a_{i}x_{i}\right)\times\left(\beta\sum_{j}b_{j}x_{j}+\beta_{0}\right)\times\left(\gamma\sum_{k}c_{k}x_{k}+\gamma_{0}\right) \] where $\alpha = 1166/2000$, $\beta = 1/2100$, $\beta_0 = 0.05$, $\gamma = 1/1500$ and $\gamma_0 = 1.5$. (In the original post, there is a minus sign in front of the function and the author minimizes; for various reasons I am omitting the minus sign and maximizing here.) Not only is the objective nonlinear, it is nonconvex if minimizing (nonconcave if maximizing). The author of the question was working in R.

Fellow blogger Erwin Kalvelagen solved the problem with a variety of nonlinear optimizers, obtaining a solution with objective value -889.346. Alex Fleischer of IBM posted an answer with the same objective value, using a constraint programming model written in OPL and solved with CP Optimizer.

My initial thought was to linearize the objective function by introducing continuous variables $y_{ij} = x_i \cdot x_j$ and $z_{ijk} = x_i \cdot x_j \cdot x_k$ with domain $[0,1]$. Many of those variables can be eliminated, due in part to symmetry ($y_{ij} = y_{ji}$, $z_{ijk} = z_{ikj}=\dots=z_{kji}$) and in part to the observation that $y_{ii}=z_{iii}=x_i$. Also useful is that, for $i<j<k$, $z_{ijk}=x_i \cdot y_{jk}$. I have an R notebook that you can download, in which I build the model using standard linearizations for the product of two binaries, then try to solve it with CPLEX using the Rcplex package (and the Matrix package, which allows a sparse representation of the constraint matrix). The results were, shall we say, unspectacular. With a five minute time limit (much longer than what Erwin or Alex needed), CPLEX found an incumbent with value 886.8748 (not bad but not optimal) and a rather dismal optimality gap of 146.5% (due mainly to a loose and slow-moving bound).
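Concretely, the standard linearization, applied first to $y_{ij}=x_i \cdot x_j$ and then to $z_{ijk}=x_i \cdot y_{jk}$ (valid because $y_{jk}$ lies in $[0,1]$), is

\begin{align*} y_{ij} & \le x_{i}, & y_{ij} & \le x_{j}, & y_{ij} & \ge x_{i}+x_{j}-1,\\ z_{ijk} & \le x_{i}, & z_{ijk} & \le y_{jk}, & z_{ijk} & \ge x_{i}+y_{jk}-1. \end{align*}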

Out of curiosity, I took a second shot using a genetic algorithm and the GA package for R. I was geeked to see that the GA package includes both an island model (using parallel processing) and a permutation variant (which lets me use permutations of the indices 1 to 52 as chromosomes with no extra work on my part). The permutation approach allows me to treat a chromosome as a prioritization of the 52 binary variables, which I decode into a solution $x$ by scanning the $x_i$ in priority order and setting each to 1 if and only if none of the other variables in its group of four has been set to 1. That R notebook is also available for download.
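The decoding step is tiny. Here is a minimal sketch (not verbatim from the notebook), assuming the groups are the consecutive blocks of four indices:

    # Decode a permutation of 1:52 into a feasible x (one 1 per group of four).
    decode <- function(perm) {
      x <- integer(52)
      taken <- logical(13)              # does the group already have its 1?
      for (i in perm) {
        g <- ceiling(i / 4)             # group containing variable i
        if (!taken[g]) {
          x[i] <- 1
          taken[g] <- TRUE
        }
      }
      x
    }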

As a metaheuristic, the GA does not offer a proof of optimality, and in fact may or may not find the optimal solution. With my inspired choice of random number seed (123), I matched Erwin's and Alex's solution (889.3463). The settings I used resulted in a run time of about 36 seconds on my PC, more than half of which was spent after the best solution had been found. It's still slower than what Erwin and Alex achieved, but it is a "pure R" solution, meaning it requires nothing besides open-source R packages.

Sunday, August 23, 2020

Multiobjective Optimization in CPLEX

In my previous post, I discussed multiobjective optimization and ended with a simple example. I'll use this post to discuss some of the new (as of version 12.9) features in CPLEX related to multiobjective optimization, and then apply them to the example from the previous post. My Java code can be downloaded from my GitLab repository.

Currently (meaning as of CPLEX version 12.10), CPLEX supports multiple objectives in linear and integer programs. It allows mixtures of "blended" objective functions (weighted combinations of original criteria) and "lexicographic" hierarchical objectives. Basically, you set one or more hierarchy (priority) levels, and in each one you can have a single criterion or a weighted combination of criteria. So the "classical" preemptive priority approach would involve multiple priority levels with a single criterion in each, while the "classical" weighted combination approach would involve one priority level with a blended objective in it. Overall, you are either maximizing or minimizing, but you can use negative weights for criteria that should go the opposite direction of the rest. In the example here, which is a minimization problem, the third priority level gives maximum provider utilization a weight of +1 (because we want to minimize it) and minimum provider utilization a weight of -1 (because we want to maximize it).

There are some limitations to the use of multiple objectives. The ones I think are of most general interest are the following:

  • objectives and constraints must be linear (no quadratic terms); and
  • all generic callbacks and legacy information callbacks can be used, but other legacy callbacks, and in particular legacy control callbacks (branch callbacks, cut callbacks etc.) cannot be used. So if you need to use callbacks with a multiobjective problem, now would be a good time to learn the new generic callback system.

Every criterion you specify has a priority level and, within that priority level, a weight. A feature that I appreciate, and which I will use in the code, is that you can also specify an absolute and/or a relative tolerance for each criterion. The tolerances tell CPLEX how much it can sacrifice in that criterion to improve lower priority criteria. The default tolerance is zero, meaning higher priority criteria must be optimized before lower priority criteria are even considered. A nonzero tolerance basically tells CPLEX that it is allowed to sacrifice some amount (either an absolute amount or a percentage of the optimal value) in that criterion in order to improve lower priority criteria.

Defining the variables and building the constraints of a multiobjective model is no different from a typical single criterion model. Getting the solution after solving the model is also unchanged. The differences come mainly in how you specify the objectives and how you invoke the solver.

To build the objective function, you need to use one of the several overloads of IloCplex.staticLex(). They all take as their first argument a one-dimensional array of expressions (IloNumExpr[]), and they all return an instance of the new interface IloCplexMultiCriterionExpr. In addition to an array of objective expressions, one of the overloads lets you also specify arrays of weights, priorities and tolerances (absolute and relative). That's the version used in my sample code.

This brings me to a minor inconvenience relative to a conventional single objective problem. Ordinarily, I would use IloCplexModeler.addMinimize(expr) or IloCplexModeler.addMaximize(expr) to add an objective to a model, where expr is an instance of IloNumExpr. I naively thought to do the same here, using the output of staticLex() as the expression, but that is not (currently) supported. There's no overload of addMinimize() or addMaximize() that accepts a multicriterion expression. So it's a three step process: use cplex.staticLex(...) to create the objective and save it to a temporary variable (where cplex is your IloCplex instance); pass that variable to either cplex.minimize(...) or cplex.maximize(...) and save the resulting instance of IloObjective in a temporary variable; and then invoke cplex.add(...) on that variable.

When you are ready to solve the model, you invoke the solve() method on it. You can continue to use the version of solve() that takes no arguments (which is what my code does), or you can use a new version that takes as argument an array of type IloCplex.ParameterSet[]. This allows you to specify different parameter settings for different priority levels.

Other methods you might be interested in are IloCplex.getMultiObjNSolves() (which gets the number of subproblems solved) and IloCplex.getMultiObjInfo() (which lets you look up a variety of things that I really have not explored yet).

The output from my code (log file), which is in the repository, is too lengthy to show here, but if you want you can use this link to open it in a new tab. Here's a synopsis. I first optimized each of the three objective functions separately. (Recall that maximum and minimum provider utilization are blended into one objective.) This gives what is sometimes referred to as the "Utopia point" or "ideal point". This is column "U" in the table below. Next, I solved the prioritized multiobjective problem. The results are in column "O" of the table. Finally, to demonstrate the ability to be flexible with priorities, I resolved the multiobjective problem using a relative tolerance of 0.1 (10%) for the top priority objective (average distance traveled) and 0.05 (5%) for the second priority objective (maximum distance traveled). Those results are in column "F".


                        U        O        F
Avg. distance        14.489   14.489   15.888
Max distance         58.605   58.605   60.000
Utilization spread    0.030    0.267    0.030
Max utilization       0.710    0.880    0.710
Min utilization       0.680    0.613    0.680

There are a few things to note.

  1. The solution to the multiobjective model ("O") achieved the ideal values for the first two objectives. One would expect to match the ideal value on the highest priority objective; matching on the second objective was luck. The third objective (utilization spread) was, not surprisingly, somewhat worse than the ideal value.
  2. Absolute and relative tolerances appear to work the same way that absolute and relative gap tolerances do: if a solution is within either absolute or relative tolerance of the best possible value on a higher priority objective, it can be accepted. In the third run, I set relative tolerances but let the absolute tolerances stay at default values.
  3. The relative tolerances I set in the last run would normally allow CPLEX to accept a solution with an average travel distance as large as $(1 + 0.1)*14.489 = 15.938$ and a maximum travel distance as large as $(1 + 0.05)*58.605 = 61.535$. There is a constraint limiting travel distance to at most 60, though, which supersedes the tolerance setting.
  4. The "flexible" solution (column "F") exceeds the ideal average distance by about 9.7%, hits the cap of 60 on maximum travel distance, and actually achieves the ideal utilization spread. However, without knowing the ideal point you would not realize that last part. I put a fairly short time limit (30 seconds) on the run, and it ended with about a 21% gap due to a very slow-moving best bound.

I'll close with one last observation. At the bottom of the log, after solving the "flexible" variant, you will see the following lines.

Solver status = Unknown.
Objective 0: Status = 101, value = 14.489, bound = 14.489.
Objective 1: Status = 101, value = 58.605, bound = 58.605.
Objective 2: Status = 107, value = 0.030, bound = 0.023.
Final value of average distance traveled = 15.888.
Final value of longest distance traveled = 60.000.
Final value of maximum provider utilization = 0.710.
Final value of minimum provider utilization = 0.680.

The first four lines are printed by CPLEX, the last four by my code. Note the mismatch in the first two criteria between the objective values CPLEX prints (14.489 and 58.605) and the values my code recovers from the final solution (15.888 and 60.000). CPLEX prints the best value it achieved for each objective before moving on to lower priority objectives. When you are using the default tolerances of zero (meaning priorities are absolute), the printed values will match what you get in the final solution. When you specify non-zero tolerances, though, CPLEX may "give back" some of the quality of the higher priority results to improve lower priority results, so you will need to recover the objective values yourself.

Thursday, August 20, 2020

Multiobjective Optimization

Multiobjective optimization (making "optimal" decisions involving multiple, frequently conflicting, criteria) is a big subject. I will only nibble at the fringes of it here. In the next post, I'll describe recent additions to CPLEX that facilitate solving some multiobjective problems.

Among the various approaches to multiobjective problems, two are probably the most common: weighting and prioritization. The first approach is to merge the various criteria into a single one, usually (almost always?) by taking a weighted sum of the criteria. The CPLEX documentation refers to this as a blended objective. For this to make sense, the units of the various criteria really should be commensurable (e.g., all monetary values), but I'm pretty sure having criteria that are not commensurable doesn't stop people from trying. The weights serve two roles. First, they bring the units into some semblance of parity (so if $f()$ is in dollars and $g()$ in millions of dollars, $g()$ gets a weight roughly one millionth the size of the weight of $f()$). Second, they convey relative importance of various criteria.

The second approach is to prioritize the criteria. The solver initially optimizes the highest priority criterion, without regard to any others. Once an optimal value of the highest priority criterion is known, maintaining that value becomes a constraint, and the solver moves to the second highest priority criterion, and so on. The CPLEX documentation refers to this as a lexicographic objective, meaning that the objective function is vector-valued rather than scalar-valued, and optimization means achieving the lexicographically largest or smallest objective vector possible. A variant of this allows a little "slippage" in the value of each criterion, so that for example the solver can accept a solution that is 1% below optimal on the first criterion in return for optimizing the second criterion. A key limitation here is that the solver will trade any amount of degradation in a lower priority criterion, no matter how much, for any improvement in a higher priority criterion, no matter how small.

Although they are not relevant to the recent CPLEX additions, I will mention two other approaches. One is a variant of the priority method, known as goal programming (GP). This was originally developed as an extension of linear programming, but the same general approach can be extended to problems with integer variables. The user sets target levels for each criterion, and then prioritizes them. If a goal is underachieved, work on meeting lower priority goals cannot sacrifice any amount of the higher priority criterion. On the other hand, if a goal is overachieved, any portion of the overachievement can be sacrificed in the quest to reach a lower priority goal. An interesting attribute of goal programming is that the same criterion can be used with more than one goal. Suppose that you are building a GP model allocating a budget to various conservation projects. Your highest priority goal might be to allocate at least 50% of the budget to projects in underserved communities (USCs, to save me typing, with apologies to the universities of South Carolina and Southern California). Your second highest priority goal might be to allocate at least 30% of the budget to projects with matching funds from outside sources. Your third highest priority goal might be to allocate at least 75% of the budget to USCs.

The other approach is to investigate the Pareto frontier, the set of all solutions for which no other solution does as well in all criteria and better in at least one. In essence, you want to present the decision-maker with the entire Pareto frontier and say "here, pick one". In practice, computing the Pareto frontier can be very computationally expensive, and trying to make sense of it might cause the decision maker to melt down.

To close this post, I'll pose a small sample problem and formulate the model for it. Suppose that we have $N$ patients in a health care system and $M$ providers, and that each patient needs to be assigned to a single provider. Provider $j$ has a limit $c_j$ on the number of patients they can handle. (To keep the example simple, and at the expense of some realism, we treat all patients as identical with regard to their capacity consumption.) We are given a matrix $D\in \mathbb{R}^{N\times M}$ of distances from patients to providers, as well as a cap $D_{max}$ on the distance that a patient can be required to travel. There are four criteria to be considered:

  • the average distance patients will travel (minimize, highest priority);
  • the maximum distance any patient must travel (minimize, second highest priority);
  • the maximum utilization of any provider as a fraction of their capacity (minimize, tied for third highest priority); and
  • the minimum utilization of any provider as a fraction of their capacity (maximize, tied for third highest priority).

So we have a mix of three things to minimize and one to maximize, with the last two criteria combining to somewhat level the workload across providers. 

Let $x_{ij}$ be 1 if patient $i$ is assigned to provider $j$ and 0 if not, let $w$ be the longest distance traveled by any patient, let $y_j$ be the fraction of provider $j$'s capacity that is utilized, and let $z_{lo}$ and $z_{hi}$ be the minimum and maximum capacity utilization rates, respectively (where 0 means the provider is unused and 1 means the provider is operating at capacity). The objective expression is $f\in\mathbb{R}^3$, whose lexicographic minimum we seek, where

\[ f=\left[\begin{array}{c} \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M}d_{ij}x_{ij}\\ w\\ z_{hi}-z_{lo} \end{array}\right]. \]

The first and second components of $f$ are the average and maximum client travel distances. The third component is a weighted mix of maximum and minimum provider utilization, where the weights (+1, -1) are equal in magnitude to reflect the equal importance I am assigning to them and the negative coefficient for minimum utilization allows it to be maximized in what is otherwise a minimization problem.


The constraints of the model are easy to state:

\begin{align*} \sum_{j=1}^{M}x_{ij} & =1\quad\forall i\in\left\{ 1,\dots,N\right\} & (1)\\ d_{ij}x_{ij} & \le w\quad\forall i\in\left\{ 1,\dots,N\right\} ,\forall j\in\left\{ 1,\dots,M\right\} & (2)\\ \frac{1}{c_{j}}\sum_{i=1}^{N}x_{ij} & =y_{j}\quad\forall j\in\left\{ 1,\dots,M\right\} & (3)\\ y_{j} & \le z_{hi}\quad\forall j\in\left\{ 1,\dots,M\right\} & (4)\\ y_{j} & \ge z_{lo}\quad\forall j\in\left\{ 1,\dots,M\right\} & (5)\\ x & \in\left\{ 0,1\right\} ^{N\times M} & (6)\\ x_{ij} & =0\quad\forall i,j\ni d_{ij}>D_{max} & (7)\\ y & \in\left[0,1\right]^{M} & (8)\\ z_{hi},z_{lo} & \in\left[0,1\right] & (9)\\ w & \in\left[0,D_{max}\right] & (10) \end{align*} 

  • Constraint (1) ensures that each patient is assigned to exactly one provider.
  • Constraint (2) defines $w$, the maximum distance traveled.
  • Constraint (3) defines the fraction $y_j$ of capacity used at each provider $j$.
  • Constraints (4) and (5) define $z_{lo}$ and $z_{hi}$.
  • Constraints (6), (8), (9) and (10) define variable domains. The upper bound of 1 for $y_j$ in (8) ensures that no provider is assigned more patients than their capacity allows.
  • Constraint (7) enforces the travel distance limit $D_{max}$ by preventing any assignments that would violate the limit (effectively removing those assignment variables from the model).

In the next post, I will show how to solve the model using CPLEX (with, as usual, the Java API).

 

Tuesday, August 18, 2020

A Partitioning Problem

 A recent question on Mathematics Stack Exchange dealt with reducing the number of sets in a partition of a set of items. I'll repeat it here but with slightly different terminology from the original question. You start with $N$ items partitioned into $M$ disjoint sets. Your goal is to generate a smaller partition of $K < M$ sets (which I will henceforth call "collections" to distinguish them from the original sets). It is required that all items from any original set end up in the same collection (i.e., you cannot split the original sets). The criterion for success is that "the new [collection] sizes should be as close to even as possible".

This is easily done with an integer programming model. The author of the question thought about minimizing the variance in the collection sizes, which would work, but I'm fond of keeping things linear, so I will minimize the range of collection sizes. I'll denote the cardinality of original set $i$ by $n_i$. Let $x_{ij}$ be a binary variable which is 1 if set $i\in \lbrace 1,\dots, M\rbrace$ is assigned to collection $j\in \lbrace 1,\dots,K\rbrace$  and 0 if not. Let $y$ and $z$ denote the sizes of the smallest and largest collections. Finally, for $j\in \lbrace 1,\dots,K\rbrace$ let $s_j$ be the size (cardinality) of collection $j$. A MILP model for the problem is the following:

\begin{align} \min\,z-y\\ \textrm{s.t. }\sum_{j=1}^{K}x_{ij} & =1\;\; \forall i\in\left\{ 1,\dots M\right\} \\ \sum_{i=1}^{M}n_{i}x_{ij} & =s_{j}\;\; \forall j\in\left\{ 1,\dots,K\right\} \\ s_{j} & \le z\;\; \forall j\in\left\{ 1,\dots,K\right\} \\ s_{j} & \ge y\;\; \forall j\in\left\{ 1,\dots,K\right\} \\ y,z,s_{\cdot} & \ge0\\ x_{\cdot\cdot} & \in\left\{ 0,1\right\} \end{align} 
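For reference, here is a sketch of that model in OMPR syntax (untested here, and not verbatim from the notebook mentioned below; M, K and the vector n of original set sizes are the problem data):

    library(ompr)
    library(magrittr)   # for the pipe operator

    model <- MIPModel() %>%
      add_variable(x[i, j], i = 1:M, j = 1:K, type = "binary") %>%
      add_variable(s[j], j = 1:K, lb = 0) %>%
      add_variable(y, lb = 0) %>%
      add_variable(z, lb = 0) %>%
      set_objective(z - y, "min") %>%
      add_constraint(sum_expr(x[i, j], j = 1:K) == 1, i = 1:M) %>%           # each set assigned once
      add_constraint(sum_expr(n[i] * x[i, j], i = 1:M) == s[j], j = 1:K) %>% # collection sizes
      add_constraint(s[j] <= z, j = 1:K) %>%                                 # z = largest size
      add_constraint(s[j] >= y, j = 1:K)                                     # y = smallest size
    # then model %>% solve_model(with_ROI(solver = "cplex")) via the ompr.roi package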

The author of the question also indicated an interest in "fast greedy approximate solutions" (and did not specify problem dimensions). The first greedy heuristic that came to my mind was a simple one. Start with $K$ empty collections and sort the original sets into descending size order. Now assign each set, in turn, to the collection that currently has the smallest size (breaking ties whimsically). Why work from largest to smallest set? There will be times when you will want to offset a large set in one collection with two or more smaller sets in another collection, and that will be easier to do if you start big and keep the smaller sets in reserve as long as possible. Rob Pratt, owner of a rather massive reputation score on MSE, correctly noted that this is equivalent to the "longest processing time" heuristic for assigning jobs to machines so as to minimize makespan.
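In R the heuristic takes only a few lines. Here is a minimal sketch (ties in which.min go to the lowest-numbered collection rather than whimsically):

    # n_sizes: vector of original set sizes; K: number of collections.
    greedy_partition <- function(n_sizes, K) {
      coll <- integer(length(n_sizes))       # collection index for each original set
      csize <- numeric(K)                    # current collection sizes
      for (i in order(n_sizes, decreasing = TRUE)) {
        j <- which.min(csize)                # currently smallest collection
        coll[i] <- j
        csize[j] <- csize[j] + n_sizes[i]
      }
      list(assignment = coll, sizes = csize, spread = max(csize) - min(csize))
    }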

I put together an R notebook to test this "greedy" heuristic against the optimization model (solved with CPLEX). The notebook uses Dirk Schumacher's OMPR package for building the MILP model. It in turn uses the ROI package (which requires the Rcplex package) in order to communicate with CPLEX. On a test run using nice, round values of $N$, $M$ and $K$ that all ended in zeros (and in particular where $K$ divided evenly into both $M$ and $N$), the greedy heuristic nearly found the optimal solution. When I switched to less round numbers ($N=5723$, $M=137$, $K=10$), though, the heuristic did not fare as well. It was fast (well under one second on my PC) but it produced a solution where collection sizes ranged from 552 to 582 (a spread of 30), while CPLEX (in about 21 seconds) found an optimal solution where all collections had size either 572 or 573 (spread of 1). So I tacked on a second heuristic to refine the solution of the first heuristic. The second heuristic attempts pairwise swaps of the smallest set from the smallest collection with a larger set from a larger collection (trying collections in descending size order). Swaps are constrained not to leave the second collection (the one donating the larger set) smaller than the first collection started out. The intuition is to shrink the range by making the smallest collection bigger while shrinking the largest collection if possible and, if not, at least some collection that is larger than the smallest one. The new heuristic also ran in well under one second and shrank the range of collection sizes from 30 to 3 -- still not optimal, but likely good enough for the application the original questioner had in mind.

You are free to use the R code (which can be extracted from the notebook linked above) under the Creative Commons license that governs the blog.

Saturday, August 15, 2020

Firefox and the New Blogger Interface

Blogger has a (relatively) new interface, to which I switched a while back. The one major annoyance I found was that clicking the "Preview" button while editing a post did not actually generate a preview. I got a notification (lower left) that the preview was being prepared, and then ... nothing. To get a preview, I had to save my work, exit the edit screen (going back to the Blogger control panel), and do the preview there.

It wasn't just me, either. Checking the Blogger help community, I found a ton of posts about this, on pretty much all operating systems and browsers, with some dated this month. A tip about fixing the problem on Safari worked for me. The key (somewhat obvious in hindsight) is that Blogger needs permission to open a pop-up. This was not entirely obvious to me, since I don't consider opening a tab the same as opening a pop-up, but so be it. In Firefox, with any Blogger screen displayed, click the padlock icon in the URL bar, and under "Permissions" allow the site to open pop-ups.

Other users said they had the same problem with Chrome, which is interesting in that preview works fine for me on Chrome, and I don't recall giving explicit permission there. At any rate, I seem to be back in business.

And yes, I previewed this entry before posting it.


Monday, July 20, 2020

Longest Increasing Subsequence

In a recent blog post (whose title I have shamelessly appropriated), Erwin Kalvelagen discusses a mixed-integer nonlinear programming formulation (along with possible linearizations) for a simple problem from a coding challenge: "Given an unsorted array of integers, find the length of longest increasing subsequence." The challenge stipulates at worst $O(n^2)$ complexity, where $n$ is the length of the original sequence. Erwin suggests the intent of the original question was to use dynamic programming, which makes sense and meets the complexity requirement.

I've been meaning for a while to start fiddling around with binary decision diagrams (BDDs), and this seemed like a good test problem. Decision diagrams originated in computer science, where the application was evaluation of possibly complicated logical expressions, but recently they have made their way into the discrete optimization arena. If you are looking to familiarize yourself with decision diagrams, I can recommend a book by Bergman et al. [1].

Solving this problem with a binary decision diagram is equivalent to solving it with dynamic programming. Let $[x_1, \dots, x_n]$ be the original sequence. Consistent with Erwin, I'll assume that the $x_i$ are nonnegative and that the subsequence extracted must be strictly increasing.

We create a layered digraph in which each node represents the value of the largest (and hence most recent) element in a partial subsequence, and has at most two children. Within a layer, no two nodes have the same state, but nodes in different layers can have the same state. We have $n+2$ layers, where in layer $j\in\lbrace 1,\dots,n \rbrace$ you are deciding whether or not to include $x_j$ in your subsequence. One child, if it exists, represents the state after adding $x_j$ to the subsequence. This child exists only if $x_j$ is greater than the state of the parent node (because the subsequence must be strictly increasing). The other child, which always exists, represents the state when $x_j$ is omitted (which will be the same as the state of the parent node). Layer 1 contains a root node (with state 0), layer $n+1$ contains nodes corresponding to completed subsequences, and layer $n+2$ contains a terminal node (whose state will be the largest element of the chosen subsequence). Actually, you could skip layer $n+1$ and follow layer $n$ with the terminal layer; in my code, I included the extra layer mainly for demonstration purposes (and debugging).

In the previous paragraph, I dealt a card off the bottom of the deck. The state of a node in layer $j$ is the largest element of a partial subsequence based on including or excluding $x_1,\dots,x_{j-1}$. The sneaky part is that more than one subsequence may be represented at that node (since more than one subsequence of $[x_1,\dots,x_{j-1}]$ may contain the same largest element). In addition to the state of a node, we also keep track at each node of the longest path from the root node to that node and the predecessor node along the longest path, where length is defined as the number of yes decisions from the root to that node. So although multiple subsequences may lead to the same node, we only care about one (the longest path, breaking ties arbitrarily). Note that by keeping track of the longest path from root to each node as we build the diagram, we actually solve the underlying problem during the construction of the BDD.

The diagram for the original example ($n=8$) is too big to fit here, so I'll illustrate this using a smaller initial vector: $x=[9, 2, 5, 3]$. The BDD is shown below (as a PDF file, so that you can zoom in or out while maintaining legibility).

The first four layers correspond to decisions on whether to use a sequence entry or not. (The corresponding entries are shown in the right margin.) Nodes "r" and "t" are root and terminus, respectively. The remaining nodes are numbered from 1 to 14. Solid arrows represent decisions to use a value, so for instance the solid arrow from node 4 to node 8 means that 5 ($x_3$) has been added to the subsequence. Dashed arrows represent decisions not to use a value, so the dashed arrow from node 4 to node 7 means that 5 ($x_3$) is not being added to the subsequence. Dotted arrows (from the fifth layer to the sixth) do not represent decisions, they just connect the "leaf" nodes to the terminus.

The green(ish) number to the lower left of a node is the state of the node, which is the largest element included so far in the subsequence. The subsequence at node 4 is just $[2]$ and the state is 2. At node 7, since we skipped the next element, the subsequence and state remain the same. At node 8, the subsequence is now $[2, 5]$ and the state changes to 5.

The red numbers $d_i:p_i$ to the lower right of a node $i$ are the distance (number of solid arcs) from the root to node $i$ along the longest path ($d_i$) and the predecessor of node $i$ on the longest path ($p_i$). Two paths converge at $i=13$: a path $r \dashrightarrow 2 \rightarrow 4 \dashrightarrow 7 \rightarrow 13$ of length 2 and a path $r \dashrightarrow 2 \dashrightarrow 5 \dashrightarrow 9 \rightarrow 13$ of length 1. So the longest path to node 13 has length 2 and predecessor node 7. Backtracking from the terminus (distance 2, predecessor either 12 or 13), we get optimal paths $r \dashrightarrow 2 \rightarrow 4 \rightarrow 8 \dashrightarrow 12 \dashrightarrow t$ (subsequence $[2, 5]$) and $r \dashrightarrow 2 \rightarrow 4 \dashrightarrow 7 \rightarrow 13 \dashrightarrow t$ (subsequence $[2, 3]$), the latter shown in blue.

In addition to the original example from the coding challenge ($n=8$), Erwin included an example with $n=100$ and longest increasing subsequence length 15. (There are multiple optimal solutions to both the original example and the larger one.) Gurobi solved the larger example to proven optimality in one second (probably less, since the output likely rounded up the time). My highly non-optimized Java code solved the $n=100$ example in 6 ms. on my PC (not including the time to print the results).

BDDs can get large in practice, with layers growing combinatorially. In this case, however, that is not a problem. Since the state of a node is the largest value of a subsequence, there can be at most $n$ different states. Given the stipulation that no two nodes in a layer have the same state, that means at most $n$ states in a layer. For Erwin's example with $n=100$, the largest layer in fact contained 66 nodes.

As I said earlier, using the BDD here is equivalent to using dynamic programming. With $n+2$ layers, at most $n$ nodes in a layer, and two operations on each node (figuring out the state and path length of the "yes" child and the "no" child), the solution process is clearly $O(n^2)$.
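My actual code is in Java, but the equivalent dynamic program is short enough to sketch in R, just to make the recursion concrete:

    # Longest strictly increasing subsequence of x via the O(n^2) dynamic program.
    lis <- function(x) {
      n <- length(x)
      len <- rep(1L, n)        # len[i]: longest increasing subsequence ending at i
      pred <- rep(0L, n)       # predecessor index on that subsequence (0 = none)
      for (i in seq_len(n)) {
        for (j in seq_len(i - 1L)) {
          if (x[j] < x[i] && len[j] + 1L > len[i]) {
            len[i] <- len[j] + 1L
            pred[i] <- j
          }
        }
      }
      i <- which.max(len)      # backtrack from the end of a longest subsequence
      sub <- integer(0)
      while (i > 0) { sub <- c(i, sub); i <- pred[i] }
      x[sub]
    }
    lis(c(9, 2, 5, 3))         # returns 2 5, matching the small example above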

[1] D. Bergman, A. A. Cire, W.-J. van Hoeve and J. Hooker. Decision Diagrams for Optimization (B. O’Sullivan and M. Wooldridge, eds.).  Springer International Publishing AG, 2016.

Sunday, July 12, 2020

Mint 20 Upgrade Hiccup

Okay, "hiccup" might be an understatement. Something beginning with "cluster" might be more appropriate.

I tried to upgrade my MythTV backend box from Linux Mint 19.3 to Mint 20, using the Mint upgrade tool. Even on a fairly fast machine with a fast Internet connection and not much installed on it (MythTV plus the applications that come with Mint), this takes hours. A seemingly endless series of commands scrolls in a terminal, and I don't dare walk away for too long, lest the process stall waiting for some input from me (it periodically needs my password) or because of a glitch.

Speaking of glitches, I noticed that the scrolling stopped and the process seemed to freeze just after a couple of lines about installing symlinks for MySQL and MariaDB, two database programs. MariaDB, which I've never had installed before, is apparently a fork of MySQL. MythTV uses MySQL as its database manager. Before upgrading, I had killed the MythTV back end, but I noticed that the MySQL server was still running. On a hunch, I opened a separate terminal and shut down the MySQL server. Sure enough, the upgrade process resumed, with a message about a cancelled job or something, which I think referred to MariaDB. Whether this contributed to the unfolding disaster I do not know.

After a reboot, the good news was that everything that should start did start, and the frontend was able to see and play the recorded TV shows. The bad news was that (a) the backend got very busy doing a lot of (alleged) transcoding and scanning for commercials that should not have been necessary (having already been done on all recorded shows) and (b) I could not shut down, because the backend thought it was in a "shutdown/wakeup period", meaning (I think) that it thought it needed to start recording soon -- even though the next scheduled recording was not for a couple of days, and the front end was showing the correct date and time for the next recording. So I think the switch from MySQL to MariaDB somehow screwed up something in the database.

From there, things got worse. I had backed up the database, so I tried to restore the backup (using a MythTV script for just that purpose). The script failed because the database already contained data. Following suggestions online, I dropped the relevant table from the database and tried to run an SQL script (mc.sql) to restore a blank version of the table. No joy -- I needed the root MySQL password, and no password I tried would work. There is allegedly a way to reset the root password in a MySQL database, but that didn't work either, and in fact trying to shut the server down using "sudo service mysql stop" did not work (!). The only way to get rid of the service was to use "sudo pkill mysqld".

Fortunately, timeshift was able to restore the system to its pre-upgrade state (with a little help from a backup of the MythTV database and recordings folder). For reasons I do not understand (which describes pretty much everything discussed here), restoring the database backup did not cause MythTV to remember this week's schedule of recordings, but as soon as I reentered one (using MythWeb) it remembered the rest. And I thought my memory was acting a bit quirky ...

Monday, June 8, 2020

Zoom on Linux

Thanks to the pandemic, I've been spending a lot of time on Zoom lately, and I'm grateful to have it. The Zoom Linux client seems to be almost as good as the other clients. The only thing that I know is missing is virtual backgrounds, which I do not particularly miss.

That said, I did run into one minor bug (I think). It has to do with what I think is called the "panel". (I've found it strangely hard to confirm this, despite a good bit of searching.) What I'm referring to is a widget that sits off to one side (and can be moved by me) when Zoom is running full screen and a presenter is holding the "spotlight" (owning the bulk of the window). The panel has four buttons at the top that let me choose its configuration. Three of them will show just my video (small, medium or large). The fourth one will show a stack of four videos, each a participant (excluding the presenter), with mine first and the other three selected by some rule I cannot fathom. (Empirical evidence suggests it is not selecting the three best looking participants.) Showing my camera image isn't exactly critical, but it's somewhat reassuring (meaning I know my camera is still working, and I'm in its field of view).

I'm running Zoom on both a desktop and a laptop, the latter exclusively for online taekwondo classes. On my desktop, the panel behaves as one would expect. On my laptop, however, the panel window assigned to my camera was intermittently blanking out. Randomly moving the cursor around would bring the image back (temporarily). This happened regardless of what panel configuration or size I chose.

On a hunch, I disabled the screen lock option on the laptop (which would normally blank the screen or show a lock screen if the laptop sat idle for too long). To be clear, even with no keyboard/mouse input from me, the laptop was not showing the lock screen or sleeping -- the main presenter was never interrupted. It was just my camera feed that seemed to be napping. That said, disabling the lock screen seems to have helped somewhat. If the panel is showing only my camera, it still blanks after some amount of "idle" time; but if the panel is set to show a stack of four cameras (including mine), mine does not seem to blank out any more.

It's still a mystery to me why mine blanks when it's the only one in the panel, although it's clear there's a connection to my not providing any keyboard or mouse input for a while. The blanking never happens on my desktop. They're both running Linux Mint (the laptop having a somewhat newer version), and they're both running the latest version of the Zoom client. The laptop has a built-in camera whereas the desktop has a USB webcam. The desktop, unsurprisingly, has a faster processor, and probably better graphics. My typical desktop Zoom usage does not involve extended periods of inactivity on my part (if I'm not doing something useful as part of the call, I'm surreptitiously checking email or playing Minesweeper), so the lack of blanking on the laptop may just be lack of opportunity. It might be a matter of the desktop having better hardware. It might just be some minor computer deity figuring it's more entertaining to annoy me during a workout than during a meeting. Anyway, turning off the screensaver got rid of at least part of the problem. If anyone knows the real reason and/or the right fix, please leave a comment.

Monday, June 1, 2020

An Idea for an Agent-Based Simulation

I don't do agent-based simulations (or any other kind of simulations these days), so this is a suggested research topic for someone who does.

A number of supermarkets and other large stores have instituted one-way lanes, presumably thinking this will improve physical distancing of customers. I just returned from my local Kroger supermarket, where the narrower aisles have been marked one-way, alternating directions, for a few weeks now. The wider aisles remain bidirectional (or multidirectional, the way some people roll). Despite having been fairly clearly marked for weeks, I would say that close to half of all shoppers (possibly more than half) are either unaware of the direction limits or disregard them. Kroger offers a service where you order online, their employees grab and pack the food (using rather large, multilevel rolling carts), and then bring it out to your waiting car. Kroger refers to this as "Pickup" (formerly "Clicklist"). Interestingly, somewhere between 70% and 90% of the employees doing "Pickup" shopping that I encountered today were going the wrong direction on the directional aisles.

My perhaps naive thought is that unidirectional aisles are somewhere between useless and counterproductive, even if people obey the rules. That's based on two observations:
  1. the number of people per hour needing stuff from aisle 13 is unaffected by any directional restrictions on the aisle; and
  2. obeying the rules means running up extra miles on the cart, as the shopper zips down aisle 12 (which contains nothing he wants) in order to get to the other end, so that he can cruise aisle 13 in the designated direction.
Of course, OR types could mitigate item 2 by solving TSPs on the (partially directional) supermarket network, charitably (and in my case incorrectly) assuming that they knew which aisle held each item on their shopping list (and, for that matter, charitably assuming that they had a shopping list). I doubt any of us do have supermarket TSPs lying around, and that's beyond the skill set of most other people. So we can assume that shoppers arrive with a list, (mostly) pick up all items from the same aisle in one pass through it, and generally visit aisles in a vaguely ordered way (with occasional doubling back).

If I'm right, item 1 means that time spent stationary near other shoppers is not influenced by the one-way rules, and item 2 means that time spent passing shoppers increases (because shoppers have to log extra wasted miles just getting to the correct ends of aisles). So if any of you simulators out there would care to investigate this (and perhaps prove my point), knock yourselves out, and please let me know what you find.

Addendum: I heard an interview with Dr. Samuel Stanley, the current president of Michigan State University, regarding plans for reopening in Fall 2020. During the interview, he mentioned something about creating one-way pedestrian flows on campus. (Good luck with that -- herding undergrads makes herding cats look trivial.) The logic he expressed was that it would reduce face-to-face encounters among pedestrians. Dr. Stanley's academic field is infectious diseases, so presumably he knows whereof he speaks. On the other hand, my impression from various articles and interviews is that droplets emitted by COVID-infected people can linger in the air for a while. So there is a trade-off with one-way routing: an infected person passes fewer people face-to-face, but presumably spreads the virus over a greater area due to longer routes. Has anyone actually studied the trade-off?

Sunday, May 31, 2020

A Simple Constrained Optimization

A question posted to OR Stack Exchange, "Linear optimization problem with user-defined cost function", caught my eye. The question has gone through multiple edits, and the title is a bit misleading, in that the objective function is in at least some cases nonlinear. The constraints are both linear and very simple. The user is looking for weights to assign to $n$ vectors, and the weights $x_i$ satisfy $$\sum_{i=1}^n x_i = 1\\x \ge 0.$$ Emma, the original poster, put a working example (in Python) on GitHub. The simplified version of her cost function includes division of one linear expression by another, with an adjustment to deal with division by zero errors (converting the resulting NaN to 0).

The feasible region of the problem is a simplex, which triggered a memory of the Nelder-Mead algorithm (which was known as the "Nelder-Mead simplex algorithm" when I learned it, despite confusion with Dantzig's simplex algorithm for linear programs). The Nelder-Mead algorithm, published in 1965, attempts to optimize a nonlinear function (with no guarantee of convergence to the optimum in general), using only function evaluations (no derivatives). It is based on an earlier algorithm (by Spendley, Hext and Himsworth, in 1962), and I'm pretty sure there have been tweaks to Nelder-Mead over the subsequent years.

The Nelder-Mead algorithm is designed for unconstrained problems. That said, my somewhat fuzzy recollection was that Nelder-Mead starts with a simplex (hopefully containing an optimal solution) and progressively shrinks the uncertainty region, each time getting a simplex that is a subset of the previous simplex. So if we start with the unit simplex $\lbrace (1,0,0,\dots,0,0), (0,1,0,\dots,0,0),\dots,(0,0,0,\dots,0,1)\rbrace$, which is the full feasible region, every subsequent simplex should be comprised of feasible points. It turns out I was not quite right. Depending on the parameter values you use, there is one step (expansion) that can leave the current simplex and thus possibly violate the sign restrictions. That's easily fixed, though, by checking the step size and shrinking it if necessary.

There are several R packages containing a Nelder-Mead function, but most of them look like they are designed for univariate optimization, and the one I could find that was multivariate and allowed specification of the initial simplex would not work for me. So I coded my own, based on the Wikipedia page, which was easy enough. I used what that page describes as typical values for the four step size parameters. It hit my convergence limit (too small a change in the simplex) after 29 iterations, producing a solution that appears to be not quite optimal but close.

Just for comparison purposes, I thought I would try a genetic algorithm (GA). GAs are generally not designed for constrained problems, although there are exceptions. (Search "random key genetic algorithm" to find one.) That's easy to finesse in our case. Getting a GA to produce only nonnegative values is easy: you just have to require the code that generates new solutions (used to seed the initial population, and possibly for immigration) and the code that mutates existing solutions to use only nonnegative numbers. That might actually be the default in a lot of GA libraries. "Crossover" (their term for solutions having children) takes care of itself. So we just need to enforce the lone equation constraint, which we can do by redefining the objective function. We allow the GA to produce solutions without regard to the sum of their components, and instead optimize the function $$\hat{f}(x)=f\left(\frac{x}{\sum_{i=1}^n x_i}\right)$$where $f()$ is the original objective function.

R has multiple GA packages. I used the `genalg` package in my experiments. Running 100 generations with a population of size 200 took several seconds (so longer than Nelder-Mead took), but it produced a somewhat better solution. Since the GA is a randomized algorithm, running it repeatedly will produce different results, some worse, possibly some better. You could also try restarting Nelder-Mead when the polytope gets too small, starting from a new polytope centered around the current optimum, which might possibly improve on the solution obtained.
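
For what it's worth, here is roughly what the normalization trick looks like with `genalg` -- a toy sketch with a made-up objective standing in for the real cost function, not the code in the notebook linked below:

```r
library(genalg)

# Made-up stand-in for the actual objective; any function of the weights would do.
f <- function(w) sum((w - c(0.5, 0.3, 0.1, 0.1))^2)

# Wrapped objective: rescale the chromosome to sum to 1, then evaluate the original objective.
fhat <- function(x) {
  s <- sum(x)
  if (s < 1e-12) return(1e10)  # guard against an (unlikely) all-zero chromosome
  f(x / s)
}

# genalg minimizes the evaluation function by default.
ga <- rbga(stringMin = rep(0, 4), stringMax = rep(1, 4),
           popSize = 200, iters = 100, evalFunc = fhat)

best <- ga$population[which.min(ga$evaluations), ]
best / sum(best)  # recover weights on the unit simplex
```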

This was all mostly just to satisfy my curiosity. My R code for both the Nelder-Mead and GA approaches is in an R notebook you are free to download.


Saturday, May 23, 2020

Of ICUs and Simulations

I'm a fan of the INFORMS "Resoundingly Human" podcasts, particularly since they changed the format to shorter (~15 minute) installments. I just listened to a longer entry (40+ minutes) about the use of OR (and specifically simulation models) to help with hospital planning during the pandemic. (Grrrr. I'd hoped to keep the word "pandemic" out of my blog. Oh well.) The title is "The dangers of overcrowding: Helping ICUs preserve essential bed space", and the guest is Frances Sneddon, CTO of Simul8 Corporation. I thought the content was interesting, and Frances was very enthusiastic presenting it, so if you have any interest in simulation and/or how OR can help during the (here it comes again) pandemic, I do recommend giving it a listen.

One thing that definitely got my attention was Frances's emphasis on building simulation models in a rapid / interactive / iterative / agile way. ("Rapid" was very much her word, and she used "agile" toward the end of the podcast. "Interactive" and "iterative" are my words for the process she described.) Basically (again with my paraphrasing), she said that the best outcomes occur when simulations are born from discussions among users and modelers where the users ask questions, followed by fairly rapid building and running of a new/revised model, followed by discussions with the users and more of the same. Frances at one point drew an analogy to detective work, where running one simulation lets you ferret out clues that lead to questions that lead to the next model.

To some extent, I think the same likely holds true of other applications of OR in the real world, including optimization. Having one conversation with the end users, wandering off into a cave to build a model, and then presenting it as a fait accompli is probably not a good way to build trust in the model results and may well leave the user with a model that fundamentally does not get the job done. As a small example, I once worked on a model for assigning school-age children to recreational league athletic teams. The first version of the model satisfied all stated constraints, but the user told me it would not work. Some parents have multiple children enrolled in the league, and it is unworkable to expect them to ferry their kids to different teams playing or practicing in different places. So siblings must go on the same team. (There were other constraints that emerged after the initial specification, but I won't bore you with the details.)

So on the one hand, I'm predisposed to agree at least somewhat with what Frances said. Here comes the cognitive dissonance (as my erstwhile management colleagues would say). Once upon a time I actually taught simulation modeling. (I won't say exactly when, but in the podcast Frances mentions having been in the OR field for 20 years, and how saying that makes her feel old. The last time I taught simulation was before she entered the field.) Two significant issues, at least back then, were verifying and validating simulation models. I suspect that verification (essentially checking the correctness of the code, given the model) is a lot easier now, particularly if you are using GUI-based model design tools, where the coding looks a lot like drawing a flow chart from a palette of blocks. The model likely was also presented as a flow chart, so comparing code to model should be straightforward (put the two flow charts side by side). Validation, the process of confirming that the model adequately represents the real system, may or may not be easier than in the past. To some extent you can achieve "face validity" by talking through the assumptions of the model with the users during those interactive sessions, helped by a flow chart.

Back in my day, we also talked about historical validation (running the model with historical inputs and seeing if the results reasonably tracked with historical outputs). When you are trying to answer "what if" questions (what if we reconfigure the ICU this way, or change admissions this way, or ...?), you likely don't have historical data for the alternate configurations, but you can at least validate that the model adequately captures the "base case", whatever that is. Also, "what if" questions are likely to lead you down paths for which you lack hard data for parameter estimates. What if we build a tent hospital in Central Park (which has never been done before)? What do we use for the rate at which patients experience allergy attacks (from plant life in the park that simply does not exist inside the hospital)? My guess is that your only recourse is to run the simulation for multiple values of the mystery parameter, which leads us to a geometric explosion of scenarios as we pile on uncertain parameters. So my question is this: in an interactive loop (meet with users - hack model - collect runs / synthesize output - repeat), can we take reasonable care to preserve validity without exhausting the parties involved, or overloading them with possibilities to the point that there is no actual take-away?

Informed opinions are welcome in the comments section. (It's an election year here, so I'm already maxed out on uninformed opinions.)

Friday, April 24, 2020

Generating Random Digraphs

In a recent post, OR consultant and blogger Erwin Kalvelagen discussed generating a random sparse network in GAMS. More specifically, he starts with a fixed set of nodes and a desired number of arcs, and randomly generates approximately that number of arcs. His best results, in terms of execution time, came from exporting the dimensions to R, running a script there, writing out the arcs and importing them back into GAMS.

There are three possible issues with the script. Erwin acknowledged the first, which applies to all his approaches: after removing duplicates, he wound up with fewer than the targeted number of arcs. In many applications this would not be a problem, since you would be looking for "about 1% density" rather than "exactly 1% density". Still, there might be times when you need a specific number of arcs, period. You could supplement Erwin's method with a loop that would test the number of arcs and, if short, would generate more arcs, remove duplicates, add the survivors to the network and repeat.

The second possible issue is the occurrence of self-loops (arcs with head and tail the same, such as (5, 5)). Again, this may or may not be a problem in practice, depending on the application. I rarely encounter network applications where self-loops are expected, or even tolerated. Again, you could modify Erwin's code easily to remove self-loops, and it would not increase execution time much.
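
To make that concrete, here is one way the generate/clean/top-up loop might look in R (my sketch, not Erwin's code): keep drawing random arcs, drop self-loops and duplicates, and repeat until the target count is reached.

```r
set.seed(123)
n <- 5000                      # number of nodes
target <- round(0.01 * n^2)    # desired number of arcs (1% density)

arcs <- data.frame(from = integer(0), to = integer(0))
while (nrow(arcs) < target) {
  need <- target - nrow(arcs)
  draw <- data.frame(from = sample.int(n, need, replace = TRUE),
                     to   = sample.int(n, need, replace = TRUE))
  draw <- draw[draw$from != draw$to, ]   # remove self-loops
  arcs <- unique(rbind(arcs, draw))      # remove duplicates (within and across batches)
}
nrow(arcs)  # exactly `target`
```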

The third possible issue is that some nodes may be "orphans" (no arcs in or out), and others may be accessible only one way (either inward degree 0 or outward degree 0). Once again, the application will dictate whether this is a matter of concern.

I decided to test a somewhat different approach to generating the network (using R). It has the advantage of guaranteeing the targeted number of arcs, with no self-loops. (It does not address the issue of orphan nodes.) It has the disadvantage of being slower than Erwin's algorithm (but by what I would call a tolerable amount). My approach is based on assigning an integer index to every possible arc. Assume there are $n$ nodes, indexed $0, \dots, n-1$. (Erwin uses 1-based indexing, but it is trivial to adjust from 0-based to 1-based after the network has been generated.) There are $n^2$ arcs, including self-loops, indexed $0,\dots,n^2-1$. The arc with index $k$ is given by $$f(k) = (k \div n, k \mod n),$$where $\div$ denotes the integer quotient (so that $7 \div 3 = 2$). A self-loop is an arc whose index $k$ satisfies $k\div n = k \mod n$; those are precisely the arcs with indices $k=m(n + 1)$ for $m=0,\dots,n-1$. So my version of the algorithm is to start with the index set $\lbrace 0,\dots,n^2-1\rbrace$, remove the indices $0, n+1, 2n+2,\dots, n^2-1$, take a random subset of the survivors and apply $f()$ to them.
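
Here is a scaled-down sketch of that indexing scheme in R (the actual comparison below uses $n=5000$; I use a small $n$ here just to keep the illustration light, since materializing all $n^2$ indices gets memory-hungry):

```r
set.seed(321)
n <- 100                        # number of nodes (scaled down for illustration)
target <- round(0.01 * n^2)     # desired number of arcs

loops <- (0:(n - 1)) * (n + 1)              # self-loop indices: 0, n+1, 2(n+1), ...
candidates <- setdiff(0:(n^2 - 1), loops)   # indices of all arcs that are not self-loops
ix <- sample(candidates, target)            # exactly `target` distinct non-loop indices

# f(k) = (k div n, k mod n), using 0-based node labels
arcs <- data.frame(tail = ix %/% n, head = ix %% n)
```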

I have an R Notebook that compares the two algorithms, using the same dimensions Erwin had: $n=5000$ nodes, with a target density of 1% (250,000 arcs). Timing is somewhat random, even though I set a fixed random number seed for each algorithm. The notebook includes both code and output. As expected, my code gets the targeted number of arcs, with no self-loops. Erwin's code, as expected, comes up a bit short on the number of arcs, and contains a few (easily removed) self-loops. Somewhat interestingly, in my test runs every node had both inward and outward degree at least 1 for both algorithms. I think that is a combination of a fairly large arc count and a bit of luck (the required amount of luck decreasing as the network density increases). If orphans, or nodes with either no outbound or no inbound arcs, turn out to be problems, there is a fairly easy fix for both methods. First, randomly generate either one or two arcs incident on each node (depending on whether you need both inward and outward arcs everywhere). Then generate the necessary number of additional arcs by adjusting the sample size. As before, you might come up a few arcs short with Erwin's algorithm (unless you include a loop to add arcs until the target is reached). In my algorithm, you can easily calculate the indices of the initial set of arcs (the index of arc $(i,j)$ is $n\times i + j$) and then just remove those indices at the same time that you remove the indices of the self-loops, before generating the remaining arcs.

Tuesday, April 21, 2020

A CP Model for Toasting Bread

A question on Mathematics Stack Exchange deals with a problem (apparently from the book "Thinking Mathematically") about toasting bread on a grill. The grill can hold two slices at a time and can only toast one side of each at a time. You have three slices to toast, and the issue is to figure out how to do it in the minimum possible amount of time (what operations management people refer to as the makespan).

The questioner had a solution that I was able to prove is optimal, using a constraint programming (CP) model that I coded using the Java API to IBM's CPOptimizer (part of the CPLEX Studio product). I won't swear my model is elegant or efficient, since I'm pretty new to CPO, but I think it is correct. If anyone getting started with CPO and the Java API wants to see the source code, it is available in my repository. I'll describe a few key aspects below.

I assumed in my model that there is a single cook. The fundamental components of the model are CPO "interval variables" (instances of IloIntervalVar) for each task (inserting a slice, toasting one side, removing a slice, flipping a slice) along with a dummy task for being done and a placeholder task I called "reversing". Interval variables represent time spans during which tasks are done.

In the problem, there are two ways to get from toasting the front of a slice to toasting the back: you can leave it on the grill and flip it; or you can remove it and (later) replace it with the other side up. Since I didn't know a priori which slices will be handled which way, I created interval variables for removing each slice after the first side, replacing each slice with the second side up, and flipping each slice. Those variables are declared optional, meaning each interval may or may not show up in the solution. For each slice, there is an interval variable for the "reversing" task that is not optional. Each slice has to be reversed, one way or the other. The tasks for replacing a slice (after removing it) and for flipping the slice are listed as alternatives for the reversing task for that slice, which means exactly one of flipping or replacing must be in the solution. Separate constraints ensure that a slice is reinserted if and only if it was removed after the first side toasted. Those constraints use the IloCP.presenceOf function to test whether the remove and reinsert intervals are present, and set the results equal (so both present or neither present).

The sequencing of operations (insert, toast first side, reverse, toast second side, remove) is enforced through a bunch of constraints that use IloCP.endBeforeStart (which says the first argument has to end before the second argument starts). The dummy "done" task is sequenced to start only after each slice has been removed for the final time. I'm pretty sure the objective value (the time everything is done) could be handled other ways, but I tend to think of completion as a zero-length task.

The cook can only do one thing at a time. This is handled using the IloCP.noOverlap function. It is passed a list of every interval that requires the cook's attention (everything but the actual toasting tasks), and prevents any of those intervals from overlapping with any other.

Finally, I need to prevent more than two slices from occupying the grill at any one time. The noOverlap function is no help here. Instead, I use an instance of IloCumulFunctionExpr, which represents a function over time that changes as intervals begin and end. In this case, the function measures occupancy. This is handled by treating the usage as a combination of step functions (IloCP.stepAtStart and IloCP.stepAtEnd). Usage steps up by one at the start of the task of inserting a slice and steps down by one at the end of the task of removing a slice. (Toasting and flipping have no effect on occupancy.) The Javadoc for the relevant functions is a bit, um, sparse, essentially saying nothing about the step height argument. Thus I discovered the hard way (through error messages) that adding a step with height -1 when a slice was removed was not acceptable. Instead, I had to subtract a step of height +1.

Although it was not really necessary on a problem this small, I removed some symmetry resulting from the arbitrary ordering of the slices by setting precedence constraints saying that slice 1 is started before slice 2, which in turn is started before slice 3.

It is possible to model the problem as an integer program, and in fact I initially leaned that direction. The IP model, however, would be bulkier and less expressive (which would make it more prone to logic errors), and quite possibly would be slower to solve. CPOptimizer is designed specifically with scheduling problems in mind, so it is the better tool for this particular job.

Friday, April 17, 2020

Objective Constraints (Again)

Long ago, I did a couple of posts [1, 2] about constraints designed to bound objective functions. We are referring here to constraints that explicitly bound the value of the objective function in an integer or mixed-integer linear program. The typical application is when the user is minimizing $f(x)$ subject to $x\in X$ and specifies a bound $f(x) \le d$. (If maximizing, the inequality becomes $f(x)\ge d$.) The reason for doing so is to help the solver prune inferior nodes (nodes where $f(x) > d$ when minimizing) faster.

One way to accomplish the goal is to set a feasible starting solution $x^0 \in X$ for which $f(x)\le d$. This of course requires you to know such a solution. Also, setting a starting solution, even a good one, will likely steer the solver in a different direction than what it would have taken without the starting solution (meaning it will build a different tree), and this can wind up either faster or slower than not having the start, depending on where you sit on Santa's naughty/nice list and assorted random factors. (Asserting the bound by any of the other methods listed below can also have unintended consequences. Pretty much anything you do with a MIP can have unintended consequences.)

Assuming you have a bound in mind but not a starting solution, you have a few options. The main takeaways from those two posts were basically the following.
  1. If your solver has the capability, your best bet is probably to specify the bound via a parameter. (CPLEX has the "upper cutoff" parameter for min problems and the "lower cutoff" parameter for max problems to do just this.)
  2. Failing that, you can introduce a variable $z$ to represent your objective function, add a defining constraint $z = f(x)$, minimize $z$ and then specify $d$ as an upper bound for $z$. This may slow the solver some (for reasons explained in the prior posts) but is likely not as bad as the last option.
  3. The last option, which is the most obvious (and thus one users gravitate to), is to add the constraint $f(x) \le d$ to the model. This can slow the solver down noticeably.
The short version of why the last option is undesirable is that if the last constraint is not binding  (which will happen if $d$ is not the optimal value and the solver has found an optimal or near optimal solution), it is just excess baggage. If it is binding, it can cause dual degeneracy.

Someone recently asked about this, and I waved my hand and invoked "dual degeneracy", but I'm not sure how clear I was. So I thought I would augment the two previous posts with a small example.

Suppose that we are solving a MIP model, and at some node we are staring at the following LP relaxation:$$\begin{alignat*}{5} \min & {}-{}5x_{1} & {}+{}40x_{2} & {}-{}5x_{3} & {}+{}5x_{4}\\ \textrm{s.t.} & \phantom{\{\}-}x_{1} & {}-{}\phantom{4}6x_{2} & & {}-{}3x_{4} & {}+{}s_{1} & & & =-3\\ & \phantom{\{\}-}x_{1} & {}-{}\phantom{4}2x_{2} & {}+\phantom{5}{}x_{3} & {}+{}\phantom{4}x_{4} & & {}+{}s_{2} & & =\phantom{-}0\\ & {}-{}5x_{1} & {}+{}40x_{2} & {}-{}5x_{3} & {}+{}5x_{4} & & & {}+{}s_{3} & =-6 &\quad (*)\end{alignat*}$$where the variables are nonnegative, the $s$ variables are slacks, and the constraint (*) is our way of imposing an upper bound of -6 on the objective function. In matrix terms the problem is\begin{align} \min\quad & \bar{c}'\bar{x}\\ \textrm{s.t.}\quad & \bar{A}\bar{x}=\bar{b}\\ & \bar{x}\ge0 \end{align} with $\bar{x}=(x_1,\dots,x_4,s_1,\dots,s_3)'$, $\bar{c}=(-5,40,-5,5,0,0,0)'$, $\bar{b}=(-3,0,-6)'$ and $$\bar{A}=\left[\begin{array}{rrrrrrr} 1 & -6 & 0 & -3 & 1 & 0 & 0\\ 1 & -2 & 1 & 1 & 0 & 1 & 0\\ -5 & 40 & -5 & 5 & 0 & 0 & 1 \end{array}\right].$$The initial basis would be the slack variables, giving us an infeasible solution $x=0$, $s=(-3,0,-6)$ with reduced costs $r = \bar{c}$. The negative values of $s_1$ and $s_3$ cause the infeasibility.

MIP solvers commonly use the dual simplex method to eliminate infeasibility in a node LP. Dual simplex would pivot in the row $i$ with the most negative right-hand side value $\bar{b}_i$, and in the column $j$ for which the ratio $r_j/\bar{a}_{ij}$ is minimal among those where $\bar{a}_{ij}\lt 0$. Here $i=3$ and $j$ is either 1 or 3 (the ratio in both column 1 and column 3 being $-5/-5=1$). Suppose that the solver chooses column 1, making the new basis (in row order) $(s_1, s_2, x_1).$ After the pivot, the reduced cost vector becomes $\hat{r}=(0,0,0,0,0,0,-1)'$, the new right-hand side vector is $\hat{b}=(-4.2, -1.2, 1.2)'$, and the new constraint matrix is $$\hat{A} = \left[\begin{array}{rrrrrrr} 0 & 2 & -1 & -2 & 1 & 0 & 0.2\\ 0 & 6 & 0 & 2 & 0 & 1 & 0.2\\ 1 & -8 & 1 & -1 & 0 & 0 & -0.2 \end{array}\right].$$The solution is still infeasible, and dual simplex will look to pivot in row 1 (where $\hat{b}$ is most negative). There are two possible pivot columns, columns 3 and 4, but the ratio used to distinguish them is zero in both cases because the reduced cost vector is all zeros (except for $s_3$, the slack in the objective constraint).

The same thing happens if we pivot in column 3 rather than column 1, and in fact it is possible to show that the reduced cost vector will be all zeros, other than the entry for the slack in the objective constraint, as long as that slack is nonbasic. Since that slack variable will typically be nonbasic so long as the constraint is binding, and the constraint is useful only when binding, we can expect to see a lot of LPs where this occurs. The tie is survivable (we've already seen one tie for pivot column), but picture this occurring where there are many dual pivots required, with perhaps many eligible columns (negative coefficients) for each pivot, and they all have ratio 0. The solver will be flying somewhat blind when it picks pivot columns, which could plausibly slow things down.

References


[1] "Objective Functions Make Poor Constraints"
[2] "Objective Constraints: The Sequel"

Saturday, April 4, 2020

Tangents v. Secants Part II

This is a continuation of a recent post ("Approximating Nonlinear Functions: Tangents v. Secants") on how to work a nonlinear function into a mixed-integer linear programming model. As before, I'm sticking to functions of one variable. To recap the take-aways from that post, there are basically four ways I know to approximate a nonlinear function:
  1. use a piecewise-linear function based on secants;
  2. use a piecewise-linear function based on tangents;
  3. use a surrogate variable that is bounded (below if the function is convex, above if the function is concave) by a collection of linear functions derived from tangents; or 
  4. use the third technique plus a callback that adds additional linear tangent functions whenever a candidate solution underestimates (convex case) or overestimates (concave case) the true value of the function.
The first two methods apply generally, meaning that the function need not be convex or concave, and the constraint it appears in need not be a particular type ($\ge$, $\le$, $=$). The third and fourth methods only work when the function is convex or concave. For convex functions, tangents will underestimate the true value and secants will overestimate it; for concave functions, the opposite is true. For specific cases (convex function in a $\le$ constraint, concave function in a $\ge$ constraint), one of secants or tangents will produce feasible but potentially suboptimal solutions while the other will produce superoptimal but potentially infeasible solutions.

Before going on, I want to make two observations that I should have made in the first post. First, for the specific cases I just mentioned, if you solve the problem twice, once using secants and once using tangents, the true optimal objective value will fall between the values of the two solutions, so you will have an idea of how close to optimal the possibly suboptimal solution is. (I'll illustrate this below.) For nonlinear functions in equality constraints, or for functions that are neither convex nor concave, this would not work. Second, if the argument to the nonlinear function is integer-valued, it makes sense to construct a piecewise-linear function (first two options) with integer-valued break points or to construct tangents (third option) at integer-valued points. That way, you are guaranteed correct function values at least at some points in the domain of the function. This is easy to do with secants but considerably more work with tangents.

I have one more observation to make before getting to an example. In the fourth method I listed, with a convex function in a $\le$ constraint or a concave function in a $\ge$ constraint, if the solver finishes the search with an "optimal" solution, the solution will really be optimal. These are the cases where we would normally risk superoptimality, but the callback will prevent that from happening.

At this point, I'm going to present an example of one possible scenario. The problem is to select repeating order quantities for a collection of products. In the model to follow, capital letters will be parameters and lower case letters will be indices or variables. We start with $N$ products. For each product $i$, we know the annual demand ($D_i$), the unit price ($P_i$), the unit annual holding cost ($H_i$), the cost to place an order for the product ($S_i$, regardless of how much is being ordered), and the unit volume ($V_i$, the amount of storage space one unit occupies). In addition, we know the total storage capacity $C$ of the warehouse where the products will be stored. We will somewhat laughably assume that everything is deterministic.

Let $q_i$ denote the quantity of product $i$ ordered each time an order is placed, and $f_i$ the frequency (number of orders per year) with which product $i$ is replenished. The total annual cost, to be minimized, is $$\sum_{i=1}^N \left[P_iD_i + H_i\frac{q_i}{2} + S_i f_i \right],\tag{1}$$where the first term is the total cost of purchasing products (which is constant), the second term is the total cost of storing them (based on the average inventory level, which is half the order size), and the last term is the total cost for placing orders.

The nonlinear function in this problem is the one relating order quantity to order frequency:$$f_i = \frac{D_i}{q_i}\quad \forall i=1,\dots,N.\tag{2}$$For a single product, it would be easy to substitute out $f_i$ from the objective, leaving a function of just $q_i$, and then differentiate. The first order optimality condition leads to the well known economic order quantity (EOQ) formula$$q_i^* = \sqrt{\frac{2D_iS_i}{H_i}}.$$The catch here is that ordering the EOQ for every item might exceed our storage space. So we resort to a mixed-integer program, minimizing (1) subject to $$\frac{1}{2}\sum_{i=1}^n V_i q_i \le C \tag{3}$$with $$q_i\in \lbrace 1,\dots,D_i\rbrace\quad \forall i\in\lbrace 1,\dots,N\rbrace$$ and $$f_i \in\left[ 1,D_i\right]\quad \forall i\in\lbrace 1,\dots,N\rbrace ,$$plus some constraint(s) to connect $f_i$ to $q_i$. It's worth pausing here to note that at most one of $f_i$ and $q_i$ needs to be discrete. Here I am assuming that order quantities must be integers (we are ordering something like appliances, where a third of a washing machine is not a meaningful concept) but that order frequencies need not be integers (2.5 orders per year just means two orders one year, three the next, and so on).

What is left is to pick one of the methods for approximating the relationship between quantity and frequency, equation (2). For methods 1 and 2, we can add to (1) and (3) the constraint $$f_i = \ell_i(q_i)\quad\forall i\in\lbrace 1,\dots,N\rbrace,\tag{4}$$where $\ell_i()$ is a piecewise linear function derived from either tangents or secants to the reciprocal function $g(x)=1/x$. For method 3, we can instead compute $M_i$ tangent functions $\ell_{i,j}()$ for each $i$ and add the constraints $$f_i \ge \ell_{i,j}(q_i) \quad\forall i\in\lbrace 1,\dots,N\rbrace,\,\forall j \in\lbrace 1,\dots,M_i\rbrace.\tag{4'}$$Note that this may underestimate $f_i$ (and thus the cost of a solution) but will not affect feasibility (since the space constraint (3) does not contain $f_i$). Finally, for the fourth method, we can minimize (1) subject to (3) and (4') and also use a callback to add more constraints like (4') on the fly.
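
For the record (it is easy to get signs turned around here), the tangents and secants to (2) work out as follows. The tangent to $f_i = D_i/q_i$ at a point $q_0 > 0$ is $$\ell_{i,j}(q_i) = \frac{D_i (2 q_0 - q_i)}{q_0^2},$$ which underestimates $D_i/q_i$ everywhere (the function is convex in $q_i$), while the secant through breakpoints $a < b$ is $$\frac{D_i (a + b - q_i)}{ab} \quad (a \le q_i \le b),$$ which matches $D_i/q_i$ at the breakpoints and overestimates it in between.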

I tried all four methods, using CPLEX 12.10, on a small test problem ($N=10$ products). I used a constant holding cost rate ($H_i = 0.2$) for all products, and set the space limit to $C=2136.41$. (Yes, it looks goofy, but I used random numbers to generate the problem.) The product level data was as follows:

| Product | Unit Cost | Order Cost | Demand | Size |
|---------|-----------|------------|--------|------|
| 0 | 4.04 | 2.54 | 1706 | 1.57 |
| 1 | 2.37 | 2.55 | 1203 | 4.68 |
| 2 | 3.82 | 2.01 | 1206 | 1.56 |
| 3 | 2.92 | 2.52 | 1373 | 2.00 |
| 4 | 1.37 | 3.38 | 1029 | 4.79 |
| 5 | 2.56 | 2.52 | 1700 | 2.91 |
| 6 | 4.55 | 3.12 | 1314 | 4.28 |
| 7 | 1.07 | 3.18 | 1223 | 4.06 |
| 8 | 3.64 | 3.74 | 1916 | 3.32 |
| 9 | 1.97 | 2.13 | 630 | 2.52 |

For method 1, I used 10 evenly spaced breakpoints (so nine chords). For the other methods, I computed tangents at 10 evenly spaced points. Of course, more breakpoints or tangents would make for a more accurate approximation. The objective value for each of the four methods is as follows:

| Method | Nominal | Actual |
|--------|---------|--------|
| 1 | 672.48 | 670.05 |
| 2 | 490.27 | 28946.48 |
| 3 | 490.27 | 28946.48 |
| 4 | 633.40 | 633.53 |

Here "nominal" is the objective value reported by CPLEX and "actual" is the actual cost, using the order quantities from the CPLEX solution but recalculating the order frequencies according to (2). Method 1, based on secants, overestimates frequency (and thus cost) slightly. Methods 2 and 3 massively underestimate some frequencies, and thus the overall cost. The reason is apparent from the next table, which shows the nominal and actual order frequencies for each product:

| Product | Demand | Quantity | Actual Freq | Nominal Freq |
|---------|--------|----------|-------------|--------------|
| 0 | 1706 | 1 | 1706.00 | 19.89 |
| 1 | 1203 | 1 | 1203.00 | 19.80 |
| 2 | 1206 | 1 | 1206.00 | 19.85 |
| 3 | 1373 | 1 | 1373.00 | 19.83 |
| 4 | 1029 | 137 | 7.51 | 6.69 |
| 5 | 1700 | 1 | 1700.00 | 19.82 |
| 6 | 1314 | 1 | 1314.00 | 19.83 |
| 7 | 1223 | 164 | 7.46 | 6.64 |
| 8 | 1916 | 1 | 1916.00 | 19.91 |
| 9 | 630 | 85 | 7.41 | 6.61 |

For products with nontrivial order quantities, frequencies are underestimated a bit, but for products with order quantity 1 frequencies are massively underestimated (which is what attracts the solver to using such a small order quantity). Basically, the piecewise-linear approximation of the reciprocal relation (2) stinks at the low end of the quantity range, because the curve is steep and none of the tangents are close enough to that end of the quantity domain. This could be improved by forcing the piecewise-linear functions in method 2 to have a breakpoint at $q_i=1$ or by including a tangent function $\ell_i()$ calculated at $q_i=1$ in method 3. Still, to get a reasonable approximation you might need to add a bunch of tangents at that end of the curve.
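
In particular, the tangent calculated at $q_0 = 1$ is just $f_i \ge D_i (2 - q_i)$, which is exact at $q_i = 1$ and so would eliminate the wildly optimistic frequency estimates attached to order quantities of 1 in the table above.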

Method 4 produces a solution that is actually optimal (to within convergence tolerances). The callback ignored discrepancies between nominal and actual order frequency below 0.01. Cost is very slightly underestimated, which I think could be fixed by setting that 0.01 tolerance even smaller. That might cause the program to generate a huge number of tangents in the callback, slowing it down considerably. As it was, the callback added 392 tangents to the initial set of 10 tangents during the solution run.

I mentioned earlier that using both tangents and secants (in separate runs) would bracket the true optimal value in cases of convex (concave) functions in less than (greater than) constraints. Here the true optimal cost (around 633.5) is indeed below the nominal cost of the secant approach (672.48) and above the nominal cost of the tangent approach (490.27). Note that we have to use the nominal costs, which in the case of the tangent approach is at once horribly inaccurate for the solution produced but still a valid lower bound on the actual optimal cost.

If you would like to look at or play with my code (add breakpoints or tangents, add products, make the frequency rather than the order quantity discrete), you can find it at https://gitlab.msu.edu/orobworld/secants.