## Friday, April 20, 2012

### K Best Solutions

Someone recently asked me how to find the $K$ best solutions to an optimization problem. It's a fairly common question, and I've suggested methods in the past, but this time I decided to do a little experimenting to see whether what I had in mind was appropriate. I'll share some results in this post.

First, it's fair to ask when the question is even appropriate.  If your problem has continuous variables (for instance, a linear program), forget about it.  Barring a few unusual (if not pathological) cases I won't attempt to enumerate here, if a continuous problem has an optimal solution, there will typically be uncountably many feasible points in a neighborhood of the optimum with objective values arbitrarily close to optimal.

This also typically holds for problems with a mix of discrete and continuous variables: there may be a single combination of values of the discrete variables that leads to an optimal solution, and perhaps a clear "second best" set of values for the discrete variables, but for each combination of the discrete variables there is liable to be an uncountable number of ways to fill in the continuous variables giving an objective value arbitrarily close to the best attainable with those discrete values. So we can restrict the question to problems where all variables are discrete, or to mixed problems if by "second best", "third best", etc. we mean the second, third, ... best choices for the discrete variables. Put another way, if we have a mixed problem (maximizing $f$) with optimal solution $x=x_0$ (discrete), $y=y_0$ (continuous) and objective value $f=f_0$, we can ask for a solution $(x,y)=(x_1,y_1)$ with $x_1\neq x_0$ and objective value $f=f_1$ such that no feasible solution $(x_2,y_2)$ with $x_2\notin\{x_0,x_1\}$ has objective value $f_2$ satisfying $f_1<f_2\leq f_0$, while recognizing that there could be a feasible solution $(x_0,y^*)$ with objective value $f_1 < f_0-\epsilon < f(x_0,y^*) \leq f_0$ for arbitrarily small positive $\epsilon$.

Second, context plays a role here.  A common reason to seek "runner-up" solutions is that the analysis of the model will be presented to a human decision maker, who wants options.  Decision makers, with some justification, do not always take well to being handed a single solution and told "this is optimal, you need to do this".  If the idea is to present options, then you may not need the $K$ best solutions; you may just need the best solution and $K-1$ "good" alternatives.  This can alter the method used to find the alternatives.  In fact, after finding one optimal solution, you may want to put a premium on "diversity" over objective value (while stipulating that a solution with a poor objective value should not be included).  Details of how to do that will have to wait for another day.  This post will be too long as it is.

So here are three possible approaches to finding the $K$ best solutions to a purely discrete problem, specifically an integer program (IP).  They all rely on having a good solver program, one that represents current technology.

#### 1. Use the solver's solution pool.

"Solution pool" is a phrase used by CPLEX.  Not every solver has the equivalent of a solution pool, but given the competitiveness of the solver market, my guess is that a number of other solvers have something similar.  Traditional branch-and-bound/branch-and-cut algorithms delete nodes from the search tree as soon as they are pruned, either due to infeasibility or due to having an inferior node bound.  Solution pools retain either feasible but suboptimal nodes or at least those with demonstrated integer solutions.  Various parameters let you control which solutions are retained.  CPLEX allows you to modulate their solution pool to encourage diversity (mentioned above) in the pool of retained solutions.

CPLEX in fact provides three ways of using the solution pool.  If you simply solve the problem, by default CPLEX will retain some of the incumbents found along the way.  Alternatively, you can use a populate method to fill the pool with possible solutions, or solve the problem and then invoke populate.  Simply tracking solutions found along the road to the optimum will typically prove unsatisfactory.  In fact, if the optimum is found early in the search (usually a good thing), you may not get any incumbents besides the optimum.  So I ignored that approach in the experiments and tried both populate alone and solve followed by populate.  Results for those two approaches were identical in all test runs, but I would not be willing to bet that they are always identical.  The nature of the test problem (described below) probably was a major reason why populate and solve-populate produced identical results.

#### 2. Use an incumbent callback to track and reject solutions.

Most contemporary solvers provide callbacks in their APIs.  One type of callback lets you inspect a proposed new incumbent solution before accepting it.  In the past, the method I've suggested for finding the $K$ best solutions to an IP is to use an incumbent callback to store the new solution, if it is one of the $K$ best found so far, and then reject it, forcing the solver to continue.  Since every solution is rejected, the solver eventually exhausts the search tree.  It will either declare the problem infeasible or point to a thoroughly suboptimal solution and call it optimal (because we lied to it), at which point the $K$ solutions currently being stored are the $K$ best.

Note that the incumbent callback should not reject every incumbent it sees.  Once there are $K$ solutions stored, if the solver proposes a solution with objective value inferior to the worst of those stored solutions, the incumbent callback should let the solver accept the solution.  That will move the bound and help the solver prune other nodes that have no chance of producing a nearly-best solution.
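The accept/reject bookkeeping behind this logic is independent of any particular solver API. Here is a minimal sketch in plain Java, assuming maximization; the class name `KBestPool` and its interface are my own invention, not taken from the code used in these experiments. In an actual incumbent callback, a return value of `true` would translate into rejecting the incumbent, and each stored entry would also carry the variable values.

```java
import java.util.PriorityQueue;

/** Tracks the K best objective values seen so far (maximization). */
class KBestPool {
    private final int k;
    // Min-heap: the root is the worst of the stored solutions.
    private final PriorityQueue<Double> pool = new PriorityQueue<>();

    KBestPool(int k) { this.k = k; }

    /**
     * Offer a new incumbent's objective value. Returns true if the
     * incumbent callback should reject the solution (it belongs in the
     * pool, so the search must continue past it), false if the solver
     * should be allowed to accept it and move the bound.
     */
    boolean offer(double obj) {
        if (pool.size() < k) {     // pool not yet full: store and reject
            pool.add(obj);
            return true;
        }
        if (obj > pool.peek()) {   // beats the worst stored solution
            pool.poll();           // evict the current worst
            pool.add(obj);
            return true;
        }
        return false;              // inferior: let the solver accept it
    }

    /** Objective value of the worst stored solution. */
    double worst() { return pool.peek(); }
}
```

Letting truly inferior incumbents through (the `return false` branch) is what moves the bound and prunes hopeless nodes.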

My experiments pointed out one wrinkle that I had not previously considered.  When you reject a solution, that may not be the last time you see it.  If the solution was found by a heuristic at some node, a different heuristic may find it again, or it may pop up as the solution to the continuous relaxation of some node in the tree.  Therefore, if you inspect only the objective value of each incumbent, you may end up with duplicate solutions in your final list (and thus fewer than $K$ distinct solutions).

During my experiments, I tried turning off all of CPLEX's heuristics (including "probing"), and duplicates still occurred.  This may mean I missed a heuristic somewhere, but there is another possible explanation.  Suppose that the continuous relaxation of a node produces an integer-feasible solution (incumbent), which you reject.  The solver now has to decide whether to prune that node or branch on it.  My understanding (obtained anecdotally) is that in this situation CPLEX will look for a branch callback to guide its branching decision and, failing to find one, will branch on an arbitrarily chosen integer variable.  I suspect that means that the solution you just rejected will remain feasible in one of the child nodes and, thus, will crop up again.

So, if you use an incumbent callback, it needs to inspect not just the objective value but also the values of the decision variables.  If the solution is good enough to displace one of the $K$ solutions already stored (or if you have not yet reached $K$ solutions), and if its objective value is distinct from those of the stored solutions, it is clearly a new solution; but if its objective value matches one of the stored solutions, you need to look at the decision variables to ensure this is not a rerun.
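Since an objective tie forces a comparison of variable values, and the solver reports integer variables as doubles with small residues, the duplicate test can simply round each component and compare vectors. A sketch in plain Java (the class and method names are mine, for illustration):

```java
import java.util.Arrays;

class SolutionCheck {
    /** Round each component to the nearest integer; solvers return
        integer variable values as doubles with small residues. */
    static long[] round(double[] x) {
        long[] r = new long[x.length];
        for (int i = 0; i < x.length; i++) r[i] = Math.round(x[i]);
        return r;
    }

    /** True if two candidate incumbents are the same integer solution. */
    static boolean sameSolution(double[] a, double[] b) {
        return Arrays.equals(round(a), round(b));
    }
}
```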

Update: I learned (the hard way) that to use either this approach or the next one (incumbent callback with solution injection), you need to turn off both dual presolve reductions (set the CPLEX presolve reductions parameter to 1, meaning primal reductions only) and the symmetry breaking feature (set the Symmetry parameter to 0). During presolve, either feature can, by design, modify the model in ways that cut off feasible solutions the presolve logic recognizes as suboptimal, but which might be among the $K$ best. Using the solution pool at intensity 4 causes CPLEX to make these changes for you. An explanation (in a different but related context) is posted in an IBM support document. Thanks to Dr. Ed Klotz for clearing this up for me.
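In the Java (Concert) API, those settings amount to a parameter fragment along these lines (parameter names as I believe they appear in CPLEX 12.x; verify them against your version's documentation):

```java
// Keep primal presolve reductions only (value 1 drops dual reductions)
// and turn off symmetry breaking, so that presolve cannot eliminate
// suboptimal solutions that belong among the K best.
cplex.setParam(IloCplex.IntParam.Reduce, 1);
cplex.setParam(IloCplex.IntParam.Symmetry, 0);
```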

#### 3. Use an incumbent callback with solution injection.

Let's consider what happens, in the previous approach, when you already have $K$ solutions stored and a new solution is found that is better than the worst of them.  The worst of those stored solutions will be discarded.  Call its objective value $f$.  We now know that none of the ultimate $K$ best solutions will have an objective value worse than $f$, but the solver does not know that.  So the solver may waste time exploring nodes whose bounds are inferior to $f$.  We can avert that wasted time by belatedly accepting this solution (which the incumbent callback had previously rejected).  In the case of CPLEX, we can store this solution somewhere in program memory and use a heuristic callback to inject it as a new incumbent, thus moving the bound by a safe amount.
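The hand-off between the two callbacks can be a one-slot buffer: the incumbent callback deposits the best evicted solution, and the heuristic callback injects it as a new incumbent (in CPLEX, via the heuristic callback's solution-setting method) and clears the slot. A plain-Java sketch (the names are my own, for illustration):

```java
/** One-slot hand-off from the incumbent callback (producer) to the
    heuristic callback (consumer). */
class InjectionBuffer {
    private double obj;
    private double[] values;
    private boolean pending = false;

    /** Called by the incumbent callback when it evicts the worst of the
        K stored solutions. Keeps only the best eviction seen so far. */
    void store(double obj, double[] values) {
        if (!pending || obj > this.obj) {
            this.obj = obj;
            this.values = values.clone();
            this.pending = true;
        }
    }

    /** Called by the heuristic callback; returns the buffered solution
        once, or null if nothing is waiting. */
    double[] take() {
        if (!pending) return null;
        pending = false;
        return values;
    }

    double objective() { return obj; }
}
```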

#### Experiments

I put these three approaches to the test using a 0-1 knapsack problem.  The objective was maximization, so in the results below, larger objective values are preferable.  Tests were done using CPLEX 12.4, on a quad core PC.  A few disclaimers are in order before diving into the results:
- Results using CPLEX may not generalize to other solvers.
- Results from a 0-1 knapsack problem may not generalize to other problems.
- Anyone who expects some sort of consistency or monotonicity from computational results involving integer programs would do well to heed Emerson.
- Solution time is not a monotonic function of the number of nodes processed.  Processing a child node after its parent is frequently cheaper than jumping to some unrelated node (due to the number of dual simplex iterations required).  In some cases, CPLEX processed only one node (the root) but took an order of magnitude longer doing so than it did processing a large number of nodes under different parameter settings.  I suspect this may be due to time spent probing, but I have no concrete evidence to support that suspicion.
In the interests of conserving space, I will report results as nodes processed and the objective values of the final set of solutions.  Solution times did not vary much.  Most runs had negligible times (under one second), and the longest runs took less than nine seconds.

The second and third approaches listed above (incumbent callback, with or without injection of discarded solutions) require no parameters.  The solution pool approach, in contrast, involves several.  The solution pool capacity (number of solutions retained) was set to $K$.  There is a solution pool intensity parameter, an integer from 0 to 4, where 0 (the default) means CPLEX chooses and 1 through 4 represent increasing intensity levels.  In all trials, the results for 0 and 2 were identical, so apparently CPLEX consistently defaulted to level 2.  There is also a population limit parameter (default 20), which limits the number of integer solutions CPLEX generates during the populate method.  I tested both the default value (20) and 200, but results might differ if other (in particular, higher) values were used.
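For reference, the pool settings just described correspond to a configuration fragment along these lines in the Java (Concert) API (parameter names as I believe they appear in CPLEX 12.4; check your documentation):

```java
// Solution pool configuration for retaining the K best solutions found.
cplex.setParam(IloCplex.IntParam.SolnPoolCapacity, K);   // retain K solutions
cplex.setParam(IloCplex.IntParam.SolnPoolIntensity, 2);  // 0 (default) .. 4
cplex.setParam(IloCplex.IntParam.PopulateLim, 20);       // populate limit
```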

I ran the binary knapsack model with the number of items ($N$) set to either 20 or 30 and the number of solutions to retain ($K$) set to either 3 or 5.  For the solution pool methods, I tried all possible combinations of populate or solve-populate, population limit 20 or 200, and intensity in $\{0,\dots,4\}$.  To conserve space, I will report just the best solution pool result (the result with the highest objective value for the suboptimal solutions) for each population limit, with the limit and intensity in parentheses.  There was no difference, at any population limit/intensity combination, between populate and solve-populate.  The second and third methods, using callbacks, are labeled "reject" and "inject" in the table.

| $N$ | $K$ | Method | Nodes | Results |
|-----|-----|--------|-------|---------|
| 20 | 3 | pool (20, 3) | 211 | 2065, 2073, 2077 |
| | | pool (200, 1) | 112 | 2073, 2076, 2077 |
| | | reject | 59 | 2076, 2076, 2077 |
| | | inject | 56 | 2076, 2076, 2077 |
| 20 | 5 | pool (20, 3) | 221 | 2048, 2050, 2065, 2073, 2077 |
| | | pool (200, 3) | 5647 | 2065, 2073, 2076, 2076, 2077 |
| | | reject | 157 | 2072, 2073, 2076, 2076, 2077 |
| | | inject | 112 | 2072, 2073, 2076, 2076, 2077 |
| 30 | 3 | pool (20, 1) | 1 | 6878, 6879, 6883 |
| | | pool (200, 1) | 145 | 6878, 6879, 6883 |
| | | reject | 100 | 6878, 6879, 6883 |
| | | inject | 109 | 6878, 6879, 6883 |
| 30 | 5 | pool (20, 1) | 1 | 6862, 6872, 6878, 6879, 6883 |
| | | pool (200, 1) | 73 | 6866, 6872, 6878, 6879, 6883 |
| | | reject | 142 | 6877, 6878, 6878, 6879, 6883 |
| | | inject | 153 | 6877, 6878, 6878, 6879, 6883 |

There are few generalizations I can take away from this.   Every approach found the optimal solution.  The default intensity setting for the CPLEX pool approaches was never best (but there is no consistency in which setting was preferable).  In only one case ($N=30,\ K=3$) was I able to find the actual $K$ best solutions using a pool method.  It is possible that setting the population limit much larger would do the trick, or that there is some other parameter setting that I simply missed.  Injecting discarded solutions in the third approach did not have a consistent impact: sometimes it increased the node count, sometimes it decreased the node count.  The node count itself is deceptive, in that the instances where only one node was reported by CPLEX tended to have the longest execution times.

Code (Java, very unpolished) and results (spreadsheet) are available by request.

Addendum: responding to Pratim's question, I posted a follow-up doing the same thing in AMPL.

1. Paul,

We did something in constraint programming that might interest you:

Michel Lefebvre, Jean-François Puget and Petr Vilim. Route Finder: Efficiently Finding K Shortest Paths Using Constraint Programming, in proceedings of CP 2011.

2. J-F: Thanks for the pointer. I should be able to acquire the paper through our library system.

3. You could also mention your last blog post. One could exclude the current optimal solution and reoptimize, K times over. This should ensure finding the K best solutions.

4. @Thomas: Indeed. It would be slower than the callback approach, since you would be restarting the search tree each time. Also, as I mentioned in the last post, it is easy to exclude solutions when the variables are all binary, not so easy when they are general integer. (It's easy to exclude a specific general integer solution if you are using constraint programming, though.)

5. Paul, have you used the CPLEX solution pool with AMPL? If so, could you share the code you have:
1) to populate the solution pool after solving the MIP, and
2) to populate the pool only.
Thanks.

6. It's not hard to do, but the answer was long enough to warrant a separate post. The link is in the addendum above.

7. Dear Paul,

This is an excellent article.

I don't have AMPL. But I can run CPLEX through C++/Java. It would be very good for me if you provide me the Java code that you have.

My email :

...@gmail.com

Best regards,

Bissa Nath

1. Bissa: I just sent you the Java code. -- Paul

8. Paul, I've been looking to generate K best solutions where K depend on the problem size (e.g. K=N in your example). I came across your blog. I also code in c++ and use concert technology. Could you please e-mail me your code? My e-mail is bkeskin@cba.ua.edu

Thanks,
Burcu

1. Just sent you the code.

Paul

9. Hello Paul,
I have a question about the solution pool method in CPLEX. Does it give us the best K solutions, or might there be a few solutions left that are better than those K? I have been using integer-cut constraints to generate all the solutions. That reassures me that I will find the exact K best solutions, in increasing order of objective value (while minimizing). However, this requires restarting the optimization every time I add a constraint. Does the pool method give me an ordered list of solutions?

1. With the solution pool method, I'm not sure there is any guarantee that you always get the K best solutions. Based on my experiments (reported in the post), the safest way to use the pool is to set the pool capacity as high as possible. The default is 2.1 billion, but how high you can go clearly depends on how much memory you have, and how high you need to go depends on how many feasible solutions exist (which you typically won't know). My take is that it is a gamble, but with the default capacity setting probably a pretty good gamble.

If the pool method does find the K best solutions, I do not think you can extract them in objective order. I believe you will have to scan all the solutions in the pool and figure out which are the K best.

As far as the method you are using, I think that with all the restarts it is slower (perhaps much slower) than the callback methods (labeled "reject" and "inject") I described above. They, like your current method, are guaranteed to produce the K best solutions (though not necessarily in ascending objective order).

2. Thanks. That will help me a lot.
