Wednesday, January 16, 2013

Finding All Solutions (or Not)

People occasionally ask how to find all optimal solutions (or, less frequently, all feasible solutions) to a mathematical programming problem. To keep the length of this post reasonable (?), I'll restrict myself to linear and (mixed) integer linear programs, and make only some general observations. Specific methods will be left to future posts.

Let's consider a linear optimization problem of the form \[ \begin{array}{lrcl} \textrm{minimize} & & f(x,y)\\ \textrm{s.t.} & (x,y) & \in & \mathcal{P}\\ & x & \in & \mathbb{Z}^{m}\\ & y & \in & \mathbb{R}^{n} \end{array} \] where (without loss of generality) we are minimizing a linear function $f$ over a convex polytope $\mathcal{P}$. For a linear program (LP), omit the integer variables $x$; for a pure integer linear program (ILP), omit the continuous variables $y$. Again without loss of generality, we assume that the feasible region $\tilde{\mathcal{P}}=\mathcal{P}\cap\left(\mathbb{Z}^{m}\times\mathbb{R}^{n}\right)$ is not empty (else there is nothing to find) and that $\mathcal{P}$ (and therefore $\tilde{\mathcal{P}}$) is bounded. In any realistic situation, $\mathcal{P}$ can be bounded by adding "box constraints" (bounds) for $x$ and $y$. Unrealistic situations are left to the reader as an exercise. Since $\tilde{\mathcal{P}}$ is closed (it is the intersection of the closed polytope $\mathcal{P}$ with the closed set $\mathbb{Z}^{m}\times\mathbb{R}^{n}$) and bounded (by assumption), $f$ is continuous (courtesy of linearity), and $\tilde{\mathcal{P}}$ is nonempty (by assumption), at least one optimal solution must exist.

The two questions are equivalent. Suppose that we are looking for all feasible solutions (i.e., the contents of $\tilde{\mathcal{P}}$). That's the same as finding all optimal solutions with a constant-valued objective function $f(x,y)=c$. On the other hand, finding all optimal solutions to the problem above is equivalent to finding all points in the modified feasible region \[ \left\{ x\in\mathbb{Z}^{m},y\in\mathbb{R}^{n}:(x,y)\in\mathcal{P}^{*}\right\} \] where $\mathcal{P}^{*}=\mathcal{P}\cap\left\{ (x,y):f(x,y)\le f^{*}\right\} $ and $f^{*}$ is the optimal objective value in the original problem. Since $\mathcal{P}^{*}$ is obtained by intersecting $\mathcal{P}$ with a half-space, it is another convex polytope, and its (integer-)feasible points are precisely the optimal solutions of the original problem.
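A micro-example of the construction (my own, not from any particular source): minimize $f(y)=y_{1}+y_{2}$ over the unit square $\mathcal{P}=\left\{ y\in\mathbb{R}^{2}:0\le y\le1\right\} $. Here $f^{*}=0$ and $\mathcal{P}^{*}=\mathcal{P}\cap\left\{ y:y_{1}+y_{2}\le0\right\} =\left\{ (0,0)\right\} $, so "find all optima of the original problem" is literally "find all feasible points of $\mathcal{P}^{*}$".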

Henceforth I'll restrict myself to the question of finding all optimal solutions.

Uncountably infinite is a lot. Start with a linear program (no $x$). If $\hat{y}$ and $\tilde{y}$ are two distinct optimal solutions with objective value $f^{*}$, then any convex combination $\lambda\hat{y}+(1-\lambda)\tilde{y}$ with $0\le\lambda\le1$ belongs to $\mathcal{P}$ (by convexity) and has the same objective value $f^{*}$ (by linearity of $f$). So the moment the optimum fails to be unique, we have an uncountable number of optima. With a pure integer program (no $y$), the situation is rosier: since we have assumed that $\mathcal{P}$ is bounded, the feasible region $\tilde{\mathcal{P}}$ will contain a finite number of integer lattice points, and so the number of optimal solutions will be finite. (Note, however, that "finite" does not preclude "annoyingly large".)
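A tiny example of my own making: minimize $y_{1}+y_{2}$ subject to $y_{1}+y_{2}\ge1$ and $0\le y_{1},y_{2}\le1$. The optimal value is $f^{*}=1$, attained by every point of the segment $\left\{ (\lambda,1-\lambda):0\le\lambda\le1\right\} $: uncountably many optima. Require the variables to be integer and the optima collapse to just $(1,0)$ and $(0,1)$.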

With a mixed integer linear program (MILP), things are a trifle more complex. The projection of the set of optimal solutions onto $\mathbb{Z}^{m}$, i.e., the set of $x$ for which a $y$ exists such that $(x,y)$ is optimal, will be finite. If for each such $x$ there is exactly one $y$ yielding an optimal solution, the number of optima is finite. As soon as we find one $x$ such that $(x,y)$ and $(x,y')$ are both optimal with $y\neq y'$, we're back to having an uncountable number of optima.
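A toy MILP (again mine) shows how the mixed case straddles the two: minimize $x$ subject to $x+y\ge1$ with $x\in\{0,1\}$ and $0\le y\le2$. The optimal value $f^{*}=0$ forces $x^{*}=0$, but then every $y\in[1,2]$ is optimal: one optimal $x$, uncountably many optimal $(x,y)$ pairs.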

Putting all that together, for ILPs we can ask to find all optima. For LPs, the best we can do is ask to find all extreme points (vertices) of $\mathcal{P}$ that are optimal. For MILPs, we can ask for all $x^{*}$ that correspond to optimal solutions, and for each such $x^{*}$ we can ask for a single $y^{*}$, or at most all extreme points of $\left\{ y\in\mathbb{R}^{n}:\left(x^{*},y\right)\in\mathcal{P}^{*}\right\} $ (yet another polytope).
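For a pure binary ILP, one common way to get all optima is to combine the $\mathcal{P}^{*}$ construction from above with "no-good" cuts: solve once to get $f^{*}$, impose $f(x)\le f^{*}$, then repeatedly solve and cut off each solution found, until the model goes infeasible. Here is a minimal sketch in Python using the PuLP modeling package and its bundled CBC solver; the toy instance and all the names in it are my own inventions, and this is an illustration rather than production code.

```python
import pulp

# Toy binary ILP (my own invention): minimize 2(x0 + x1 + x2) subject to
# x0 + x1 + x2 >= 2. Any two of the three variables at 1 is optimal, so
# there are exactly three optima.
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(3)]
f = pulp.lpSum(2 * xi for xi in x)

model = pulp.LpProblem("all_optima", pulp.LpMinimize)
model += f                       # objective: f(x)
model += pulp.lpSum(x) >= 2      # the structural constraint defining P

# Step 1: get f*, then impose f(x) <= f*. This is the P* construction.
model.solve(pulp.PULP_CBC_CMD(msg=False))
fstar = pulp.value(model.objective)
model += f <= fstar

# Step 2: harvest solutions, cutting each one off as it is found.
optima = []
while model.status == pulp.LpStatusOptimal:
    sol = [int(round(xi.varValue)) for xi in x]
    optima.append(sol)
    # No-good cut: any later solution must differ in at least one variable.
    model += pulp.lpSum(1 - xi if s else xi for xi, s in zip(x, sol)) >= 1
    model.solve(pulp.PULP_CBC_CMD(msg=False))

print(optima)  # three 0-1 vectors, in whatever order the solver finds them
```

The same loop extends to general bounded integer variables if the cut is replaced by something that separates arbitrary integer points, but for binary variables the cut above is the standard one.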

This could take a while. Let's consider the LP case again (no $x$). Per the Wikipedia entry, the Avis-Fukuda algorithm for enumerating the vertices of a polytope $\mathcal{P}$ expressed as the solution to a set of linear inequalities in $y$ has time complexity $O(knv)$, where $k$ is the number of nonredundant inequalities defining $\mathcal{P}$, $n$ is the dimension of $y$ and $v$ is the number of vertices to be enumerated. (Sorry, my notation is slightly different from Wikipedia's. Deal with it.) This does not look too bad ... until you stop to think about $v$. If the polytope is a hypercube, $k=2n$ and $v=2^{n}$, so we're looking at $O\left(n^{2}2^{n}\right)$ time. That's fine for small $n$, but it blows up fairly quickly as $n$ increases.
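To make the role of $v$ concrete, here is a deliberately naive enumerator of my own (emphatically not Avis-Fukuda): every vertex of $\{y:Ay\le b\}$ satisfies some $n$ of the $k$ inequalities as equations, so try all $\binom{k}{n}$ choices and keep the feasible, non-duplicate solutions. A sketch in Python with NumPy:

```python
from itertools import combinations
import numpy as np

def enumerate_vertices(A, b, tol=1e-9):
    """Brute-force vertex enumeration for {y : A y <= b}. Solves an n x n
    system for each choice of n rows, so it costs O(C(k, n) * n^3), far
    worse than reverse search, but it shows where the vertices come from."""
    k, n = A.shape
    vertices = []
    for rows in combinations(range(k), n):
        try:
            y = np.linalg.solve(A[list(rows)], b[list(rows)])
        except np.linalg.LinAlgError:
            continue  # chosen rows are linearly dependent; no vertex here
        if (A @ y <= b + tol).all():  # keep only feasible intersections
            if not any(np.allclose(y, v) for v in vertices):
                vertices.append(y)
    return vertices

# The unit hypercube in R^n: k = 2n facets and v = 2^n vertices.
n = 3
A = np.vstack([np.eye(n), -np.eye(n)])        # y_i <= 1 and -y_i <= 0
b = np.concatenate([np.ones(n), np.zeros(n)])
print(len(enumerate_vertices(A, b)))           # 8 = 2**3
```

Even at $n=20$ the naive version is hopeless ($\binom{40}{20}\approx1.4\times10^{11}$ linear systems), and while Avis-Fukuda avoids that blowup, the $2^{20}$ vertices themselves remain to be listed.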

What about the integer case? Well, ILPs and MILPs are NP-hard in general, and that's just to find one optimal solution. Finding all optimal solutions would be, what, NP-NP-hard?

Is this really what you want? This brings me around to motivation. Why do you want all optimal (let alone all feasible) solutions? One possible answer is to offer a decision maker some choices, charitably assuming that more than one optimum exists. (Here is a related post.) Another is that there are criteria not included in the model that distinguish the "quality" of various solutions (so not all optimal solutions are equally good, even though they are all equally optimal).

My suggestion in both those cases is to find one optimal solution $\left(x^{*},y^{*}\right)$ with objective value $f^{*}$, then find "epsilon-optimal" solutions satisfying some diversity criterion. We pick our favorite $\epsilon>0$, add the constraint $f(x,y)\le f^{*}+\epsilon$, and change the objective to something else. Possibilities include (but are by no means limited to) maximizing the $L_{1}$ distance of the next solution from $\left(x^{*},y^{*}\right)$ or from the centroid of previously found solutions, maximizing the minimum distance from all previously found solutions, maximizing or minimizing one particular variable (e.g., $\max x_{i}$ or $\min y_{i}$), or optimizing a randomly generated linear function of $x$ and $y$.
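Here is a minimal two-stage sketch of that idea, again in Python with PuLP; the toy instance, the choice of Hamming ($L_{1}$) distance as the diversity criterion, and the value of $\epsilon$ are all mine. Note that for binary variables the $L_{1}$ distance from $x^{*}$ is linear as written; general integer or continuous variables need the usual absolute-value linearization.

```python
import pulp

# Toy instance (mine): minimize 2*x0 + 2*x1 + 2*x2 + 3*x3 while choosing
# at least two of the four items. There are three alternate optima.
c = [2, 2, 2, 3]
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(4)]
f = pulp.lpSum(ci * xi for ci, xi in zip(c, x))

# Stage 1: find one optimal solution x* and its value f*.
stage1 = pulp.LpProblem("stage1", pulp.LpMinimize)
stage1 += f
stage1 += pulp.lpSum(x) >= 2
stage1.solve(pulp.PULP_CBC_CMD(msg=False))
xstar = [int(round(xi.varValue)) for xi in x]
fstar = pulp.value(stage1.objective)

# Stage 2: stay epsilon-optimal and maximize the L1 (Hamming) distance
# from x*. With eps = 0 we insist on a true alternate optimum.
eps = 0.0
stage2 = pulp.LpProblem("stage2", pulp.LpMaximize)
stage2 += pulp.lpSum(1 - xi if s else xi for xi, s in zip(x, xstar))
stage2 += f <= fstar + eps          # epsilon-optimality cutoff
stage2 += pulp.lpSum(x) >= 2        # original constraints still apply
stage2.solve(pulp.PULP_CBC_CMD(msg=False))
print(xstar, "->", [int(round(xi.varValue)) for xi in x])
```

Each subsequent round can add the newest solution's distance term (or switch to the max-min variant), generating as many diverse near-optima as the decision maker cares to see.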

If you have some alternate criterion, not part of the original model, that you were going to use to pick among competing optima, and if that criterion is linear (or can be approximated by a linear function), it makes an obvious choice for the second-stage objective.

2 comments:

  1. "Finding all optimal solutions would be, what, NP-NP-hard?"

    #P-hard, I think. http://en.wikipedia.org/wiki/Sharp-P http://en.wikipedia.org/wiki/Sharp-P-complete

    Replies
    1. Looks right, but I glaze over quickly when it comes to complexity terminology.

