Sunday, June 18, 2023

An Unbounded Bounded Feasible Region (II)

In my previous post, I cited a question on Operations Research Stack Exchange about an allegedly unbounded feasible region defined by a randomly generated quadratic constraint. In that post, I presented what I believe is a valid proof that, at least in theory, the feasible region should be bounded with probability 1.

Unfortunately, in our digital world, theory and computation sometimes diverge. To test whether I was correct, I coded several mixed-integer quadratically constrained programming (MIQCP) models to assess the boundedness of $X$, and applied them to a small sample of test problems (using Java and CPLEX 22.1.1). I set the dimension of the test problems to $n = 5$ (for no particular reason).

All three models contained the variable $x$ and the constraint $x^\top Q x + q^\top x + q_0 \le 0$. Two of the models attempted to answer the question directly by maximizing either the $L_1$ or $L_\infty$ norm of $x$ over $X$.
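For illustration, here is a rough sketch (in Java, using the CPLEX Concert API) of the part common to all three models: sample $H$, $q$ and $q_0$ from the standard normal distribution, form $Q = \frac{1}{2} H^\top H$, and impose $f(x) \le 0$. This is a simplified stand-in for the code in my repository, not the code itself; the class name, variable bounds and sampling order are arbitrary choices, so the seeds in the table below would not reproduce these exact instances.

```java
import java.util.Random;
import ilog.concert.*;
import ilog.cplex.*;

public class EllipsoidSketch {
    public static void main(String[] args) throws IloException {
        int n = 5;
        Random rng = new Random(123);  // seed chosen arbitrarily for this sketch

        // Sample H, q and q0 from the standard normal distribution.
        double[][] h = new double[n][n];
        double[] qVec = new double[n];
        for (int i = 0; i < n; i++) {
            qVec[i] = rng.nextGaussian();
            for (int j = 0; j < n; j++) {
                h[i][j] = rng.nextGaussian();
            }
        }
        double q0 = rng.nextGaussian();

        // Q = 0.5 * H'H, so Q_{ij} = 0.5 * sum_k H_{ki} H_{kj}.
        double[][] qMat = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < n; k++) {
                    qMat[i][j] += 0.5 * h[k][i] * h[k][j];
                }
            }
        }

        IloCplex cplex = new IloCplex();
        // x is free; CPLEX treats +/- Double.MAX_VALUE as infinite bounds.
        IloNumVar[] x = cplex.numVarArray(n, -Double.MAX_VALUE, Double.MAX_VALUE);

        // Build f(x) = x'Qx + q'x + q0 and add the constraint f(x) <= 0.
        IloNumExpr f = cplex.constant(q0);
        f = cplex.sum(f, cplex.scalProd(qVec, x));
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                f = cplex.sum(f, cplex.prod(qMat[i][j], x[i], x[j]));
            }
        }
        cplex.addLe(f, 0.0);

        // ... add one of the objectives/gadgets described below, then cplex.solve() ...
        cplex.end();
    }
}
```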

For the $L_\infty$ model, I added a continuous variable $y$ and binary variables $z_i$ and $w_i$, together with the constraints $\sum_i z_i + \sum_i w_i = 1$, $y \ge x_i$, $y \ge -x_i$, $z_i = 1 \implies y = x_i$ and $w_i = 1 \implies y = -x_i$. The combined effect of these is to force $y = |x_i|$ for some $i$ where $|x_i| = \max_j |x_j| = \|x\|_\infty$. The objective was to maximize $y$.
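In the Java API that gadget can be expressed with indicator (ifThen) constraints. A sketch, written as a helper that would plug into the model above (the helper name and the bounds on $y$ are my own choices):

```java
// Sketch: add y = ||x||_inf to an existing model via indicator constraints, then maximize y.
static IloNumVar addLinfObjective(IloCplex cplex, IloNumVar[] x) throws IloException {
    int n = x.length;
    IloNumVar y = cplex.numVar(0.0, Double.MAX_VALUE, "y");
    IloIntVar[] z = cplex.boolVarArray(n);   // z_i = 1 ==> y = x_i
    IloIntVar[] w = cplex.boolVarArray(n);   // w_i = 1 ==> y = -x_i

    // Exactly one of the 2n indicators is switched on.
    cplex.addEq(cplex.sum(cplex.sum(z), cplex.sum(w)), 1.0);

    for (int i = 0; i < n; i++) {
        cplex.addGe(y, x[i]);                    // y >= x_i
        cplex.addGe(y, cplex.prod(-1.0, x[i]));  // y >= -x_i
        cplex.add(cplex.ifThen(cplex.eq(z[i], 1.0), cplex.eq(y, x[i])));
        cplex.add(cplex.ifThen(cplex.eq(w[i], 1.0), cplex.eq(y, cplex.prod(-1.0, x[i]))));
    }
    cplex.addMaximize(y);   // maximize ||x||_inf over X
    return y;
}
```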

For the $L_1$ model, I added continuous variables $y_i \ge 0$ and constraints $y_i = |x_i|$. (Internally, CPLEX adds binary variables and does big-M magic to linearize the use of absolute values.) The objective was to maximize $\sum_i y_i = \|x\|_1$.
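A corresponding sketch for the $L_1$ model, again as a helper assuming the same $x$ variables. The use of cplex.abs here is my reading of how to hand the absolute values to CPLEX for internal linearization; the helper name is mine.

```java
// Sketch: add y_i = |x_i| and maximize sum_i y_i = ||x||_1.
// CPLEX linearizes the abs() expressions internally (binary variables plus big-M).
static IloNumVar[] addL1Objective(IloCplex cplex, IloNumVar[] x) throws IloException {
    int n = x.length;
    IloNumVar[] y = cplex.numVarArray(n, 0.0, Double.MAX_VALUE);
    for (int i = 0; i < n; i++) {
        cplex.addEq(y[i], cplex.abs(x[i]));  // y_i = |x_i|
    }
    cplex.addMaximize(cplex.sum(y));
    return y;
}
```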

My third approach was iterative. The model was almost the same as the $L_\infty$ model, except that rather than maximizing $y$ I set the objective to minimize 0 (meaning the first feasible solution wins) and set the lower bound of $y$ to some value $M$. If the model found a feasible solution (an $x \in X$ such that $\|x\|_\infty \ge M$), I doubled $M$ and tried again, until either $M$ exceeded some upper limit or the solver said the problem was infeasible (meaning $X$ is bounded in $L_\infty$ norm by the last value of $M$).
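The loop can be sketched as follows. Here buildEllipsoid and addLinfGadget are hypothetical helpers standing in for the first sketch ($x$ plus $f(x) \le 0$) and for the $L_\infty$ gadget above without its maximize call; they are not names from my actual code.

```java
// Sketch: probe for any x in X with ||x||_inf >= M, doubling M until the model
// becomes infeasible (X is bounded by the current M) or M exceeds a cap.
static void iterativeProbe(double initialM, double cap) throws IloException {
    double m = initialM;
    while (m <= cap) {
        IloCplex cplex = new IloCplex();
        IloNumVar[] x = buildEllipsoid(cplex);   // hypothetical: x plus f(x) <= 0
        IloNumVar y = addLinfGadget(cplex, x);   // hypothetical: y = ||x||_inf, no objective
        y.setLB(m);                              // demand ||x||_inf >= M
        cplex.addMinimize(cplex.constant(0.0));  // objective 0: first feasible solution wins
        boolean feasible = cplex.solve();
        cplex.end();
        if (!feasible) {
            System.out.println("Bounded: max ||x||_inf <= " + m);
            return;
        }
        m *= 2.0;  // found a point with norm at least M, so double M and try again
    }
    System.out.println("Gave up at M = " + m + "; looks unbounded to this probe");
}
```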

You can find my Java code (self-contained other than needing CPLEX) in my repository. Here are the results from a handful of test runs.

Random Seed   Max L_infinity    Max L_1          Iterative
123           unbounded         unbounded        bounded (100)
456           unbounded         unbounded        bounded (100)
789           unbounded         unbounded        bounded (1600)
12345         unbounded         bounded (22.1)   bounded (100)
61623         bounded (12.4)    unbounded        bounded (100)
20230616      bounded (120.9)   unbounded        bounded (200)

The numbers in parentheses are upper bounds on the $L_1$ norm in the third column and the $L_\infty$ norm in the second and fourth columns. The bound found by the iterative method is never tight, so a bound of 100 means $\max_{x \in X} \|x\|_\infty \le 100$. As you can see, the iterative method always found the ellipsoids to be bounded, consistent with the mathematical argument in the previous post. The other two models frequently found the problem to be "unbounded", though they did not always agree on that. This is a bit confusing (OK, very confusing). In particular, the "Max L_infinity" and "Iterative" models differ only in whether you are maximizing $y$ or looking for any solution with $y \ge M$, so saying (when the seed is 123) that the supremum of $y$ is $\infty$ but $y$ cannot exceed 100 is grounds for another beer (or three).

Something is apparently going on under the hood in CPLEX that is beyond me. Meanwhile, I'm sticking to my belief that X is always bounded.

Saturday, June 17, 2023

An Unbounded Bounded Feasible Region (I)

If you find the title of this post confusing, join the club! A question on Operations Research Stack Exchange, "Randomly constructing a bounded ellipsoid", sent me down a rabbit hole, eventually leading to this opus. I'm going to make a couple of notational tweaks to the original question, but the gist is as follows. We have an ellipsoidal feasible region $X = \{x \in \mathbb{R}^n : f(x) \le 0\}$, where $f(x) = x^\top Q x + q^\top x + q_0$. (One of my tweaks is to absorb the author's factor of 1/2 into $Q$.) $Q \in \mathbb{R}^{n \times n}$, $q \in \mathbb{R}^n$ and $q_0 \in \mathbb{R}$ are generated by sampling random numbers from the standard normal distribution. In the case of $Q$, we sample an $n \times n$ matrix $H$ and then set $Q = \frac{1}{2} H^\top H$. (My introducing the symbol $H$ for the sampled matrix is the other notational tweak.) Note that $Q$ is automatically symmetric and positive semidefinite, and is positive definite with probability 1. (For it not to be positive definite, $H$ would have to have less than full rank, which has zero probability of occurring.) I should point out here that saying something has probability 1 or 0 assumes that the random number generator works as advertised.
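Just to spell out that claim: for any $x \in \mathbb{R}^n$,
$$x^\top Q x = \tfrac{1}{2}\, x^\top H^\top H x = \tfrac{1}{2}\, (Hx)^\top (Hx) = \tfrac{1}{2}\, \|Hx\|^2 \ge 0,$$
with equality only when $Hx = 0$. If $H$ has full rank, $Hx = 0$ forces $x = 0$, so $Q$ is in fact positive definite.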

The author of the question said that in their experience $X$ was unbounded "most of the time." That struck me as impossible, and after a bunch of scribbling on marker boards I finally came down to what I think is a correct argument that $X$ must be bounded. Let $\{x_1, \dots, x_n\}$ be an orthonormal basis of eigenvectors of $Q$, with $Q x_i = \lambda_i x_i$ and
$$x_i^\top x_j = \begin{cases} 1 & i = j \\ 0 & i \ne j. \end{cases}$$

(I'll leave the proof that such a basis exists to the reader as an exercise.)

Now suppose that $X$ is unbounded, meaning that for arbitrarily large $M$ we can find $x \in X$ such that $\|x\| \ge M$; without loss of generality, take $M = \|x\|$. Write $x$ in terms of the basis: $x = \sum_i a_i x_i$. Observe that
$$M^2 = \|x\|^2 = x^\top x = \sum_i \sum_j a_i a_j \, x_i^\top x_j = \sum_i a_i^2 \,(x_i^\top x_i) = \sum_i a_i^2.$$
Expanding $f(x)$, we have

$$\begin{aligned} f(x) &= \left(\sum_i a_i x_i\right)^\top Q \left(\sum_i a_i x_i\right) + q^\top \left(\sum_i a_i x_i\right) + q_0 \\ &= \sum_{i,j} a_i a_j \,(x_i^\top Q x_j) + \sum_i a_i \,(q^\top x_i) + q_0 \\ &= \sum_{i,j} a_i a_j \lambda_j \,(x_i^\top x_j) + \sum_i a_i \,(q^\top x_i) + q_0 \\ &= \sum_i a_i^2 \lambda_i + \sum_i a_i \,(q^\top x_i) + q_0. \end{aligned}$$

Since $x \in X$, $f(x) \le 0$, and so $\sum_i a_i^2 \lambda_i \le -\sum_i a_i \,(q^\top x_i) - q_0$. According to the Cauchy-Schwarz inequality, $|q^\top x_i| \le \|q\| \, \|x_i\| = \|q\|$, so we have
$$\sum_i a_i^2 \lambda_i \le -\sum_i a_i \,(q^\top x_i) - q_0 \le \sum_i |a_i| \, \|q\| + |q_0|.$$

On the other hand, if $\Lambda = \min_i \lambda_i > 0$, then $\sum_i a_i^2 \lambda_i \ge \Lambda \sum_i a_i^2 = \Lambda M^2$. Combining these,
$$\Lambda M^2 \le \|q\| \sum_i |a_i| + |q_0|. \tag{1}$$

Now let $A = \max_i |a_i|$ and assume without loss of generality that $|a_1| = A$. Since $M^2 = \sum_i a_i^2$, $A^2 = M^2 - \sum_{i > 1} a_i^2 \le M^2$, and so $0 < A \le M$. Meanwhile, $M^2 = \sum_i a_i^2 \le n A^2$, which implies $A \ge \frac{M}{\sqrt{n}}$.

Dividing both sides of (1) by $A$, we have
$$\Lambda M \le \frac{\Lambda M^2}{A} \le \|q\| \sum_i \frac{|a_i|}{A} + \frac{|q_0|}{A} \le \|q\|\, n + \frac{|q_0|}{A}. \tag{2}$$

The left side of (2) increases as we increase $M$, while the right side decreases (since $A \ge M/\sqrt{n}$ increases and both $|q_0|$ and $\|q\|\, n$ are constant). For $M$ large enough, the left side must exceed the right side, which is a contradiction.


So, barring an error in the above, we have a mathematical proof that X must be bounded. In the next post I will explore the computational side of things.