## Saturday, June 17, 2023

### An Unbounded Bounded Feasible Region (I)

If you find the title of this post confusing, join the club! A question on Operations Research Stack Exchange, "Randomly constructing a bounded ellipsoid", sent me down a rabbit hole, eventually leading to this opus. I'm going to make a couple of notational tweaks to the original question, but the gist is as follows. We have an elliptical feasible region $X = \lbrace x \in \mathbb{R}^n : f(x) \le 0 \rbrace$ where $f(x) = x^\prime Q x + q^\prime x + q_0.$ (One of my tweaks is to absorb the author's factor $1/2$ into $Q.$) $Q\in \mathbb{R}^{n \times n},$ $q\in \mathbb{R}^n$ and $q_0\in \mathbb{R}$ are generated by sampling random numbers from the standard normal distribution. In the case of $Q,$ we sample an $n\times n$ matrix $H$ and then set $Q = \frac{1}{2} H^\prime H.$ (My introducing the symbol $H$ for the sampled matrix is the other notational tweak.) Note that $Q$ is automatically symmetric and positive semidefinite, and is positive definite with probability 1. (For it not to be positive definite, $H$ would have to have less than full rank, which has zero probability of occurring.) I should point out here that saying something has probability 1 or 0 assumes that the random number generator works as advertised.

The author of the question said that in their experience $X$ was unbounded "most of the time." That struck me as impossible, and after a bunch of scribbling on marker boards I finally came down to what I think is a correct argument that $X$ must be bounded. Let $\left\{ x_{1},\dots,x_{n}\right\}$ be an orthonormal basis of eigenvectors of $Q,$ with $Qx_{i}=\lambda_{i}x_{i}$ and $$x_i^\prime x_j =\begin{cases} 1 & i = j \\ 0 & i \neq j. \end{cases}$$

(Such a basis exists by the spectral theorem for real symmetric matrices; I'll leave the proof to the reader as an exercise.)
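For what it's worth, NumPy's `eigh` hands us exactly such a basis for a symmetric matrix (the columns of its second return value). A quick sanity check, with setup names of my own:

```python
import numpy as np

# For symmetric Q, eigh returns an orthonormal eigenbasis (columns of V).
rng = np.random.default_rng(0)
n = 4
H = rng.standard_normal((n, n))
Q = 0.5 * H.T @ H

lam, V = np.linalg.eigh(Q)                   # Q @ V[:, i] == lam[i] * V[:, i]

print(np.allclose(Q @ V, V * lam))           # eigenvector property
print(np.allclose(V.T @ V, np.eye(n)))       # orthonormality: x_i' x_j = delta_ij
```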

Now suppose that $X$ is unbounded, meaning that we can find $x\in X$ with $\parallel x\parallel$ arbitrarily large. Set $M=\parallel x\parallel$ and write $x$ in terms of the basis: $x=\sum_{i}a_{i}x_{i}.$ Observe that $$M^{2}=\parallel x\parallel^{2}=x^{\prime}x=\sum_i \sum_j a_i a_j x_i^\prime x_j = \sum_{i}a_{i}^{2}\left(x_{i}^{\prime}x_{i}\right)=\sum_{i}a_{i}^{2}.$$Expanding $f(x),$ we have

\begin{align*} f(x) & =\left(\sum_{i}a_{i}x_{i}\right)^{\prime}Q\left(\sum_{i}a_{i}x_{i}\right)+q^{\prime}\left(\sum_{i}a_{i}x_{i}\right)+q_{0} \\ & =\sum_{i,j}a_{i}a_{j}\left(x_{i}^{\prime}Qx_{j}\right)+\sum_i a_{i}\left(q^{\prime}x_{i}\right)+q_{0} \\ & =\sum_{i,j}a_{i}a_{j}\lambda_{j}\left(x_{i}^{\prime}x_{j}\right)+\sum_{i}a_{i}\left(q^{\prime}x_{i}\right)+q_{0} \\ & =\sum_{i}a_{i}^{2}\lambda_{i}+\sum_{i}a_{i}\left(q^{\prime}x_{i}\right)+q_{0}. \end{align*}
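Both identities — $\parallel x\parallel^2 = \sum_i a_i^2$ and the expansion of $f(x)$ in the eigenbasis — are easy to confirm numerically. A small sketch (setup is my own):

```python
import numpy as np

# Sanity check of ||x||^2 = sum_i a_i^2 and of
# f(x) = sum_i a_i^2 lam_i + sum_i a_i (q' x_i) + q0.
rng = np.random.default_rng(2)
n = 4
H = rng.standard_normal((n, n))
Q = 0.5 * H.T @ H
q = rng.standard_normal(n)
q0 = rng.standard_normal()
lam, V = np.linalg.eigh(Q)            # columns of V: orthonormal eigenbasis

x = rng.standard_normal(n)
a = V.T @ x                           # coordinates a_i of x in the eigenbasis

print(np.isclose(x @ x, a @ a))       # ||x||^2 = sum_i a_i^2
f_direct = x @ Q @ x + q @ x + q0
f_eigen = np.sum(a**2 * lam) + a @ (V.T @ q) + q0
print(np.isclose(f_direct, f_eigen))  # the expansion above
```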

Since $x\in X,$ $\sum_{i}a_{i}^{2}\lambda_{i}\le-\sum_{i}a_{i}\left(q^{\prime}x_{i}\right)-q_{0}.$ According to the Cauchy-Schwarz inequality, $\vert q^{\prime}x_{i}\vert\le\parallel q\parallel\parallel x_{i}\parallel=\parallel q\parallel,$ so we have $$\sum_{i}a_{i}^{2}\lambda_{i}\le-\sum_{i}a_{i}\left(q^{\prime}x_{i}\right)-q_{0}\le\sum_{i}\vert a_{i}\vert\parallel q\parallel+\vert q_{0}\vert.$$
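The Cauchy-Schwarz step is worth a one-line check of its own: for any unit vector (such as each $x_i$), the inner product with $q$ is bounded by $\parallel q\parallel.$ Sketch with my own setup:

```python
import numpy as np

# Cauchy-Schwarz: |q' u| <= ||q|| * ||u|| = ||q|| for a unit vector u.
rng = np.random.default_rng(6)
n = 5
q = rng.standard_normal(n)
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                # unit vector, like the eigenbasis vectors
print(abs(q @ u) <= np.linalg.norm(q))
```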

On the other hand, if $\Lambda=\min_{i}\lambda_{i}>0,$ then $$\sum_{i}a_{i}^{2}\lambda_{i}\ge\Lambda\sum_{i}a_{i}^{2}=\Lambda M^{2}.$$ Combining these, $$\Lambda M^{2}\le\parallel q\parallel\sum_{i}\vert a_{i}\vert+\vert q_{0}\vert.\quad (1)$$
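Inequality (1) can be spot-checked numerically for a feasible point. In the sketch below (my own construction, not from the question) I take $x$ to be the ellipsoid's center — the unconstrained minimizer of $f$ — and choose $q_0$ so that the center is strictly feasible, guaranteeing $X$ is nonempty:

```python
import numpy as np

# Spot check of (1): Lambda * M^2 <= ||q|| * sum|a_i| + |q0| for x in X.
rng = np.random.default_rng(3)
n = 4
H = rng.standard_normal((n, n))
Q = 0.5 * H.T @ H
q = rng.standard_normal(n)
lam, V = np.linalg.eigh(Q)

x = np.linalg.solve(2 * Q, -q)               # center: unconstrained minimizer of f
q0 = -(x @ Q @ x + q @ x) - 1.0              # chosen so f(center) = -1 < 0
assert x @ Q @ x + q @ x + q0 <= 0           # x is feasible

a = V.T @ x
M2 = x @ x                                   # M^2 = sum_i a_i^2
Lambda = lam.min()
print(Lambda * M2 <= np.linalg.norm(q) * np.abs(a).sum() + abs(q0))
```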

Now let $A=\max_{i}\vert a_{i}\vert$ and assume without loss of generality that $\vert a_{1}\vert=A.$ Since $M^{2}=\sum_{i}a_{i}^{2},$ $A^{2}=M^{2}-\sum_{i>1}a_{i}^{2}\le M^{2}$ and so $0<A\le M.$ Meanwhile, $M^{2}=\sum_{i}a_{i}^{2}\le nA^{2},$ which implies $A\ge\frac{M}{\sqrt{n}}.$
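The two bounds on $A$ hold for any coordinate vector, so they are easy to verify directly (my own sketch):

```python
import numpy as np

# Check of M / sqrt(n) <= A <= M, where A = max_i |a_i| and M^2 = sum_i a_i^2.
rng = np.random.default_rng(4)
n = 6
a = rng.standard_normal(n)
M = np.linalg.norm(a)
A = np.abs(a).max()
print(M / np.sqrt(n) <= A <= M)      # True for any nonzero a
```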

Dividing both sides of (1) by $A,$ we have $$\Lambda M\le\Lambda\frac{M^{2}}{A}\le\parallel q\parallel\sum_{i}\frac{\vert a_{i}\vert}{A}+\frac{\vert q_{0}\vert}{A}\le\parallel q\parallel n+\frac{\vert q_{0}\vert}{A}.\quad (2)$$
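Like (1), inequality (2) can be spot-checked for a feasible point, taking $M=\parallel x\parallel$ and $A=\max_i\vert a_i\vert.$ Again the setup (center point, choice of $q_0$) is my own illustration:

```python
import numpy as np

# Spot check of (2): Lambda * M <= ||q|| * n + |q0| / A for x in X.
rng = np.random.default_rng(7)
n = 4
H = rng.standard_normal((n, n))
Q = 0.5 * H.T @ H
q = rng.standard_normal(n)
lam, V = np.linalg.eigh(Q)

x = np.linalg.solve(2 * Q, -q)        # center of the ellipsoid
q0 = -(x @ Q @ x + q @ x) - 1.0       # chosen so the center is strictly feasible

a = V.T @ x
M, A = np.linalg.norm(x), np.abs(a).max()
print(lam.min() * M <= np.linalg.norm(q) * n + abs(q0) / A)
```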

The left side of (2) grows without bound as $M\to\infty,$ while the right side stays bounded: $\parallel q\parallel n$ is constant, and since $A\ge\frac{M}{\sqrt{n}},$ the term $\frac{\vert q_{0}\vert}{A}$ shrinks toward zero as $M$ grows. So for $M$ large enough, (2) fails, which is the desired contradiction.
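To make this concrete, combining (2) with $A\ge\frac{M}{\sqrt{n}}$ gives $\Lambda M^{2}-n\parallel q\parallel M-\sqrt{n}\vert q_{0}\vert\le 0,$ i.e., an explicit radius that must contain all of $X.$ The quadratic-formula algebra and the code below are my own illustration, not from the original question; feasibility of the construction is forced by choosing $q_0$ so the center is interior:

```python
import numpy as np

# Every point of X must lie within the radius implied by (2), namely the
# positive root of Lam * M^2 - n * ||q|| * M - sqrt(n) * |q0| = 0.
rng = np.random.default_rng(5)
n = 4
H = rng.standard_normal((n, n))
Q = 0.5 * H.T @ H
q = rng.standard_normal(n)

c = np.linalg.solve(2 * Q, -q)             # center: unconstrained minimizer of f
q0 = -(c @ Q @ c + q @ c) - 1.0            # chosen so f(c) = -1 and X is nonempty
Lam = np.linalg.eigvalsh(Q).min()
nq = np.linalg.norm(q)
radius = (n * nq + np.sqrt((n * nq) ** 2 + 4 * Lam * np.sqrt(n) * abs(q0))) / (2 * Lam)

# Boundary points of X along random directions all fall inside that radius.
for _ in range(1000):
    d = rng.standard_normal(n)
    t = np.sqrt(1.0 / (d @ Q @ d))         # f(c + t*d) = f(c) + t^2 * d'Qd = 0
    x = c + t * d
    assert abs(x @ Q @ x + q @ x + q0) < 1e-8   # on the boundary of X
    assert np.linalg.norm(x) <= radius
```

(The cross term vanishes in $f(c+td)$ because the gradient of $f$ is zero at the center, which is what makes the boundary step size $t$ so easy to compute.)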

So, barring an error in the above, we have a mathematical proof that $X$ must be bounded. In the next post I will explore the computational side of things.