A question came up recently that can be cast in the following terms. Let $x$ and $y$ be variables (or expressions), with $y$ unrestricted in sign. How do we enforce the requirement that \[
x\le y^{+}=\max(y,0)\] (i.e., $x$ is bounded by the positive part of $y$) in a linear program?
To do so, we will need to introduce a binary variable $z$, and we will need lower ($L$) and upper ($U$) bounds on the value of $y$. Since the sign of $y$ is in doubt, presumably $L<0<U$. We add the following constraints:\begin{align*}
Lz & \le y\\
y & \le U(1-z)\\
x & \le y-Lz\\
x & \le U(1-z)\end{align*} Left to the reader as an exercise: verify that $z=0$ forces $0\le y \le U$ and $x \le y \le U$, while $z=1$ forces $L \le y \le 0$ and $x \le 0 \le y-L$. In either case $x \le \max(y,0)$, as required.
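To make the construction concrete, here is a minimal sketch of the formulation in PuLP (my choice of modeling tool; the post does not tie the formulation to any particular software). The bounds $L=-10$, $U=10$ and the test value $y=-3$ are illustrative assumptions. Fixing $y$ and maximizing $x$ reveals the largest $x$ the constraints permit, which should be $\max(y,0)$.

```python
# A sketch of the formulation, assuming the PuLP package and its bundled CBC
# solver are installed. The bounds L, U and the test value of y are
# illustrative choices, not taken from the post.
import pulp

L, U = -10, 10          # assumed bounds on y, with L < 0 < U
y_test = -3             # fix y here; expect max x = max(y_test, 0) = 0

prob = pulp.LpProblem("bound_by_positive_part", pulp.LpMaximize)
x = pulp.LpVariable("x")                       # unrestricted in sign
y = pulp.LpVariable("y", lowBound=L, upBound=U)
z = pulp.LpVariable("z", cat=pulp.LpBinary)    # z = 1 selects the y <= 0 branch

prob += x                        # objective: push x as high as the constraints allow
prob += L * z <= y               # z = 0  =>  0 <= y;   z = 1  =>  L <= y
prob += y <= U * (1 - z)         # z = 0  =>  y <= U;   z = 1  =>  y <= 0
prob += x <= y - L * z           # z = 0  =>  x <= y;   z = 1  =>  x <= y - L
prob += x <= U * (1 - z)         # z = 0  =>  x <= U;   z = 1  =>  x <= 0
prob += y == y_test              # pin y so we can read off the implied bound on x

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(x))             # expected: 0.0, i.e. max(y_test, 0)
```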
We are using the binary variable $z$ to select one of two portions of the feasible region. Since feasible regions in mathematical programs need to be closed, it is frequently the case that the two regions will share some portion of their boundaries, and it is worth checking that at such points (where the value of the binary variable is uncertain) the formulation holds up. In this case, the regions correspond to $y\le 0$ and $y\ge 0$, the boundary is $y=0$, and it is comforting to note that if $y=0$ then $x\le 0$ regardless of the value of $z$.
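As a quick sanity check of both the general claim and the boundary case, the following plain-Python snippet (no solver required; the bounds and grid step are again illustrative assumptions) verifies that at $y=0$ both choices of $z$ are feasible and both force $x\le 0$, and that over a grid of $y$ values the largest feasible $x$ equals $\max(y,0)$.

```python
# Brute-force check of the formulation on a grid; L, U and the step size are
# illustrative assumptions, not from the post.
L, U = -10.0, 10.0
step = 0.5

def feasible(x, y, z):
    """True if (x, y, z) satisfies the four linking constraints."""
    return (L * z <= y <= U * (1 - z)
            and x <= y - L * z
            and x <= U * (1 - z))

grid = [L + i * step for i in range(int((U - L) / step) + 1)]

# At the boundary y = 0, both z = 0 and z = 1 should be feasible,
# and both should force x <= 0.
for z in (0, 1):
    assert feasible(0.0, 0.0, z)
    assert not feasible(step, 0.0, z)

# More generally, the largest feasible x for each y on the grid should
# equal the positive part max(y, 0).
for y in grid:
    best = max(x for z in (0, 1) for x in grid if feasible(x, y, z))
    assert abs(best - max(y, 0.0)) < 1e-9, (y, best)

print("formulation check passed on the grid")
```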