The key to modeling this is recognizing that every point on the graph is a convex combination of two adjacent endpoints. (The endpoints are the ones represented by dots on the graph.) There are nine endpoints on the graph, which we will designate as $(x_1,y_1),\dots,(x_9,y_9)$, as shown in the second sketch. For the modeling trick to work, the domain of the function must be bounded (in this case $0\le x\le x_9$).

Hopefully the reason for the subscripts on the ordinates of the horizontal segments is now apparent.

Let $y$ be the value of the function. We introduce continuous variables $\mu_1,\dots,\mu_9\in [0,1]$, which will be the weights of a convex combination of the graph points. Both $x$ and $y$ are expressed using these weights:\begin{gather*}x=\mu_1 x_1 + \dots + \mu_9 x_9\\y=\mu_1 y_1 + \dots + \mu_9 y_9\\\mu_1+\dots+\mu_9=1.\end{gather*}
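To make the convex-combination construction concrete, here is a small Python sketch. (The four breakpoints below are made up for illustration; the graph in the post has nine.)

```python
# Hypothetical breakpoints (x_i, y_i); the graph in the post has nine of these.
X = [0.0, 1.0, 2.0, 3.0]
Y = [0.0, 2.0, 2.0, 5.0]

def eval_from_weights(mu, X, Y):
    """Given convex-combination weights mu (summing to 1), recover (x, y)."""
    assert abs(sum(mu) - 1.0) < 1e-9
    x = sum(m * xi for m, xi in zip(mu, X))
    y = sum(m * yi for m, yi in zip(mu, Y))
    return x, y

# Weight 0.4 on the second breakpoint and 0.6 on the third (consecutive,
# so this pattern is SOS2-feasible): the point lands on that segment.
mu = [0.0, 0.4, 0.6, 0.0]
x, y = eval_from_weights(mu, X, Y)
# x = 0.4*1 + 0.6*2 = 1.6 and y = 0.4*2 + 0.6*2 = 2.0
```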

What makes this work is that we also make the weights $\{\mu_1,\dots,\mu_9\}$ a type 2 special ordered set (SOS2), specified in index order. This tells the solver that at most two of the weights can be nonzero and, if there are two nonzero weights, they must be consecutive in the stated order (e.g., $\mu_3=0.4,\mu_4=0.6$ is fine but $\mu_2=0.4,\mu_7=0.6$ is invalid). The SOS2 restriction essentially forces the solver to select one segment of the graph, whether horizontal, diagonal or even vertical; the weights then select one point on the segment. If $y$ is part of the objective and the chosen segment happens to be vertical, it's a safe bet that one of the weights will be 1 and the rest 0 (picking whichever endpoint of the segment better suits the objective direction).
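The SOS2 pattern itself is easy to state in code. A quick sketch, using the same example weights as above:

```python
def is_sos2(mu, tol=1e-9):
    """Check the SOS2 pattern: at most two nonzero entries, and if there
    are two, they must be consecutive in the stated order."""
    support = [i for i, m in enumerate(mu) if abs(m) > tol]
    if len(support) > 2:
        return False
    if len(support) == 2 and support[1] - support[0] != 1:
        return False
    return True

mu_ok  = [0, 0, 0.4, 0.6, 0, 0, 0, 0, 0]  # mu_3 = 0.4, mu_4 = 0.6: fine
mu_bad = [0, 0.4, 0, 0, 0, 0, 0.6, 0, 0]  # mu_2 = 0.4, mu_7 = 0.6: invalid
```

(Of course, in practice you do not check this yourself; you declare the SOS2 set and the solver enforces it via branching.)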

Although the weight variables are continuous, introducing the SOS2 constraint effectively turns the problem into a mixed integer linear program (MILP), even if it otherwise would have been a linear program (LP).

There are other ways to model this type of function. In particular, anything done with SOS1 or SOS2 constraints can be done with binary variables and linear constraints involving those binaries.
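As a sketch of that binary reformulation (a standard construction, stated here under my own choice of indexing): introduce one binary variable $z_k$ per segment, require $z_1+\dots+z_8=1$, and add $\mu_i \le z_{i-1} + z_i$ for each weight (omitting the out-of-range $z$ terms at the ends). The brute-force check below, in plain Python, shows why this works: selecting segment $k$ forces every weight other than $\mu_k$ and $\mu_{k+1}$ to zero.

```python
def sos2_via_binaries_feasible(mu, tol=1e-9):
    """Check whether some 0/1 assignment of segment indicators z_1..z_{n-1}
    with sum(z) == 1 satisfies mu_i <= z_{i-1} + z_i for all i.
    (The weights are assumed to sum to 1 in the full model.)"""
    n = len(mu)
    for k in range(n - 1):  # try z_k = 1 and all other z's = 0
        # With only z_k = 1, the bounds force mu_i = 0 for i not in {k, k+1}.
        if all(mu[i] <= tol for i in range(n) if i not in (k, k + 1)):
            return True
    return False
```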


Professor Rubin,

Nice article indeed. Thanks a lot.

You're very welcome.

Dear Paul,

If a convex continuous curve, instead of a step function, lies between two adjacent endpoints, is it possible to apply your proposed technique to model it?

Regards,

Dewan

That depends what you mean by "curve". The technique above works for any piecewise-linear function (any mix of steps and linear segments). If your "curve" is nonlinear, you can do the same things, but you are now dealing with an MINLP (mixed integer nonlinear program), and you need a solver that can solve MINLPs. I don't work with MINLPs, so I don't know which solvers are commonly used and how (or if) they handle SOS constraints.

Thanks for your quick reply.

In my case, the "curve" is a quadratic equation, which is nonlinear for sure. But the quadratic equation can be piecewise linearized. In detail, the function consists of some finite number of segments (which are known), and all the segments are quadratic. I think in this case the problem can be solved with a MILP solver. My idea is: the solver should select the optimal segment first, then the piecewise-linear technique is applied to the selected segment to find the optimal solution. Here I want to mention that the whole function, taken as a whole, is non-convex and non-differentiable, but each segment individually is convex. I am afraid the model could be full of SOS variables. Can I apply your proposed technique to model it?

When you say "the quadratic equation can be piecewise linearized", you mean the function (or each quadratic segment of the function) can be *approximated* by a piecewise-linear function, right? If so, then yes, you can use SOS2 to convert the approximate problem into a MILP. When you say "solver should select the optimal segment at first, then piecewise linear technique is applied to the selected segment to find the optimal solution" you lose me. If you mean select the optimal *quadratic* segment, no; that would require a MINLP solver. If you are talking about first doing a pw-linear approximation, then solving, then refining the approximation near the "optimal" solution and solving again, you can try that. It likely will produce a good answer, but I'm not at all sure you can prove that it converges to an optimal solution to the original (nonlinear, nonconvex) problem.
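A minimal sketch of the approximate-then-refine idea, using $y=x^2$ as a stand-in for one quadratic segment (the interval and breakpoint counts below are chosen arbitrarily). The interpolation step is exactly the convex combination of two adjacent breakpoints that SOS2 enforces, and the approximation error shrinks as breakpoints are added:

```python
def pwl_approx(f, a, b, n):
    """Sample n+1 equally spaced breakpoints of f on [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    return xs, ys

def pwl_eval(xs, ys, x):
    """Evaluate the piecewise-linear interpolant: find the bracketing segment,
    then take the convex combination of its two endpoints (the SOS2 pattern)."""
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            t = (x - xs[k]) / (xs[k + 1] - xs[k])
            return (1 - t) * ys[k] + t * ys[k + 1]
    raise ValueError("x outside the breakpoint range")

f = lambda x: x * x                      # stand-in quadratic segment
coarse = pwl_approx(f, 0.0, 2.0, 4)      # 5 breakpoints
fine   = pwl_approx(f, 0.0, 2.0, 16)     # 17 breakpoints
x0 = 0.7
err_coarse = abs(pwl_eval(*coarse, x0) - f(x0))
err_fine   = abs(pwl_eval(*fine,   x0) - f(x0))
```

Whether refining only near the incumbent solution converges to a true optimum of the original nonconvex problem is, as noted above, a separate question.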

Thank you again. Yes, I will give it a try: first model the piecewise-linear approximation, then solve it to find an optimal solution of the approximated function.
