Someone posted a nonconvex nonlinear optimization model on OR Stack Exchange and asked for advice about possible reformulations, piecewise-linear approximations, the use of global optimizers, and other options. Stripped of details, the model contains a vector of decision variables (which I will call $x$) together with two scalar variables ($y$ and $z$); all other symbols are model parameters (or indexes). The author originally made $x$ a vector of binary variables, apparently believing that would facilitate linearizing products, but also expressed interest in the case where $x$ is continuous. I'm going to propose a possible "low-tech" solution procedure for the continuous case. (The binary case is probably a bit too tough for me.) The author supplied sample data of modest dimensions for all of the parameters but one, while expressing a desire to solve the model at considerably larger dimensions (gulp).
Note that the left-hand sides (LHSes) of the three constraints are identical. Let $f$ be the function on the right-hand side (RHS) of constraint (1), so that the RHS of (1) is $f(y)$; $f$ is monotonically decreasing. Write the RHS of (3) as $g(z)$. Since the left sides are equal, we must have $$f(y) = g(z), \qquad (4)$$ which tells us several things. First, $f(y) \le f(0)$ for all $y \ge 0$, so if $g(z) > f(0)$ for every feasible $z$ it is impossible to satisfy (4). Second, if $g(z) = f(0)$, (4) implies that $y = 0$, which simplifies the problem a bit. Lastly, let's assume $g(z) < f(0)$. For fixed $z$, the LHS of (4) is monotonically decreasing in $y$, with the LHS greater than the RHS when $y = 0$ and with limit $f(\infty) = \lim_{y \to \infty} f(y)$. If $f(\infty) \ge g(z)$, there is no $y$ that can balance equation (4), and so that value of $z$ is infeasible in the model. If $f(\infty) < g(z)$, then there is exactly one value of $y$ for which (4) holds, and we can find it via line search.
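To make the inner line search concrete, here is a minimal R sketch. The actual $f$ and $g$ come from the model in the original post; the versions below are invented stand-ins (any decreasing $f$ will do), and I use R's built-in uniroot() rather than a hand-rolled search.

```r
# Invented stand-ins for the RHS functions of constraints (1) and (3).
# The real f and g come from the model; all that matters here is that
# f is monotonically decreasing.
f <- function(y) 10 * exp(-y)  # placeholder, decreasing in y
g <- function(z) 0.2 * z       # placeholder

# Solve equation (4), f(y) = g(z), for y at a fixed z. Returns NA when no
# root exists in [0, y.max], i.e., when this value of z is infeasible.
solve_y <- function(z, y.max = 100) {
  h <- function(y) f(y) - g(z)
  if (h(0) <= 0 || h(y.max) >= 0) return(NA)  # no sign change, no root
  uniroot(h, c(0, y.max))$root
}
```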
Next, suppose that we have a candidate value for $z$ and have found the unique corresponding value of $y$ by solving (4). We just need to find a vector $x$ that satisfies (1) and (2); constraint (3) will automatically hold if (1) does, given (4). We can find $x$ by solving a linear program that minimizes 0 (a constant objective, since we only need feasibility) subject to (1), (2) and the bounds $0 \le x \le 1$.
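The feasibility check might look something like the following OMPR sketch. Since I am not reproducing the original constraints here, the common LHS is replaced by a hypothetical linear form $a^\top x$ and the RHS of (2) by an arbitrary constant; only the OMPR/ROI/CPLEX plumbing is meant literally.

```r
library(magrittr)         # for the %>% pipe
library(ompr)
library(ompr.roi)
library(ROI.plugin.cplex)

n <- 10        # placeholder dimension
a <- runif(n)  # hypothetical coefficients standing in for the common LHS
b2 <- 4        # hypothetical RHS for constraint (2)

# Is there an x in [0,1]^n satisfying (1) and (2) for fixed y (and z)?
# We minimize a constant, so any feasible x is "optimal".
feasible_x <- function(y, z) {
  m <- MIPModel() %>%
    add_variable(x[j], j = 1:n, lb = 0, ub = 1) %>%            # continuous x
    set_objective(0, "min") %>%                                # feasibility only
    add_constraint(sum_over(a[j] * x[j], j = 1:n) == f(y)) %>% # stand-in for (1)
    add_constraint(sum_over(a[j] * x[j], j = 1:n) <= b2)       # stand-in for (2)
  res <- solve_model(m, with_ROI(solver = "cplex"))
  solver_status(res) == "optimal"
}
```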
Thus, we have basically turned the problem into a line search on $z$. Let's set an arbitrary upper limit $U$ for both $y$ and $z$, so that our initial "interval of uncertainty" for $z$ is $[0, U]$. It's entirely possible that neither 0 nor $U$ is a feasible value for $z$, so our first task is to search upward from 0 until we find a feasible value for $z$ (call it $z_{\min}$), then downward from $U$ until we find a feasible value (call it $z_{\max}$). After that, we cross our fingers and hope that every $z \in [z_{\min}, z_{\max}]$ is feasible. I think this is true, although I do not have a proof. (I'm much less confident that it would remain true if we required $x$ to be binary.) Since $y$ is a function of $z$ and the objective function does not contain $x$, we can treat the objective as a function of $z$ alone and search for a local optimum (for instance, by golden section search), hoping that the objective is unimodal in $z$, so that the local optimum is a global optimum. (Again, I do not have a proof, although I would not be surprised if it were true.)
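Here is a rough sketch of the outer loop, reusing solve_y() and feasible_x() from the earlier snippets, with an invented placeholder objective obj(). A serious implementation would reuse one function evaluation per golden section step (each evaluation costs a root find and an LP solve), but the logic is the same.

```r
U <- 100  # arbitrary upper limit on z

# z is feasible when (4) has a root y and the LP in x is feasible.
is_feasible <- function(z) {
  y <- solve_y(z)
  !is.na(y) && feasible_x(y, z)
}

# Shrink [0, U] from both ends to feasible endpoints z.min and z.max.
step <- U / 100
z.min <- 0
while (z.min <= U && !is_feasible(z.min)) z.min <- z.min + step
z.max <- U
while (z.max >= z.min && !is_feasible(z.max)) z.max <- z.max - step
if (z.min > z.max) stop("no feasible z found")

obj <- function(y, z) (y - 3)^2 + z  # invented objective, for illustration

# Golden section search for a minimum of h(z) = obj(y(z), z) on
# [z.min, z.max], crossing fingers that h is unimodal there.
golden <- function(h, lo, hi, tol = 1e-6) {
  phi <- (sqrt(5) - 1) / 2
  while (hi - lo > tol) {
    a <- hi - phi * (hi - lo)  # left interior point
    b <- lo + phi * (hi - lo)  # right interior point
    if (h(a) < h(b)) hi <- b else lo <- a
  }
  (lo + hi) / 2
}
z.star <- golden(function(z) obj(solve_y(z), z), z.min, z.max)
y.star <- solve_y(z.star)
```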
I put this to the test with an R script, using the author's example data. The linear programs were solved by CPLEX, with the model expressed via the OMPR package for R and with ROI serving as the intermediary between OMPR and CPLEX. I first concocted an arbitrary feasible solution and used it to compute the one parameter for which no sample data had been supplied, so that I could be sure the problem was feasible with my choice of that parameter. Using 100 (arbitrarily chosen) as the initial upper bound for $y$ and $z$, my code produced an "optimal" solution. There is a bit of rounding error involved: the common LHS of (1)-(3) evaluated to 126.6189, while the three RHSes were 126.6186, 126.6188 and 126.6186. (In my graduate student days, our characterization of this would be "good enough for government work.") Excluding the loading of libraries, the entire process took under three seconds on my desktop computer.
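For completeness, here is a schematic of that feasibility-insurance step under the same invented placeholder model from the sketches above. The unsupplied parameter (called theta here) and the assumed form $g(z) = \theta z$ are mine, not the original author's; this would be done once, before the line search starts.

```r
# Concoct a solution (x0, y0, z0), then back out the one unsupplied
# parameter so the instance is guaranteed feasible. All names invented.
set.seed(42)
x0 <- runif(n)                       # arbitrary x in [0,1]^n
lhs0 <- sum(a * x0)                  # common LHS at x0 (placeholder form)
y0 <- uniroot(function(y) f(y) - lhs0, c(0, 100))$root  # force (1) to hold
z0 <- 2                              # arbitrary choice
theta <- lhs0 / z0                   # e.g., if the RHS of (3) were theta * z
g <- function(z) theta * z           # redefine g so that (3) holds at z0
```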
You can access my R code from my web site. It is in the form of an R notebook (with the code embedded), so even if you are not fluent in R, you can at least read the "prose" portions and see some of the nagging details involved.