- use a piecewise-linear function based on secants;
- use a piecewise-linear function based on tangents;
- use a surrogate variable that is bounded (below if the function is convex, above if the function is concave) by a collection of linear functions derived from tangents; or
- use the third technique plus a callback that adds additional linear tangent functions whenever a candidate solution underestimates (convex case) or overestimates (concave case) the true value of the function.
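To make the secant and tangent constructions in that list concrete, here is a small sketch (in Python, purely for illustration; the helper names are mine and not from any library) of how the slope and intercept of a secant or tangent to the reciprocal function $g(x)=1/x$ might be computed:

```python
def secant(g, a, b):
    """Slope and intercept of the secant to g between x = a and x = b."""
    slope = (g(b) - g(a)) / (b - a)
    return slope, g(a) - slope * a


def tangent(g, dg, x0):
    """Slope and intercept of the tangent to g at x = x0 (dg is g's derivative)."""
    slope = dg(x0)
    return slope, g(x0) - slope * x0


# The reciprocal function and its derivative.
g = lambda x: 1.0 / x
dg = lambda x: -1.0 / x ** 2

# For a convex function like 1/x, secants lie above the curve and tangents
# lie below it, which is what makes the bracketing observation below work.
print(secant(g, 1.0, 10.0))   # (-0.1, 1.1)
print(tangent(g, dg, 5.0))    # (-0.04, 0.4)
```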
Before going on, I want to make two observations that I should have made in the first post. First, for the specific cases I just mentioned, if you solve the problem twice, once using secants and once using tangents, the true optimal objective value will fall between the values of the two solutions, so you will have an idea of how close to optimal the possibly suboptimal solution is. (I'll illustrate this below.) For nonlinear functions in equality constraints, or for functions that are neither convex nor concave, this would not work. Second, if the argument to the nonlinear function is integer-valued, it makes sense to construct a piecewise-linear function (first two options) with integer-valued break points or to construct tangents (third option) at integer-valued points. That way, you are guaranteed correct function values at least at some points in the domain of the function. This is easy to do with secants but considerably more work with tangents.
I have one more observation to make before getting to an example. In the fourth method I listed, with a convex function in a $\le$ constraint or a concave function in a $\ge$ constraint, if the solver finishes the search with an "optimal" solution, the solution will really be optimal. These are the cases where we would normally risk superoptimality, but the callback will prevent that from happening.
At this point, I'm going to present an example of one possible scenario. The problem is to select repeating order quantities for a collection of products. In the model to follow, capital letters will be parameters and lower case letters will be indices or variables. We start with $N$ products. For each product $i$, we know the annual demand ($D_i$), the unit price ($P_i$), the unit annual holding cost ($H_i$), the cost to place an order for the product ($S_i$, regardless of how much is being ordered), and the unit volume ($V_i$, the amount of storage space one unit occupies). In addition, we know the total storage capacity $C$ of the warehouse where the products will be stored. We will somewhat laughably assume that everything is deterministic.
Let $q_i$ denote the quantity of product $i$ ordered each time an order is placed, and $f_i$ the frequency (number of orders per year) with which product $i$ is replenished. The total annual cost, to be minimized, is $$\sum_{i=1}^N \left[P_iD_i + H_i\frac{q_i}{2} + S_i f_i \right],\tag{1}$$where the first term is the total cost of purchasing products (which is constant), the second term is the total cost of storing them (based on the average inventory level, which is half the order size), and the last term is the total cost for placing orders.
The nonlinear function in this problem is the one relating order quantity to order frequency:$$f_i = \frac{D_i}{q_i}\quad \forall i=1,\dots,N.\tag{2}$$For a single product, it would be easy to substitute out $f_i$ from the objective, leaving a function of just $q_i$, and then differentiate. The first order optimality condition leads to the well known economic order quantity (EOQ) formula$$q_i^* = \sqrt{\frac{2D_iS_i}{H_i}}.$$The catch here is that ordering the EOQ for every item might exceed our storage space. So we resort to a mixed-integer program, minimizing (1) subject to $$\frac{1}{2}\sum_{i=1}^N V_i q_i \le C \tag{3}$$with $$q_i\in \lbrace 1,\dots,D_i\rbrace\quad \forall i\in\lbrace 1,\dots,N\rbrace$$ and $$f_i \in\left[ 1,D_i\right]\quad \forall i\in\lbrace 1,\dots,N\rbrace ,$$plus some constraint(s) to connect $f_i$ to $q_i$. It's worth pausing here to note that at most one of $f_i$ and $q_i$ needs to be discrete. Here I am assuming that order quantities must be integers (we are ordering something like appliances, where a third of a washing machine is not a meaningful concept) but that order frequencies need not be integers (2.5 orders per year just means two orders one year, three the next, and so on).
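(In case the derivation of the EOQ formula above is not fresh in your mind: dropping the space constraint and substituting (2) into (1) for a single product gives the cost $P_iD_i + H_i\frac{q_i}{2} + S_i\frac{D_i}{q_i}$, and setting its derivative with respect to $q_i$ to zero yields $$\frac{H_i}{2} - \frac{S_iD_i}{q_i^2} = 0 \implies q_i^* = \sqrt{\frac{2D_iS_i}{H_i}}.)$$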
What is left is to pick one of the methods for approximating the relationship between quantity and frequency, equation (2). For methods 1 and 2, we can add to (1) and (3) the constraint $$f_i = \ell_i(q_i)\quad\forall i\in\lbrace 1,\dots,N\rbrace,\tag{4}$$where $\ell_i()$ is a piecewise linear function derived from either tangents or secants to the reciprocal function $g(x)=1/x$. For method 3, we can instead compute $M_i$ tangent functions $\ell_{ij}()$, $j\in\lbrace 1,\dots,M_i\rbrace$, for each $i$ and add the constraints $$f_i \ge \ell_{ij}(q_i) \quad\forall i\in\lbrace 1,\dots,N\rbrace,\,\forall j \in\lbrace 1,\dots,M_i\rbrace.\tag{4'}$$Note that this may underestimate $f_i$ (and thus the cost of a solution) but will not affect feasibility (since the space constraint (3) does not contain $f_i$). Finally, for the fourth method, we can minimize (1) subject to (3) and (4') and also use a callback to add more constraints like (4') on the fly.
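To be explicit about what those tangent functions look like: if $\ell_{ij}()$ is tangent to $D_i/q$ at a point $q = t_{ij} > 0$ (the notation $t_{ij}$ is just mine for whichever tangent points you choose), then $$\ell_{ij}(q_i) = \frac{D_i}{t_{ij}} - \frac{D_i}{t_{ij}^2}\left(q_i - t_{ij}\right) = \frac{2D_i}{t_{ij}} - \frac{D_i}{t_{ij}^2}q_i,$$so each constraint in (4') is just a linear inequality in $f_i$ and $q_i$.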
I tried all four methods, using CPLEX 12.10, on a small test problem ($N=10$ products). I used a constant holding cost rate ($H_i = 0.2$) for all products, and set the space limit to $C=2136.41$. (Yes, it looks goofy, but I used random numbers to generate the problem.) The product level data was as follows:
Product | Unit Cost ($P_i$) | Order Cost ($S_i$) | Demand ($D_i$) | Size ($V_i$)
---|---|---|---|---
0 | 4.04 | 2.54 | 1706 | 1.57
1 | 2.37 | 2.55 | 1203 | 4.68
2 | 3.82 | 2.01 | 1206 | 1.56
3 | 2.92 | 2.52 | 1373 | 2.00
4 | 1.37 | 3.38 | 1029 | 4.79
5 | 2.56 | 2.52 | 1700 | 2.91
6 | 4.55 | 3.12 | 1314 | 4.28
7 | 1.07 | 3.18 | 1223 | 4.06
8 | 3.64 | 3.74 | 1916 | 3.32
9 | 1.97 | 2.13 | 630 | 2.52
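Before getting to the results, here is a rough sketch of how method 3 might be set up using the DOcplex Python API. This is just an illustration, not the code from the repository linked at the end of the post; in particular, the choice of tangent points (evenly spaced over $[1, D_i]$) and the treatment of the holding cost (taking $H_i = 0.2$ literally) are my own reading of the setup.

```python
# A sketch of method 3 (static tangent cuts only) with the data from the table above.
from docplex.mp.model import Model

P = [4.04, 2.37, 3.82, 2.92, 1.37, 2.56, 4.55, 1.07, 3.64, 1.97]   # unit cost
S = [2.54, 2.55, 2.01, 2.52, 3.38, 2.52, 3.12, 3.18, 3.74, 2.13]   # order cost
D = [1706, 1203, 1206, 1373, 1029, 1700, 1314, 1223, 1916, 630]    # annual demand
V = [1.57, 4.68, 1.56, 2.00, 4.79, 2.91, 4.28, 4.06, 3.32, 2.52]   # unit volume
H = 0.2          # holding cost (taken literally; if it is a rate on price, use 0.2 * P[i])
C = 2136.41      # warehouse capacity
N = len(D)
NTANGENTS = 10   # tangents at 10 evenly spaced points, as described below

mdl = Model(name="eoq_tangents")
q = [mdl.integer_var(lb=1, ub=D[i], name=f"q_{i}") for i in range(N)]      # order quantities
f = [mdl.continuous_var(lb=1, ub=D[i], name=f"f_{i}") for i in range(N)]   # order frequencies

# Space constraint (3).
mdl.add_constraint(0.5 * mdl.sum(V[i] * q[i] for i in range(N)) <= C)

# Tangent cuts (4'): f_i >= 2 D_i / t - (D_i / t^2) q_i at each tangent point t.
for i in range(N):
    for k in range(NTANGENTS):
        t = 1 + k * (D[i] - 1) / (NTANGENTS - 1)
        mdl.add_constraint(f[i] >= 2 * D[i] / t - (D[i] / t ** 2) * q[i])

# Objective (1): purchasing + holding + ordering cost.
mdl.minimize(mdl.sum(P[i] * D[i] + H * q[i] / 2 + S[i] * f[i] for i in range(N)))

if mdl.solve():
    for i in range(N):
        qv = q[i].solution_value
        print(i, qv, f[i].solution_value, D[i] / qv)   # nominal vs. actual frequency
```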
For method 1, I used 10 evenly spaced breakpoints (so nine chords). For the other methods, I computed tangents at 10 evenly spaced points. Of course, more breakpoints or tangents would make for a more accurate approximation. The objective value for each of the four methods is as follows:
Method | Nominal | Actual
---|---|---
1 | 672.48 | 670.05
2 | 490.27 | 28946.48
3 | 490.27 | 28946.48
4 | 633.40 | 633.53
Here "nominal" is the objective value reported by CPLEX and "actual" is the actual cost, using the order quantities from the CPLEX solution but recalculating the order frequencies according to (2). Method 1, based on secants, overestimates frequency (and thus cost) slightly. Methods 2 and 3 massively underestimate some frequencies, and thus the overall cost. The reason is apparent from the next table, which shows the nominal and actual order frequencies for each product:
Product | Demand | Quantity | Actual Freq | Nominal Freq
---|---|---|---|---
0 | 1706 | 1 | 1706.00 | 19.89 |
1 | 1203 | 1 | 1203.00 | 19.80 |
2 | 1206 | 1 | 1206.00 | 19.85 |
3 | 1373 | 1 | 1373.00 | 19.83 |
4 | 1029 | 137 | 7.51 | 6.69 |
5 | 1700 | 1 | 1700.00 | 19.82 |
6 | 1314 | 1 | 1314.00 | 19.83 |
7 | 1223 | 164 | 7.46 | 6.64 |
8 | 1916 | 1 | 1916.00 | 19.91 |
9 | 630 | 85 | 7.41 | 6.61 |
For products with nontrivial order quantities, frequencies are underestimated a bit, but for products with order quantity 1, frequencies are massively underestimated (which is what attracts the solver to using such a small order quantity). Basically, the piecewise-linear approximation of the reciprocal relation (2) stinks at the low end of the quantity range, because the curve is steep and none of the tangents are close enough to that end of the quantity domain. This could be improved by forcing the piecewise-linear functions in method 2 to have a breakpoint at $q_i=1$ or by including a tangent function $\ell_{ij}()$ calculated at $q_i=1$ in method 3. Still, to get a reasonable approximation you might need to add a bunch of tangents at that end of the curve.
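(For what it's worth, the tangent cut at $q_i=1$ is $f_i \ge 2D_i - D_iq_i$, which is exact at $q_i=1$, where it forces $f_i \ge D_i$; adding it would at least rule out the nominal frequencies around 20 seen above for products ordered one unit at a time.)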
Method 4 produces a solution that is actually optimal (to within convergence tolerances). The callback ignored discrepancies between nominal and actual order frequency below 0.01. Cost is very slightly underestimated, which I think could be fixed by making that 0.01 tolerance even smaller; doing so, however, might cause the program to generate a huge number of tangents in the callback, slowing it down considerably. As it was, the callback added 392 tangents to the initial set of 10 tangents during the solution run.
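For the curious, the heart of the callback is just a separation routine. Here is a rough sketch of that logic in Python (the names are mine, and the wiring into CPLEX's lazy constraint callback mechanism, which depends on which API you use, is omitted):

```python
TOL = 0.01  # ignore nominal-vs-actual frequency discrepancies below this, as in the run above

def violated_tangent_cuts(q_vals, f_vals, D, tol=TOL):
    """Given a candidate solution (q_vals, f_vals), return new tangent cuts
    for every product whose nominal frequency understates D_i / q_i.
    Each cut is returned as (product index, slope, intercept), meaning
    f_i >= intercept + slope * q_i."""
    cuts = []
    for i, (qv, fv) in enumerate(zip(q_vals, f_vals)):
        true_freq = D[i] / qv
        if true_freq - fv > tol:
            t = qv  # add the tangent at the candidate order quantity
            cuts.append((i, -D[i] / t ** 2, 2 * D[i] / t))
    return cuts
```

Each returned cut is a constraint of the form (4'), tangent at the candidate order quantity, so adding it cuts off the current superoptimal candidate without excluding any solution that actually satisfies (2).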
I mentioned earlier that using both tangents and secants (in separate runs) would bracket the true optimal value in cases of convex (concave) functions in $\le$ ($\ge$) constraints. Here the true optimal cost (around 633.5) is indeed below the nominal cost of the secant approach (672.48) and above the nominal cost of the tangent approach (490.27). Note that we have to use the nominal costs, which in the case of the tangent approach is at once horribly inaccurate for the solution produced and yet still a valid lower bound on the true optimal cost.
If you would like to look at or play with my code (add breakpoints or tangents, add products, make the frequency rather than the order quantity discrete), you can find it at https://gitlab.msu.edu/orobworld/secants.