Comments on "OR in an OB World" (blog by Paul A. Rubin; comment feed updated 2020-02-18)

hakank (2020-02-18 16:35):
You are - of course - correct. I had misplaced the Eq. 5 constraints; now there is no underconstrained w. Great!

And it's quite a bit faster now:
- CP: 0.07s
- SAT: 0.38s
- CBC: 10.4s

Compare with the original CP/SAT model, where the CP solver took 0.4s for the same problem.

Tomorrow I will test the MIP model on harder problems and also create a MiniZinc model.

Paul A. Rubin (2020-02-18 14:50):
I don't understand how you would be getting multiple solutions for $w$ for the same values of $z$. Constraints (2) through (4) create an equivalence between $w_{ij}$ and $z_i \wedge z_j$.

hakank (2020-02-18 14:46):
Yes, the two constraints (z[1] = 1 and z[m+1] = 1) help both the CP model and the SAT model. All constraints that prune the search tree - here, fixing two values in z - are good for CP. :-) The MIP solver (CBC) is also significantly faster with these constraints than without, though much slower than CP and SAT.

For the CP model, the time with these constraints is about 0.10s and without them about 0.16s.

One thing I noted - in the Picat model - was that the w matrix is underconstrained, and it generated a lot of different solutions for w for the same value of z. I tried to fix this in the (MIP) model but didn't succeed. Perhaps CPLEX doesn't care about this in the model?

(The Picat-specific trick that fixed this was to ignore w when calling the solver.)

Paul A. Rubin (2020-02-18 13:48):
In the MIP model, constraint (6) for the case $d=m$ implies $w_{1,m+1}=1$, which in turn implies via (2) and (3) that $z_1 = z_{m+1} = 1$. So I'm not sure why constraint (7) helps (maybe it doesn't, and it's just random luck), and $z_{m+1}=1$ similarly should not help much. Does it help with the CP model?
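[Editor's aside: Paul's point that constraints (2)-(4) make $w_{ij}$ equivalent to $z_i \wedge z_j$ can be verified by brute force. The snippet below is an illustrative sketch (not taken from either model; it uses the standard AND linearization $w \le z_i$, $w \le z_j$, $w \ge z_i + z_j - 1$) and confirms that those three inequalities leave exactly one feasible $w$ for every $(z_i, z_j)$ pair:]

```python
from itertools import product

def feasible_w(zi, zj):
    """All binary w satisfying the standard AND linearization:
    w <= zi, w <= zj, w >= zi + zj - 1."""
    return [w for w in (0, 1) if w <= zi and w <= zj and w >= zi + zj - 1]

# For every (zi, zj), the three inequalities pin w down to zi AND zj,
# so w cannot vary for a fixed z.
for zi, zj in product((0, 1), repeat=2):
    ws = feasible_w(zi, zj)
    assert ws == [zi & zj], (zi, zj, ws)
print("w is uniquely determined for all four (zi, zj) combinations")
```

[If multiple w values do appear for the same z, at least one of the three inequalities must be missing or misplaced - consistent with hakank's finding about the misplaced Eq. 5 constraints.]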
hakank (2020-02-18 12:48):
Thanks for the blog post and MIP model, Paul!

I converted it to a Picat model: hakank.org/picat/linear_combinations_mip.pi The CBC solver took a long time to get all solutions, the SAT solver took 1s, but the CP solver was quite fast: it finished in about 0.1s.

The original CP model takes about 40ms to solve this problem, so it's faster.

Also, I added the constraint Z[M+1] #= 1, since we know that both 1 and max(Diffs) + 1 must be in the (normalized) solution.

Paul A. Rubin (2020-01-30 15:40):
If the M values are "reasonably small", you do not need to use the combinatorial form of Benders from Codato & Fischetti; you can use the original form of Benders. That should eliminate your concern about putting the original objective function into the subproblem in constraint form. As for decomposing so as to reduce the number of binary variables in any one problem, I don't see an easy fix. You could try asking on OR Stack Exchange (a link to which is in the right margin, near the top of the post).

Anonymous (2020-01-30 15:10):
Thank you, Dr. Rubin, for the response. As for the first criterion you mentioned, the model is too large, with 7 million binary variables for an instance; the single model does exceed the solver's memory capacity. However, I cannot find an intuitive way to decompose the model.
Here is the model, using the notation from the Codato and Fischetti paper on combinatorial Benders decomposition:

    min  d^T y
    s.t. Ax <= a
         Bz <= b
         Gz + Hx  = g
         Cx + Dy >= c
         Ez + Fy >= e
         x, z binary, y >= 0

As suggested by Codato et al., I could separate the part with continuous variables and disjunctive constraints into a subproblem. That would still leave three issues. First, the master problem would still contain a large number of binary variables, and it may be difficult to get an integer solution from a solver; in particular, the constraints in Bz <= b constitute a VRP with multiple precedence and coordination constraints. Second, the objective function would need to be introduced into the slave problem as a constraint of the form d^T y <= UB - eps. That raises the issue of finding a sufficiently small value of the parameter eps that does not cut off any feasible master problem solution; I have observed multiple integer feasible solutions with objectives very close to UB. The final issue relates to the CB cuts added to the master problem in each iteration. Cutting one solution at a time can introduce a large number of constraints into the master. The master involves, for instance, choosing 100 "x" variables to be 1 out of 6 million; I am afraid the number of integer feasible solutions in the master could be too large to cut off with CB cuts.

Regarding the second criterion, the big M is not "quite infinity" and can be bounded by a reasonably small value. However, solving the problem as a single model still does not work for large instances.

Paul A. Rubin (2020-01-30 13:53):
In general, I do not know of any way to look at a problem and decide whether Benders is the "right" way to model it.
Generally speaking, if there were only one subproblem and a single model would not exceed the solver's memory capacity, I would probably try a single model first. If the problem decomposes into several or many subproblems, I might try Benders first.

You mentioned using big-M constraints, which to me is a special case. If I know valid choices for the M parameters that are "reasonably small", I would be inclined to try a single model. On the other hand, if I find myself using "not quite infinity" values for M in order to be sure they are valid, I would definitely try Benders, since it gets rid of the M values.

The bottom line, though, is that these are just personal tendencies. I don't think there is a way to decide with any high degree of certainty just by "looking" at the model.
Zulqarnain (2020-01-29 19:35):
Dear Dr. Rubin,

I am thinking about using BD for a MIP model. It has two types of binary variables, connected to each other through a constraint. The model also has continuous variables (part of disjunctive constraints, modeled with big M) which in turn depend on both sets of binary variables. The objective function minimizes a function of the continuous variables. Is there a way to find out beforehand whether BD will be a good approach to try? The number of integer variables in the model runs into the tens of millions. An alternative question: are there problems for which BD does not work well? How can one answer that question without lengthy experiments, just by "looking" at a MIP model?

Paul A. Rubin (2020-01-07 17:39):
Ed: Thanks for the links. I agree with your point about expressing constraints with integers rather than fractions (or their decimal equivalents) where practical.

Ed Klotz (2020-01-07 16:33):
I landed on this blog post today while trying to answer the same question. Paul's September 23 post is correct that different numerical bases (not the plural of basis for the simplex method) lose precision on different numbers (e.g., 1/3 cannot be stored exactly in the base-2 arithmetic used on most of today's computers).
And yes, it is possible that losing just the last bit of the mantissa is enough to break a tie in the algorithm's decisions, leading to alternate optimal bases (the simplex kind), which in turn lead to alternate branching selections when solving MIPs, and then all sorts of performance variability. This is one reason I recommend replacing fractions involving simple ratios of integers with integers during model formulation whenever you can. For example, instead of representing 1/3 x + 2/3 y = 1 with decimal representations of 1/3 that lose precision around the 16th decimal place in a base-2 representation, multiply the constraint by 3 to get x + 2y = 3, and now all your data is in terms of sums of powers of 2 and represented precisely.

Given that I'm writing on a thread that is almost 10 years old, it may seem like nothing has changed, but that is not really the case. Progress has been made in understanding performance variability in mixed integer programming (e.g., see http://www.or.deis.unibo.it/andrea/Lodi-MIC2011-nopause.pdf). CPLEX now has a runseeds command that allows you to assess the level of variability in your MIP model. Also, since 2010 I have written a detailed paper on the numerical aspects of LP and MIP solvers that can be found on the INFORMS web site here: https://pubsonline.informs.org/doi/10.1287/educ.2014.0130. And if you don't have the patience for a lengthy paper on numerical epsilons and other nitty-gritty details, you can try a shorter PowerPoint version here: http://yetanothermathprogrammingconsultant.blogspot.com/2015/03/mip-links.html

Paul A. Rubin (2019-12-27 09:16):
You're welcome.

Anonymous (2019-12-27 02:35):
I was not aware of the blog post you linked to, which addresses my issue pretty much directly. I am very grateful for your time and for your responses.

Paul A. Rubin (2019-12-26 11:08):
I reviewed both the original approach and the callback approach in an even older post (https://orinanobworld.blogspot.com/2011/10/benders-decomposition-then-and-now.html). You're correct that there is no guarantee the callback method is faster than solve-cut-repeat, although my gut feeling is that it usually is. FWIW, I always use the callback method.

As for combining them, there are a couple of reasons why holding off until the optimal leaf node has been found won't work. The main one is that by the time you conclude node A is "optimal" (because the best bound has converged to the value at that node), a different node (B) containing the true optimal solution will have been pruned, because the bound at B is worse than the bound at A. This is not an issue in the original Benders method, because you rebuild the tree after adding a cut, but it would be a problem (catastrophe?) if you were trying to reuse the current tree. It would require the solver to retain pruned nodes, which would eat much more memory than is currently consumed. In any case, CPLEX will not do this.
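[Editor's aside: Ed Klotz's base-2 point above is easy to see directly. The snippet below is a standalone illustration, not tied to any solver: it shows that 1/3 has no exact binary double representation, while the integer-rescaled version of his example constraint involves no rounding at all.]

```python
from fractions import Fraction

# 1/3 cannot be stored exactly as a base-2 double: the stored value
# differs from the true rational number 1/3.
assert Fraction(1 / 3) != Fraction(1, 3)

# The classic symptom of the same base-2 rounding, visible around the
# 16th decimal place:
assert 0.1 + 0.2 != 0.3

# Rescaling 1/3 x + 2/3 y = 1 to x + 2y = 3 makes every coefficient an
# integer, so checking a candidate solution is exact integer arithmetic.
x, y = 1, 1
assert x + 2 * y == 3
print("1/3 is inexact in binary; the integer-scaled constraint is exact")
```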
Anonymous (2019-12-25 23:12):
Thank you for the response. What you described (i.e., the Benders procedure) is what I have been doing, but then I saw the CPLEX example file ilobendersatsp.cpp, which seems to implement Benders via lazy constraint callbacks, and that confused me a little. In addition, a reviewer for a scientific journal suggested I implement Benders via lazy cuts so that I won't have to rebuild the master B&B tree anew every iteration. The trouble with that approach, I guess, is that it doesn't use the optimal integer solution for the current model, so maybe what you save by maintaining a single B&B tree is lost to using suboptimal integer points when deriving cuts. When I actually implemented both approaches for my current problem, "classical" Benders (without lazy callbacks) seemed to work better. So my original question was motivated by finding a way to combine the advantages of both approaches. I'd be very grateful for any further thoughts on how that might be done.

Thanks again!

Paul A. Rubin (2019-12-25 12:06):
The calls occur at arbitrary nodes -- not necessarily leaf nodes (at least not leaves of the complete tree). The nodes where the calls occur have not yet been separated into children, so I suppose they are temporarily leaves. The key is that CPLEX thinks it has a possible new integer-feasible incumbent solution when it calls the callback.

To apply the callback only at the "optimal" leaf, you have to go back to the way Jack Benders proposed his method. Solve the current master model to optimality. Pass the optimal solution to the subproblem. (This would *not* be via callback.) If the subproblem coughs up one or more constraints, add them to the master model (as ordinary constraints, unless you are generating so many that you need to make them lazy to conserve memory). Then solve the modified master problem, starting from the root node, and repeat.
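[Editor's aside: the solve-cut-repeat loop Paul describes can be sketched in a few lines. The toy below is purely illustrative - the data, the brute-force "master", and the combinatorial no-good cut are all invented for the sketch, and a real implementation would solve the master and subproblem with a MIP solver. Here the master picks a minimum-cost subset of items, the subproblem rejects any selection whose total weight misses a target, and each rejection yields a cut forcing at least one currently unselected item into the next master solution (valid because feasibility is monotone in the selection).]

```python
from itertools import product

costs = [4, 3, 5, 6]     # master objective coefficients (illustrative data)
weights = [2, 1, 3, 4]   # used only by the subproblem feasibility check
TARGET = 5               # subproblem: selected weight must reach this

def solve_master(cuts):
    """Classical Benders master: minimize cost subject to the cuts so far.
    Brute force stands in for a MIP solver on this 4-variable toy."""
    best = None
    for y in product((0, 1), repeat=len(costs)):
        if all(any(y[i] for i in cut) for cut in cuts):
            cost = sum(c * yi for c, yi in zip(costs, y))
            if best is None or cost < best[0]:
                best = (cost, y)
    return best[1]

def subproblem_cut(y):
    """Return None if y passes the feasibility check; otherwise the index
    set of a combinatorial cut sum_{i: y_i = 0} y_i >= 1."""
    if sum(w * yi for w, yi in zip(weights, y)) >= TARGET:
        return None
    return [i for i, yi in enumerate(y) if yi == 0]

# Solve-cut-repeat: re-solve the master from scratch after each new cut.
cuts = []
while True:
    y = solve_master(cuts)
    cut = subproblem_cut(y)
    if cut is None:
        break          # master optimum passed the subproblem check: done
    cuts.append(cut)

print("optimal selection:", y,
      "cost:", sum(c * yi for c, yi in zip(costs, y)))
```

[Note the contrast with the callback approach: here every master solve restarts from the root node with all cuts present as ordinary constraints, exactly as in the original Benders scheme.]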
Anonymous (2019-12-25 08:41):
Hi,

If I understand correctly, a lazy constraint callback is invoked at arbitrary (integer) leaves of the branch-and-bound tree - whichever the search happens to land on. My question is: is there a way to make this call only at the optimal leaf, given the cuts added so far?

Thanks.

Paul A. Rubin (2019-12-12 18:18):
You're quite welcome.

Theo Enderest (2019-12-12 18:08):
Thank you for your post!

Paul A. Rubin (2019-11-30 17:15):
Per an inside source, if CPLEX deems the bound on a semicontinuous variable to be reasonable, it linearizes it using a binary variable (effectively creating a big-M constraint). If the bound is deemed too large, it uses an indicator constraint instead. How the indicator constraint is then handled can be seen in the quote from Ed Klotz embedded in Mark L. Stone's answer to a question about indicators in CPLEX: https://or.stackexchange.com/questions/231/when-to-use-indicator-constraints-versus-big-m-approaches-in-solving-mixed-int/348#348.
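[Editor's aside: the big-M linearization Paul mentions for a semicontinuous variable ($x = 0$ or $\ell \le x \le u$) is the textbook one: introduce a binary $b$ and impose $\ell b \le x \le u b$. The check below is illustrative code, not CPLEX's internals; it verifies on a few sample points that the linearization admits exactly the semicontinuous values.]

```python
def semicontinuous_ok(x, lo, hi):
    """A semicontinuous variable is either 0 or in [lo, hi]."""
    return x == 0 or lo <= x <= hi

def big_m_feasible(x, lo, hi):
    """True iff some binary b satisfies lo*b <= x <= hi*b.
    b = 0 forces x = 0; b = 1 forces lo <= x <= hi."""
    return any(lo * b <= x <= hi * b for b in (0, 1))

# The two definitions agree on every test point.
for x in [0.0, 0.5, 1.0, 2.0, 5.0, 5.5]:
    assert big_m_feasible(x, 1.0, 5.0) == semicontinuous_ok(x, 1.0, 5.0)
print("big-M linearization matches the semicontinuous domain on test points")
```

[The "bound deemed too large" caveat is the usual big-M concern: a huge $u$ weakens the LP relaxation, which is presumably why CPLEX falls back to an indicator constraint in that case.]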
Unknown (2019-11-28 11:41):
Has anybody found out for sure how CPLEX treats semicontinuous variables? Is it using auxiliary binary variables or not?

Paul A. Rubin (2019-11-21 16:52):
This is consistent with the log output, where the iteration count is simplex iterations. If what you mean by "iteration number" is the number of nodes processed so far, that is given by IloCplex.Callback.Context.Info.NodeCount.

Vedat Bayram (2019-11-20 15:59):
Hi again, Professor Rubin. I am trying to get some global information about the problem, such as the iteration number, the best incumbent objective value (the upper bound for a minimization problem), the best bound (the lower bound) and hence the relative gap, and the total number of iterations. I am trying to use the context to retrieve that information, but it does not work. For example, to get the iteration number I used context.getIntInfo(IloCplex.Callback.Context.Info.IterationCount), but I guess this is the simplex iteration number (I don't know why that would be needed). I would be happy if you could help me with that. Thank you.

Best regards,

Vedat

Paul A. Rubin (2019-11-19 14:33):
Yes, the value entries in the .ann file are the subproblem numbers, and as far as I know that is the only way to see how CPLEX did the decomposition. As for generating your own annotation file, I suppose you would want to write a script or program to do so.
You could use a library designed for writing XML files, but the structure is simple enough that you might be just as well off writing the tags yourself. Your XML-generating program would need to use the same variable names that your MIP program assigns to the variables.

Jyotirmoy Dalal (2019-11-18 01:03):
Thanks a lot for the help, Prof. Rubin. Your explanation helped to fix the issue. A related question: is there a way to check whether the desired decomposition has really been followed? For example, if I use the FULL decomposition strategy, can I see how CPLEX chose to decompose a MIP? I saw the contents of the ".ann" file, but I am not sure whether the "values" entries represent the subproblem numbers.

Also, since we now have the option of providing an annotation file to CPLEX as input, how can one generate the XML structure for a large-scale instance? Any pointers?

Regards,
Jyotirmoy
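[Editor's aside: Paul's suggestion of scripting the annotation file might look like the sketch below. Caveat: the element and attribute names here are only indicative, reconstructed from memory of the Benders annotation ("cpxBendersPartition"); the authoritative way to get the schema is to have CPLEX write an annotation file itself and mirror its tags. The sketch only shows the mechanical part - mapping variable names to subproblem numbers and emitting XML with Python's standard library. The variable names and partition are hypothetical.]

```python
import xml.etree.ElementTree as ET

# Hypothetical variable-to-subproblem assignment. The names must match
# the names your MIP model assigns; 0 conventionally marks the master.
partition = {"x1": 1, "x2": 1, "y1": 2, "y2": 2}

# NOTE: tag and attribute names below are illustrative. Before relying on
# this, export an annotation file from CPLEX and copy its exact structure.
root = ET.Element("CPLEXAnnotations")
anno = ET.SubElement(root, "anno",
                     name="cpxBendersPartition", type="long", default="0")
for var, sub in sorted(partition.items()):
    ET.SubElement(anno, "object", type="1", name=var, value=str(sub))

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

[For a large-scale instance, the loop over `partition` is the only part that grows, so generating the file stays cheap even with millions of variables.]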