tag:blogger.com,1999:blog-8781383461061929571.comments2020-03-15T15:00:44.626-04:00OR in an OB WorldPaul A. Rubinhttp://www.blogger.com/profile/05801891157261357482noreply@blogger.comBlogger1720125tag:blogger.com,1999:blog-8781383461061929571.post-29825300794491055302020-03-15T15:00:44.626-04:002020-03-15T15:00:44.626-04:00As I said before, the CPLEX log will tell you how many threads it is using. That is true regardless of whether or not you are using generic callbacks. As to your not getting any crashes with generic callbacks: as long as your callback does not alter any storage outside the callback itself (i.e., change values of variables that live outside the callback), thread safety should not be an issue. It's when you alter global storage that you have to be more careful with generic callbacks than with their legacy counterparts.<br /><br />As far as relative performance goes, legacy callbacks disable dynamic search, while generic callbacks allow CPLEX to use dynamic search. If you disable dynamic search (via a parameter), that does not matter. Dynamic search tends to improve solution time (not always guaranteed, but definitely a tendency), so the fact that generic callbacks allow CPLEX to continue to use it tends to make them somewhat faster (in some, maybe most, cases, but not in all cases).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-10341576834788089422020-03-15T14:12:21.564-04:002020-03-15T14:12:21.564-04:00Thank you so much for your comprehensive response. There is still one thing unclear to me. 
As mentioned in Xavier Nodet's slides regarding legacy callbacks, they default to single-threaded mode, but even if I manually set them to run with more than one thread, I still get correct results without any crash, although the runtime is not much different. 1) Is CPLEX running in parallel mode, or does it ignore my parameter setting because its default is single-threaded? <br /><br />So my question is: 2) since I am able to run a legacy callback in parallel mode, is there still any point in using a generic one? Are their performances the same in this mode (for my problem)? <br /><br />I would be thankful if you could clarify my understanding.<br /><br />RegardsAmiZinoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-16269920778553907782020-03-13T11:52:40.800-04:002020-03-13T11:52:40.800-04:00If you are running the same code with CPLEX 12.6 and 12.9, then you must be using legacy callbacks. Generic callbacks do not exist in CPLEX 12.6. So you may have bumped into a bug in 12.6 that was fixed by 12.9, or your code may have a thread-related bug that triggers with 12.6 but for some reason does not trigger (at least in the problem you are solving) with 12.9.<br /><br />To answer your second question (if I understand it correctly), the CPLEX log will tell you how many threads it is using. For your third question, if 12.9 is using multiple threads without that line (check the log) and takes about the same amount of time, then multi-threading is not very useful for that example. Using multiple threads incurs some overhead, and apparently you are not getting enough benefit from parallelism to offset that cost. This can be problem-specific, so the next time you use that code (with a different problem) things might change. 
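[Editorial note: the overhead point generalizes beyond CPLEX. The plain-Python sketch below (nothing here uses CPLEX; the function names are invented for the illustration) shows that splitting a task across threads gives the same answer but is not automatically faster, because the coordination work is pure overhead unless the chunks are expensive enough.]

```python
# Toy illustration (plain Python, not CPLEX): splitting a job across
# threads adds coordination overhead, so the parallel version is not
# automatically faster -- the same effect described above for MIP solves.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """Sum the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

def serial_total(n):
    return partial_sum((0, n))

def parallel_total(n, workers=4):
    # Chop [0, n) into one chunk per worker and sum the pieces.
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    # Both versions agree; whether the threaded one is faster depends on
    # how much work each chunk does relative to the thread overhead.
    assert serial_total(n) == parallel_total(n)
```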
As far as annotated Benders goes, I think that the number of threads used is based on your setting of the thread count parameter (just like any other CPLEX run).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-25636314643992424022020-03-12T18:31:50.403-04:002020-03-12T18:31:50.403-04:00Dear Prof Rubin<br /><br />Thanks for your valuable blog. I have a couple of questions about thread considerations when using callbacks. I coded the single-search-tree version of Benders decomposition in CPLEX 12.6, using a lazy constraint callback to add Benders cuts. With CPLEX 12.6, the code crashes without the following line, which forces the use of one thread to solve the master problem:<br /> cplex_master.setParam(IloCplex::Threads, 1).<br /><br />But when I use the same code in CPLEX 12.9 and remove the mentioned line, the code works properly to optimality. 1) What does this mean? Does CPLEX 12.9 handle the thread safety issue by itself to some extent, or does my type of problem not need the generic callback thread safety considerations mentioned on this page? 2) Essentially, how can I find out whether the code runs single- or multi-threaded? Note that I code in C++ and I have multiple linear Benders subproblems. <br />Also, when I use CPLEX 12.9, the runtime with and without the mentioned line is almost the same. 3) Is this meaningful?<br /><br />4) One more question: is the annotated Benders decomposition (built-in Benders) run in multi-thread mode or in single-thread mode?<br /><br />Thank you so much in advance.<br />AmiZi<br /> <br />AmiZinoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-86052668898054699132020-03-06T16:48:19.153-05:002020-03-06T16:48:19.153-05:00By "IloBenders" I assume you mean the built-in Benders decomposition. I'm not aware of any way to access the surrogate variables, and in any case I don't think you would be able to use the information. 
According to the user manual (Discrete optimization > Advanced programming techniques > Generic callbacks > What is a context ...), the built-in Benders algorithm is "incompatible" with the candidate, relaxation and branching callback contexts. The "candidate" context would be where you would want to add optimality cuts.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-70708374252645327222020-03-05T18:56:58.913-05:002020-03-05T18:56:58.913-05:00Dear Prof. Rubin,<br /><br />Thanks for your valuable blog.<br /><br />I was wondering whether there is any way to add arbitrary optimality cuts to the Benders master problem when we are utilizing the built-in IloBenders. <br />In other words, do we have access to the master's surrogate variable(s) when we use IloBenders?<br /><br />Thanks<br /> AmiZinoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-57373974087110487832020-02-18T16:35:28.672-05:002020-02-18T16:35:28.672-05:00You are - of course - correct. I had misplaced the Eq 5 constraints. Now there is no underconstrained w. Great!<br /><br />And it's quite a bit faster now:<br />- CP: 0.07s<br />- SAT: 0.38s<br />- CBC: 10.4s<br /><br />Compare with the original CP/SAT model, where the CP solver took 0.4s for the same problem.<br /><br />Tomorrow I will test the MIP model on harder problems and also create a MiniZinc model.hakankhttps://www.blogger.com/profile/04290139008157372195noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-59250321261088593542020-02-18T14:50:38.099-05:002020-02-18T14:50:38.099-05:00I don't understand how you would be getting multiple solutions for $w$ for the same values of $z$. Constraints (2) through (4) create an equivalence between $w_{ij}$ and $z_i \wedge z_j$.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-72099529680293006912020-02-18T14:46:34.813-05:002020-02-18T14:46:34.813-05:00Yes, the two constraints (z[1] = 1 and z[m+1] = 1) help both the CP model (as well as the SAT model). All constraints that prune the search tree - here, fixing two values in z - are good for CP. 
:-) The MIP solver (CBC) is also significantly faster with these constraints than without, though much slower than CP and SAT.<br /><br />For the CP model, the time with these constraints is about 0.10s and without, about 0.16s.<br /><br />One thing I noted - in the Picat model - was that the w matrix is under-constrained, and it generated a lot of different solutions of w for the same value of z. I tried to fix this in the (MIP) model but didn't succeed. Perhaps CPLEX doesn't care about this in the model?<br /><br />(The Picat-specific trick that fixed this was to ignore w when calling the solver.)<br />hakankhttps://www.blogger.com/profile/04290139008157372195noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-17884741932559095952020-02-18T13:48:29.372-05:002020-02-18T13:48:29.372-05:00In the MIP model, constraint (6) for the case $d=m$ implies $w_{1,m+1}=1$, which in turn implies via (2) and (3) that $z_1 = z_{m+1} = 1$. So I'm not sure why constraint (7) helps (maybe it doesn't, and it's just random luck), and $z_{m+1}=1$ similarly should not help much. Does it help with the CP model?Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-44164899734207589112020-02-18T12:48:10.531-05:002020-02-18T12:48:10.531-05:00
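[Editorial note: the equivalence between $w_{ij}$ and $z_i \wedge z_j$ mentioned above is easy to sanity-check by brute force. The snippet below assumes the constraints are the usual AND-linearization inequalities $w \le z_i$, $w \le z_j$, $w \ge z_i + z_j - 1$ (the post's exact numbering is not reproduced here), and verifies that once $z$ is fixed, exactly one value of $w$ remains feasible.]

```python
# Brute-force check of the standard AND-linearization: for binary
# z_i, z_j, the constraints w <= z_i, w <= z_j, w >= z_i + z_j - 1
# leave exactly one feasible binary w, namely w = z_i * z_j --
# so w cannot be under-constrained once z is fixed.
from itertools import product

for zi, zj in product((0, 1), repeat=2):
    feasible_w = [w for w in (0, 1)
                  if w <= zi and w <= zj and w >= zi + zj - 1]
    assert feasible_w == [zi * zj]  # unique w for every z combination
```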
Thanks for the blog post and MIP model, Paul! <br /><br />I converted it to a Picat model: hakank.org/picat/linear_combinations_mip.pi The CBC solver took a long time to get all solutions, the SAT solver took 1s, but the CP solver was quite fast: it finished in about 0.1s. <br /><br />The original CP model takes about 40ms to solve this problem, so it's faster. <br /><br />Also, I added the constraint Z[M+1] #= 1, since we know that both 1 and max(Diffs) + 1 must be in the (normalized) solution.<br /><br />hakankhttps://www.blogger.com/profile/04290139008157372195noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-46445624335919040062020-01-30T15:40:03.072-05:002020-01-30T15:40:03.072-05:00If the M values are "reasonably small", you do not need to use the combinatorial form of Benders from Codato & Fischetti; you can use the original form of Benders. That should eliminate your concern about putting the original objective function in the subproblem in constraint form. As far as decomposing so as to reduce the number of binary variables in any one problem, I don't see any easy fix. You could try asking on OR Stack Exchange (a link to which is in the right margin near the top of the post).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-41553198415174194572020-01-30T15:10:18.657-05:002020-01-30T15:10:18.657-05:00Thank you, Dr. Rubin, for the response. As for the first criterion you mentioned, the model is too large, with 7 million binary variables for an instance. The single model does exceed the solver's memory capacity. However, I cannot find an intuitive way to decompose the model. 
<br />Here is the model, using the notation from the paper by Codato and Fischetti on combinatorial BD:<br />min d^t*y<br />s.t.<br />Ax <= a<br />Bz <= b<br />Gz + Hx = g<br />Cx + Dy >= c<br />Ez + Fy >= e<br />x,z = bin, y >= 0<br /><br />As suggested by Codato et al., I could separate the part with continuous variables and disjunctive constraints into a subproblem. That would still leave three issues. First, the master problem would still contain a large number of binary variables, and it may be difficult to get an integer solution using a solver; in particular, the constraints in Bz <= b constitute a VRP with multiple precedence and coordination constraints. Second, the objective function will need to be introduced into the slave problem as a constraint of the form d^t * y <= UB - e. This raises the issue of finding a sufficiently small value of the parameter e that does not cut off any feasible master problem solutions; I have observed multiple integer feasible solutions with objective values very close to UB. The final issue relates to the CB cuts added to the master problem in each iteration. Cutting one solution at a time can introduce a large number of constraints into the master. The master involves, for instance, choosing 100 "x" variables as 1 out of 6 million; I am afraid the number of integer feasible solutions in the master could be too large to cut off with CB cuts. <br /><br />Regarding the second criterion, the big-M is not quite infinity and can be bounded by a reasonably small value. However, solving the problem as a single model still does not work for large instances. <br />Anonymoushttps://www.blogger.com/profile/03995432263472338626noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-31102584607007347882020-01-30T13:53:27.139-05:002020-01-30T13:53:27.139-05:00In general, I do not know any way to look at a problem and decide if Benders is the "right" way to model it. 
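[Editorial note: as an aside for readers following the Benders discussion, the classical solve-cut-repeat loop can be sketched on a deliberately tiny toy problem. Everything below is invented for illustration: the subproblem is a max of affine pieces standing in for LP dual solutions, and the master is solved by brute force; a real implementation would call an LP/MIP solver for both.]

```python
# Toy Benders loop: min_x x + Q(x) over x in {0,...,5}, where
# Q(x) = max(0, 3 - x, x - 3) is the subproblem value, written as a
# max of affine pieces (each piece plays the role of a dual solution).
# Master: brute-force over x with theta = max of the cuts found so far.
# Subproblem: evaluate Q and return its active piece as a new cut.

PIECES = [(0.0, 0.0), (3.0, -1.0), (-3.0, 1.0)]  # theta >= a + b*x

def subproblem(x):
    """Return (Q(x), active affine piece (a, b))."""
    return max((a + b * x, (a, b)) for a, b in PIECES)

def solve_master(cuts):
    """Minimize x + theta subject to theta >= a + b*x for each cut."""
    obj, x = min((x + max(a + b * x for a, b in cuts), x)
                 for x in range(6))
    return x, obj

def benders(tol=1e-9):
    cuts = [(0.0, 0.0)]  # start with just theta >= 0
    while True:
        x, master_obj = solve_master(cuts)
        q, piece = subproblem(x)
        theta = master_obj - x
        if q <= theta + tol:   # subproblem agrees with master: optimal
            return x, master_obj
        cuts.append(piece)     # add the violated optimality cut, repeat

x_opt, obj_opt = benders()     # converges in two master solves here
```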
Generally speaking, if there were only one subproblem and a single model would not exceed the solver's memory capacity, I would probably try a single model first. If the problem decomposes into several / many subproblems, maybe I would try Benders first.<br /><br />You mentioned using big M constraints, which to me is a special case. If I know valid choices for the M parameters that are "reasonably small", I would be inclined to try a single model. On the other hand, if I find myself using "not quite infinity" values for M in order to be sure they are valid, I would definitely try Benders, since it gets rid of the M values.<br /><br />Bottom line, though, is that these are just personal tendencies. I don't think there is a way to decide with any high degree of certainty just by "looking" at the model.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-28748776025553619262020-01-29T19:35:15.630-05:002020-01-29T19:35:15.630-05:00
Dear Dr. Rubin, <br /><br />I am thinking about using BD for a MIP model. It has two types of binary variables, connected to each other through a constraint. The model also has continuous variables (part of disjunctive constraints, modeled with big M), which in turn depend on both sets of binary variables. The objective function involves minimizing a function of the continuous variables. Is there a way to find out beforehand whether BD will be a good approach to try? The number of integer variables in the model runs into tens of millions. An alternative question could be: are there problems for which BD does not work as well? How can one answer that question without lengthy experiments, just by "looking" at a MIP model?Zulqarnainhttps://www.blogger.com/profile/17690333321574031547noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-48269199132690381662020-01-07T17:39:40.369-05:002020-01-07T17:39:40.369-05:00Ed: Thanks for the links. I agree with your point about expressing constraints with integers rather than fractions (or their decimal equivalents) where practical.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-58475726029841940452020-01-07T16:33:14.736-05:002020-01-07T16:33:14.736-05:00I landed on this blog post today while trying to answer the same question. Paul's September 23 post is correct that different numerical bases (not the plural of basis for the simplex method) lose precision on different numbers (e.g. 1/3 cannot be stored exactly in the base 2 arithmetic used on most of today's computers). 
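[Editorial note: the precision point above can be demonstrated in any binary floating-point environment. The short Python snippet below is an illustration added for this writeup, not part of the original comment; it uses 0.1 and 0.2 (the same base-2 issue as 1/3) and also shows the clear-the-denominators trick of scaling a fractional constraint to integer data.]

```python
# Plain-Python illustration: binary floating point cannot represent most
# decimal fractions exactly, so fractional constraint coefficients are
# already rounded before a solver ever sees them.
from fractions import Fraction

assert 0.1 + 0.2 != 0.3          # classic base-2 rounding artifact
assert Fraction(1, 3) != 1 / 3   # the float 1/3 is only an approximation

# Clearing denominators yields exactly representable integer data:
# 0.1*x + 0.2*y = 0.3 (inexact coefficients) becomes, multiplied by 10,
#       x + 2*y = 3   (exact).  E.g., x = 1, y = 1 satisfies it exactly:
x, y = 1, 1
assert x + 2 * y == 3
```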
And yes, it is possible that just losing the last bit of the mantissa can be enough to break a tie in the algorithm's decisions, leading to alternate optimal bases (the simplex kind), which in turn leads to alternate branching selections when solving MIPs, and then all sorts of performance variability. This is one reason I recommend that, whenever you can replace fractions involving simple ratios of integers with integers during the model formulation step, you do so. For example, instead of representing (1/3)x + (2/3)y = 1 with decimal representations of 1/3 that lose precision around the 16th decimal place in a base 2 representation, multiply the constraint by 3 to get x + 2y = 3, and now all your data is in terms of sums of powers of 2 and represented precisely. <br /><br />Given that I'm writing on a thread that is almost 10 years old, it may seem like nothing has changed, but that is not really the case. Progress has been made in understanding performance variability in mixed integer programming (e.g. see http://www.or.deis.unibo.it/andrea/Lodi-MIC2011-nopause.pdf). CPLEX now has a runseeds command that allows you to assess the level of variability in your MIP model. Also, since 2010, I have written a detailed paper on the numerical aspects of LP and MIP solvers that can be found on the INFORMS web site here: https://pubsonline.informs.org/doi/10.1287/educ.2014.0130. And if you don't have the patience for a lengthy paper on numerical epsilons and other nitty-gritty details, you can try accessing a shorter PowerPoint version here: http://yetanothermathprogrammingconsultant.blogspot.com/2015/03/mip-links.htmlEd Klotznoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-2145162378506103302019-12-27T09:16:31.480-05:002019-12-27T09:16:31.480-05:00You're welcome.Paul A. 
Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-5274549313483457142019-12-27T02:35:48.028-05:002019-12-27T02:35:48.028-05:00I was not aware of the blog post you linked to, which addresses my issue pretty much directly.<br /><br />I am very grateful for your time and for your responses.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-71427686208114297862019-12-26T11:08:20.473-05:002019-12-26T11:08:20.473-05:00I reviewed both the original approach and the callback approach in an even older post (https://orinanobworld.blogspot.com/2011/10/benders-decomposition-then-and-now.html). You're correct that there is no guarantee the callback method is faster than solve-cut-repeat, although my gut feeling is that it usually is. FWIW, I always use the callback method.<br /><br />As far as combining them goes, there are a couple of reasons why holding off until the optimal leaf node has been found won't work. The main one is that by the time you conclude node A is "optimal" (because the best bound converged to the value at that node), a different node (B) containing the true optimal solution will have been pruned, because the bound at B is worse than the bound at A. This is not an issue in the original Benders method, because you rebuild the tree after adding a cut, but it would be a problem (catastrophe?) if you were trying to reuse the current tree. It would require the solver to retain pruned nodes, which would eat a lot more memory than is currently consumed. In any case, CPLEX will not do this.<br />Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-6441670628640267072019-12-25T23:12:25.535-05:002019-12-25T23:12:25.535-05:00
Thank you for the response. What you described (i.e., the Benders procedure) is what I have been doing, but then I saw the CPLEX example file ilobendersatsp.cpp, which seems to implement 'Benders' via lazy calls, and that confused me a little. In addition, a reviewer for a scientific journal suggested I implement Benders via lazy cuts so that I won't have to rebuild the master B&B tree anew every iteration. The trouble with that approach, I guess, is that it doesn't use the optimal integer solution given the current model, so maybe what you save by maintaining a single B&B tree is lost by using suboptimal integer points when deriving cuts. When I actually implemented both approaches for my current problem, 'classical' Benders (without lazy calls) seemed to work better. So my original question was motivated by finding a way to combine the advantages of both approaches. I'd be very grateful for any further thoughts on how to do that.<br /><br />Thanks again!Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-27455507319848878212019-12-25T12:06:28.075-05:002019-12-25T12:06:28.075-05:00The calls occur at arbitrary nodes -- not necessarily leaf nodes (at least not leaves of the complete tree). The nodes where the calls occur have not yet been separated into children, so I suppose they are temporarily leaves. The key is that CPLEX thinks it has a possible new integer-feasible incumbent solution when it calls the callback.<br /><br />To apply the callback only at the "optimal" leaf, you have to go back to the way Jack Benders proposed his method. Solve the current model to optimality. Pass the optimal solution to the subproblem. (This would *not* be via callback.) 
If the subproblem coughs up one or more constraints, add them to the master model (as ordinary constraints, unless you are generating so many that you need to make them lazy to conserve memory). Then solve the modified master problem, starting from the root node, and repeat.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-34812854199605014122019-12-25T08:41:08.971-05:002019-12-25T08:41:08.971-05:00Hi,<br /><br />If I understand correctly, a lazy constraint call is made at arbitrary (integer) leaves of the branch-and-bound tree - whichever it happens to land on currently. My question is: is there a way to make this call only at the optimal leaf, given the cuts added so far?<br /><br />Thanks.Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-18218271470317562312019-12-12T18:18:50.042-05:002019-12-12T18:18:50.042-05:00You're quite welcome.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-19086182114935838292019-12-12T18:08:20.731-05:002019-12-12T18:08:20.731-05:00Thank you for your post! Theo Enderesthttps://www.blogger.com/profile/13257817240464995438noreply@blogger.com