tag:blogger.com,1999:blog-8781383461061929571.comments2020-08-09T09:17:05.467-04:00OR in an OB WorldPaul A. Rubinhttp://www.blogger.com/profile/05801891157261357482noreply@blogger.comBlogger1727125tag:blogger.com,1999:blog-8781383461061929571.post-84440686109827373842020-06-03T11:43:40.848-04:002020-06-03T11:43:40.848-04:00Thanks Rob! The most interesting takeaway, to me, ...Thanks Rob! The most interesting takeaway, to me, was that some stores are tracking shopper movements (using RFID, at least back in '09). I'm also a bit curious about the data sample. Apparently one shopper spent just under four hours (!) in the supermarket. Must be a more interesting market than the one where I shop. I also noted that the authors "solved" their TSP problems either by brute force enumeration or by simulated annealing. The former is exact, the latter not so much, although for paper purposes it's likely close enough.<br />Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-91965213565070802522020-06-02T15:15:06.009-04:002020-06-02T15:15:06.009-04:00This paper might interest you: https://pubsonline....This paper might interest you: https://pubsonline.informs.org/doi/10.1287/mksc.1080.0402Rob Pratthttps://www.blogger.com/profile/16525877394541155854noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-53152465602976920512020-05-22T11:41:37.725-04:002020-05-22T11:41:37.725-04:00Hardmath123, thanks for stopping by and leaving a ...Hardmath123, thanks for stopping by and leaving a comment ... and thanks for starting this ball rolling with your seminal post. Your swapping heuristic with random restarts probably does pretty well, relative to the optimal solution, if you run it for a reasonable amount of time. If I were in a hurry to get a good layout, that's probably the heuristic I would try first.Paul A. 
Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-49059275036360516742020-05-21T19:52:24.615-04:002020-05-21T19:52:24.615-04:00Hi Paul! I&#8217;m the original author of that typew...Hi Paul! I&#8217;m the original author of that typewriter post. I was very excited to discover your series of posts on this problem! I&#8217;m amazed that there&#8217;s all this technology I never knew existed, and that this problem has a name. It&#8217;s very gratifying to feel like you&#8217;re asking questions that others are willing to think deeply about. Thanks for being curious about this! :-)Hardmath123https://www.blogger.com/profile/10017408348317145693noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-42087163249322848642020-03-30T10:51:47.251-04:002020-03-30T10:51:47.251-04:00Thanks for the tip, Luis.Thanks for the tip, Luis.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-41888759420621973462020-03-30T08:11:07.276-04:002020-03-30T08:11:07.276-04:00I solved it by executing this command in a terminal:
xdg-...I solved it by executing this command in a terminal:<br /> xdg-mime default nemo.desktop inode/directory application/x-gnome-saved-searchLuis Alfredohttps://www.blogger.com/profile/12945838540198948324noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-29825300794491055302020-03-15T15:00:44.626-04:002020-03-15T15:00:44.626-04:00As I said before, CPLEX will tell you how many...As I said before, CPLEX will tell you how many threads it is using. That is true regardless of whether or not you are using generic callbacks. As to your not getting any crashes with generic callbacks, as long as your callback does not alter any storage outside the callback itself (i.e., change values of variables that live outside the callback), thread safety should not be an issue. It&#8217;s when you alter global storage that you have to be more careful with generic callbacks than with their legacy counterparts.<br /><br />As far as relative performance goes, legacy callbacks disable dynamic search while generic callbacks allow CPLEX to use dynamic search. If you disable dynamic search (via a parameter), that does not matter. Dynamic search tends to improve solution time (not always guaranteed, but definitely a tendency), so the fact that generic callbacks allow CPLEX to continue to use it tends to make them somewhat faster (in some, maybe most, cases, but not in all cases).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-10341576834788089422020-03-15T14:12:21.564-04:002020-03-15T14:12:21.564-04:00Thank you so much for your comprehensive response....Thank you so much for your comprehensive response. There is still one thing unclear to me. 
As mentioned in Xavier Nodet&#8217;s slides regarding legacy callbacks, they default to single-threaded mode, but even if I manually set them to run with more than one thread, I still get correct results without any crash, although the runtime is not much different. 1) Is CPLEX running in parallel mode, or does it ignore my parameter setting because its default is single-threaded? <br /><br />So my question is: 2) since I am able to run a legacy callback in parallel mode, is there still any point in using a generic one? Are their performances the same in this mode (for my problem)? <br /><br />I would be thankful if you could clarify my understanding.<br /><br />RegardsAmiZinoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-16269920778553907782020-03-13T11:52:40.800-04:002020-03-13T11:52:40.800-04:00If you are running the same code with CPLEX 12.6 a...If you are running the same code with CPLEX 12.6 and 12.9, then you must be using legacy callbacks. Generic callbacks do not exist in CPLEX 12.6. So you may have bumped into a bug in 12.6 that was fixed by 12.9, or your code may have a thread-related bug that triggers with 12.6 but for some reason does not trigger (at least in the problem you are solving) with 12.9.<br /><br />To answer your second question (if I understand it correctly), the CPLEX log will tell you how many threads it is using. For your third question, if 12.9 is using multiple threads without that line (check the log) and takes about the same amount of time, then multi-threading is not very useful for that example. Using multiple threads incurs some overhead, and apparently you are not getting enough benefit from parallelism to offset that cost. This can be problem-specific, so the next time you use that code (with a different problem) things might change. 
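Paul's point above about callbacks altering storage outside themselves can be illustrated without CPLEX at all. A minimal Python sketch (the class and method names are hypothetical, not part of any CPLEX API): the shared state a callback mutates must be guarded so that concurrent callback invocations cannot interleave and lose an update.

```python
import threading

class IncumbentStore:
    """Shared storage that callbacks running on several threads may update."""

    def __init__(self):
        self._lock = threading.Lock()
        self.best = float("inf")

    def record_candidate(self, value):
        # The read-compare-write below must not interleave across threads,
        # so it is wrapped in a lock; without it an improvement can be lost.
        with self._lock:
            if value < self.best:
                self.best = value

store = IncumbentStore()
threads = [threading.Thread(target=store.record_candidate, args=(v,))
           for v in (7.0, 3.5, 9.2, 2.1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.best)  # 2.1, the minimum, regardless of thread scheduling
```

A callback that only reads its arguments and adds cuts through the solver's own API needs no such lock; the lock is only for state the callback owns and shares.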
As far as annotated Benders goes, I think that the number of threads used is based on your setting of the thread count parameter (just like any other CPLEX run).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-25636314643992424022020-03-12T18:31:50.403-04:002020-03-12T18:31:50.403-04:00Dear Prof Rubin
Thanks for your valuable blog. I ...Dear Prof Rubin<br /><br />Thanks for your valuable blog. I have a couple of questions about thread considerations when using callbacks. I coded the single-search-tree version of Benders decomposition in CPLEX 12.6 and used a lazy constraint callback to add Benders cuts. When I use CPLEX 12.6, the code crashes without the following line of code, which forces the use of one thread to solve the master problem:<br /> cplex_master.setParam(IloCplex::Threads, 1).<br /><br />But when I use the same code in CPLEX 12.9 and remove the mentioned line of code, the code works properly till optimality. 1) What is the meaning of this? Does this mean that CPLEX 12.9 handles the thread safety issue by itself to some extent, or that my type of problem does not need the generic callback thread safety considerations mentioned on this page? 2) Essentially, how can I find out whether the code runs single- or multi-threaded? Note that I code in C++ and I have multiple linear Benders subproblems. <br />Also, when I am using CPLEX 12.9, the runtime with and without the mentioned line of code is almost the same. 3) Is this meaningful?<br /><br />4) I have another question as well: is the annotated Benders decomposition (built-in Benders) run in multi-threaded mode or single-threaded?<br /><br />Thank you so much in advance.<br />AmiZi<br /> <br />AmiZinoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-86052668898054699132020-03-06T16:48:19.153-05:002020-03-06T16:48:19.153-05:00By &#8220;IloBenders&#8221; I assume you mean the bu...By &#8220;IloBenders&#8221; I assume you mean the built-in Benders decomposition. I&#8217;m not aware of any way to access the surrogate variables, and in any case I don&#8217;t think you would be able to use the information. 
According to the user manual (Discrete optimization &gt; Advanced programming techniques &gt; Generic callbacks &gt; What is a context ...), the built-in Benders algorithm is &#8220;incompatible&#8221; with the candidate, relaxation and branching callback contexts. The &#8220;candidate&#8221; context would be where you would want to add optimality cuts.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-70708374252645327222020-03-05T18:56:58.913-05:002020-03-05T18:56:58.913-05:00Dear Prof. Rubin,
Thanks for your valuable blog.
...Dear Prof. Rubin,<br /><br />Thanks for your valuable blog.<br /><br />I was wondering whether there is any way to add some arbitrary optimality cuts to the Benders master problem when we are utilizing the built-in IloBenders? <br />In other words, do we have access to the master&#8217;s surrogate variable/s when we use IloBenders?<br /><br />Thanks<br /> AmiZinoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-57373974087110487832020-02-18T16:35:28.672-05:002020-02-18T16:35:28.672-05:00You are - of course - correct. I had misplaced the...You are - of course - correct. I had misplaced the Eq 5 constraints. Now there is no underconstrained w. Great!<br /><br />And it&#8217;s much faster now:<br />- CP: 0.07s<br />- SAT: 0.38s<br />- CBC: 10.4s<br /><br />Compare with the original CP/SAT model, where the CP solver took 0.4s for the same problem.<br /><br />Tomorrow I will test the MIP model on harder problems and also create a MiniZinc model.hakankhttps://www.blogger.com/profile/04290139008157372195noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-59250321261088593542020-02-18T14:50:38.099-05:002020-02-18T14:50:38.099-05:00I don&#8217;t understand how you would be getting mu...I don&#8217;t understand how you would be getting multiple solutions for $w$ for the same values of $z$. Constraints (2) through (4) create an equivalence between $w_{ij}$ and $z_i \wedge z_j$.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-72099529680293006912020-02-18T14:46:34.813-05:002020-02-18T14:46:34.813-05:00Yes, the two constraints (z[1] = 1 and z[m+1] = 1)...Yes, the two constraints (z[1] = 1 and z[m+1] = 1) help both the CP model (as well as the SAT model). All constraints that prune the search tree - here fixing two values in z - are good for CP. 
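The equivalence between $w_{ij}$ and $z_i \wedge z_j$ discussed above can be checked by brute force. A small sketch, assuming constraints (2)-(4) are the standard product linearization $w \le z_i$, $w \le z_j$, $w \ge z_i + z_j - 1$ (the blog post itself is the authority on the exact numbering):

```python
from itertools import product

def feasible_w(zi, zj):
    """Binary values of w satisfying w <= zi, w <= zj, w >= zi + zj - 1."""
    return [w for w in (0, 1)
            if w <= zi and w <= zj and w >= zi + zj - 1]

# For every 0/1 assignment of (zi, zj), exactly one w is feasible,
# and it equals the logical AND of zi and zj.
for zi, zj in product((0, 1), repeat=2):
    assert feasible_w(zi, zj) == [zi & zj]
print("w is forced to zi AND zj in all four cases")
```

If any one of the three inequalities is dropped, some (zi, zj) pair admits both w = 0 and w = 1, which is exactly the "underconstrained w" symptom described in this exchange.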
:-) The MIP solver (CBC) is also significantly faster with these constraints than without, though much slower than CP and SAT.<br /><br />For the CP model the time with these constraints is about 0.10s and without about 0.16s.<br /><br />One thing I noted - in the Picat model - was that the w matrix is underconstrained, and it generated a lot of different solutions of w for the same value of z. I tried to fix this in the (MIP) model but didn&#8217;t succeed. Perhaps CPLEX doesn&#8217;t care about this in the model?<br /><br />(The Picat-specific trick that fixed this was to ignore w when calling the solver.)<br />hakankhttps://www.blogger.com/profile/04290139008157372195noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-17884741932559095952020-02-18T13:48:29.372-05:002020-02-18T13:48:29.372-05:00In the MIP model, constraint (6) for the case $d=m...In the MIP model, constraint (6) for the case $d=m$ implies $w_{1,m+1}=1$, which in turn implies via (2) and (3) that $z_1 = z_{m+1} = 1$. So I&#8217;m not sure why constraint (7) helps (maybe it doesn&#8217;t and it&#8217;s just random luck), and $z_{m+1}=1$ similarly should not help much. Does it help with the CP model?Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-44164899734207589112020-02-18T12:48:10.531-05:002020-02-18T12:48:10.531-05:00Thanks for the blog post and MIP model, Paul!
I ...Thanks for the blog post and MIP model, Paul! <br /><br />I converted it to a Picat model: hakank.org/picat/linear_combinations_mip.pi The CBC solver took a long time to get all solutions, the SAT solver took 1s, but the CP solver was quite fast: it finished in about 0.1s. <br /><br />The original CP model takes about 40ms to solve this problem, so it&#8217;s faster. <br /><br />Also, I added the constraint that Z[M+1] #= 1 since we know that both 1 and max(Diffs) + 1 must be in the (normalized) solution.<br /><br />hakankhttps://www.blogger.com/profile/04290139008157372195noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-46445624335919040062020-01-30T15:40:03.072-05:002020-01-30T15:40:03.072-05:00If the M values are &#8220;reasonably small&#8221;, ...If the M values are &#8220;reasonably small&#8221;, you do not need to use the combinatorial form of Benders from Codato &amp; Fischetti; you can use the original form of Benders. That should eliminate your concern about putting the original objective function in the subproblem in constraint form. As far as decomposing so as to reduce the number of binary variables in any one problem, I don&#8217;t see any easy fix. You could try asking on OR Stack Exchange (a link to which is in the right margin near the top of the post).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-41553198415174194572020-01-30T15:10:18.657-05:002020-01-30T15:10:18.657-05:00Thank you Dr. Rubin for the response. As for the f...Thank you Dr. Rubin for the response. As for the first criterion you mentioned, the model is too large, with 7 million binary variables for an instance. The single model does exceed the solver&#8217;s memory capacity. However, I cannot find an intuitive way to decompose the model. 
<br />Here is the model using the notation from the paper by Codato and Fischetti on combinatorial BD:<br />min d^t*y<br />s.t.<br />Ax &lt;= a<br />Bz &lt;= b<br />Gz + Hx = g<br />Cx + Dy &gt;= c<br />Ez + Fy &gt;= e<br />x,z = bin, y &gt;= 0<br /><br />As suggested by Codato et al., I could separate the part with continuous variables and disjunctive constraints into a subproblem. It would still leave three issues: first, the master problem would still contain a large number of binary variables, and it may be difficult to get an integer solution using a solver. In particular, the constraints related to Bz &lt;= b constitute a VRP with multiple precedence and coordination constraints. Second, the objective function will need to be introduced into the slave problem as a constraint of the form d^t * y &lt;= UB - e. This has the issue of finding a sufficiently small value of the parameter e which does not cut off any feasible master problem solution. I have observed multiple integer feasible solutions with objective very close to UB. The final issue relates to the CB cuts added to the master problem in each iteration. Cutting one solution at a time can introduce a large number of constraints into the master. The master involves, for instance, choosing 100 &#8220;x&#8221; variables as 1, out of 6 million. I am afraid the number of integer feasible solutions in the master could be too large to be cut off by CB cuts. <br /><br />Regarding the second criterion, the big-M is not quite infinity and can be bounded by a reasonably small value. However, solving the problem as a single model still does not work for large instances. <br />Anonymoushttps://www.blogger.com/profile/03995432263472338626noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-31102584607007347882020-01-30T13:53:27.139-05:002020-01-30T13:53:27.139-05:00In general, I do not know any way to look at a pro...In general, I do not know any way to look at a problem and decide if Benders is the &#8220;right&#8221; way to model it. 
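The "not quite infinity" big-M concern raised above can be quantified with simple arithmetic: a solver accepts a binary z as integer feasible when it is within some integrality tolerance of 0 or 1, and a big-M constraint x &lt;= M*z multiplies that tolerance by M. A sketch assuming a typical tolerance of 1e-5 (actual defaults vary by solver):

```python
# A binary z counts as "integer feasible" at 0 when |z| is within the
# integrality tolerance; 1e-5 is assumed here as a representative value.
int_tol = 1e-5

def leakage(M):
    """Largest x that x <= M*z can admit while z still counts as 0."""
    return M * int_tol

print(leakage(1e3))  # about 0.01: harmless with a modest M
print(leakage(1e9))  # about 10000: a "not quite infinity" M lets a large x leak through
```

This is one concrete reason that huge M values cause trouble even before memory or conditioning issues arise, and part of why Benders (which removes the M values) becomes attractive.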
Generally speaking, if there were only one subproblem and a single model would not exceed the solver&#8217;s memory capacity, I would probably try a single model first. If the problem decomposes into several / many subproblems, maybe I would try Benders first.<br /><br />You mentioned using big M constraints, which to me is a special case. If I know valid choices for the M parameters that are &#8220;reasonably small&#8221;, I would be inclined to try a single model. On the other hand, if I find myself using &#8220;not quite infinity&#8221; values for M in order to be sure they are valid, I would definitely try Benders, since it gets rid of the M values.<br /><br />Bottom line, though, is that these are just personal tendencies. I don&#8217;t think there is a way to decide with any high degree of certainty just by &#8220;looking&#8221; at the model.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-28748776025553619262020-01-29T19:35:15.630-05:002020-01-29T19:35:15.630-05:00Dear Dr. Rubin,
I am thinking about using BD for...Dear Dr. Rubin, <br /><br />I am thinking about using BD for a MIP model. It has two types of binary variables, connected to each other through a constraint. The model also has continuous variables (part of disjunctive constraints, modeled with big M) which in turn depend on both sets of binary variables. The objective function involves minimizing a function of the continuous variables. Is there a way to find out beforehand whether BD will be a good approach to try? The number of integer variables in the model runs into tens of millions. An alternate question could be: are there problems for which BD does not work as well? How can one answer that question without lengthy experiments, just by &#8220;looking&#8221; at a MIP model?Zulqarnainhttps://www.blogger.com/profile/17690333321574031547noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-48269199132690381662020-01-07T17:39:40.369-05:002020-01-07T17:39:40.369-05:00Ed: Thanks for the links. I agree with your point ...Ed: Thanks for the links. I agree with your point about expressing constraints with integers rather than fractions (or their decimal equivalents) where practical.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-58475726029841940452020-01-07T16:33:14.736-05:002020-01-07T16:33:14.736-05:00I landed on this blog post today while trying to a...I landed on this blog post today while trying to answer the same question. And Paul&#8217;s September 23 post is correct that different numerical bases (not the plural of basis for the simplex method) lose precision with different numbers (e.g. 1/3 cannot be stored exactly in the base 2 arithmetic used on most of today&#8217;s computers). 
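The point about 1/3 in base 2 is easy to reproduce with only the standard library; a small illustration (the tiny sample point is arbitrary):

```python
from fractions import Fraction

# The double nearest to 1/3 is not exactly 1/3; converting the stored
# float back to an exact rational exposes the rounding error.
stored = Fraction(1 / 3)   # exact value of the base-2 approximation
exact = Fraction(1, 3)
print(stored == exact)     # False
print(stored - exact)      # a tiny nonzero rational

# Clearing denominators avoids the issue entirely: the coefficients of
# x + 2y = 3 are small integers, exactly representable in base 2,
# so feasibility checks at integer points are exact.
x, y = 1, 1                # a sample point satisfying the rescaled constraint
print(x + 2 * y == 3)      # True
```

With the fractional form, checking (1/3)x + (2/3)y == 1 at the same point depends on how two inexact coefficients round; with the integer form there is nothing to round.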
And yes, it is possible that just losing the last bit of the mantissa can be enough to break a tie in the algorithm&#8217;s decision, leading to alternate optimal bases (the simplex kind), which in turn leads to alternate branching selections when solving MIPs, and then all sorts of performance variability. This is one reason I recommend that, whenever you can, you replace fractions involving simple ratios of integers with integers during the model formulation step. For example, instead of representing (1/3)x + (2/3)y = 1 with decimal representations of 1/3 that result in a loss of precision around the 16th decimal place in a base 2 representation, multiply the constraint by 3 to get <br />x + 2y = 3, and now all your data is in terms of sums of powers of 2 and represented precisely. <br /><br />Given that I&#8217;m writing on a thread that is almost 10 years old, it may seem like nothing has changed, but that is not really the case. Progress has been made in understanding performance variability in mixed integer programming (e.g. see http://www.or.deis.unibo.it/andrea/Lodi-MIC2011-nopause.pdf). CPLEX now has a runseeds command that allows you to assess the level of variability in your MIP model. Also, since 2010, I have written a detailed paper on the numerical aspects of LP and MIP solvers that can be found on the INFORMS web site here: https://pubsonline.informs.org/doi/10.1287/educ.2014.0130. And if you don&#8217;t have the patience for a lengthy paper on numerical epsilons and other nitty-gritty details, you can try accessing a shorter PowerPoint version here: http://yetanothermathprogrammingconsultant.blogspot.com/2015/03/mip-links.htmlEd Klotznoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-2145162378506103302019-12-27T09:16:31.480-05:002019-12-27T09:16:31.480-05:00You&#8217;re welcome.You&#8217;re welcome.Paul A. 
Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-5274549313483457142019-12-27T02:35:48.028-05:002019-12-27T02:35:48.028-05:00I was not aware of the blog post you linked to, wh...I was not aware of the blog post you linked to, which addresses my issue pretty much directly.<br /><br />I am very grateful for your time and for your responses.Anonymousnoreply@blogger.com