Comments on "OR in an OB World" (Paul Rubin's blog)

Waldstein Wang (2017-07-23):
Thanks. I think I have the answer to my question :) I should focus on the formulation of the model or on tuning the solver.

Paul Rubin (2017-07-21):
I'm not sure what you mean. If you're saying it takes a long time to prove optimality (a slow-moving bound), that's common. Depending on which solver you use, you may be able to switch some parameter late in the search to make the solver put more effort into bound tightening. Frequently, though, the only alternative to patience is to find a tighter formulation for the problem (or some problem-specific cuts you can add to tighten the bound).
Waldstein Wang (2017-07-21):
Hi Paul,
I also read those slides and some of Dr. Fischetti's papers. I have an implementation of Benders decomposition based on his formulation, and I find that convergence is quite fast when testing on the uncapacitated facility location problem. However, after the algorithm has already reached the best known solution for the benchmark (published on the internet), it still needs a lot of time for the branching process.
Could you please give me some suggestions, or recommend some papers, on how to speed up the branching process?
Thx.

Paul Rubin (2017-07-19):
Sounds like a bug in your code. I suggest asking on the CPLEX help forum or on OR Exchange. There are links in the right margin of this page under "Places to Ask Questions"; for the CPLEX forum, you will need to drill down a level or two.
Foufa Leona (2017-07-19):
Hello,
I'm working with Benders' method, and what I'm doing now is single constraint generation. The problem is that at each iteration the solver deletes the previously added constraint and adds a new one. Do you know how I can fix this?
Thank you.

Paul Rubin (2017-07-12):
It's hard to diagnose a slow-moving bound. Sometimes it reflects a "loose" formulation, but sometimes it just happens and there is not much you can do to tighten the formulation. You might do a little searching to see if someone has developed facet-defining cuts for a problem similar to yours, and if so, try adding them. There will also likely be some solver parameters you can try adjusting. (Again, I'm out of touch with Xpress, but I know both CPLEX and Gurobi are tunable, so likely Xpress is too.)

If the out-of-memory error is due to the tree occupying too much memory, you can always reduce the memory footprint by switching to depth-first search (which may extend the solution time considerably). There may also be a parameter that lets you adjust when backtracking occurs, so that you can shift the search a bit toward depth-first (and away from breadth-first) without going totally depth-first. The less often you backtrack, the less memory you use (maybe).

sedaaağ (2017-07-11):
Thanks for your reply, Professor. I've managed to implement a lazy constraint callback to search in one tree and add combinatorial cuts when needed.
It solves much faster than before (without callbacks and combinatorial cuts) for small problems, but it still has trouble reaching the optimal value for medium and large problems. With the one-tree approach it gives a memory error, since it spends a long time finding a new incumbent after the 20th integer solution. I used a time limit to prevent this situation, and mixed classical Benders with the one-tree approach before finding the optimal value. In that case it ended up stuck around a 30% gap, giving similar lower bound values in every iteration. Do you have any suggestion or diagnosis for this situation?

Paul Rubin (2017-07-02):
I don't use MATLAB, so I can't answer completely. In other APIs, though, there is a version of the "get a solution" function that takes an additional argument: the index of the solution in the solution pool. There's also a function to find out how many solutions are in the pool. So you call the latter function to get the pool count, then call the "get x" function in a loop, retrieving one solution at a time.

Amit Unde (2017-07-02):
I am using the CPLEX solver for binary integer linear programming (cplexbilp) in MATLAB and would like to print out the alternative solutions identified in the solution pool.
The code looks as follows:

options = cplexoptimset('cplex');
options.Display = 'on';
options.solnpoolagap = 0;
options.solnpoolintensity = 4;
options.populatelim = 100;
[x, fval, exitflag, output] = cplexbilp(f, Aineq, bineq, Aeq, beq, [], options);

Each time only one solution is stored in x, but I know there are other solutions as well, because CPLEX tells me so.

Do you know how to print out those alternative solutions?

Richard Oberdieck (2017-06-27):
I agree with that 80/20 rule. To be fair, though, in other departments of the company there are OR teams. However, this seems to come more from OR people establishing such groups than from a top-down mandate to establish them.

Paul Rubin (2017-06-26):
Richard: Thanks for the comment (and confirmation). I suspect that, for a nontrivial number of companies, there's an 80-20 rule in effect: 80% of their analytics/OR problems are adequately handled by "standard" models and off-the-shelf software, and can be done (switching metaphors) by the laity. The other 20% are less obvious, may need customized models or algorithms, and require someone from the "priesthood".

After my post, I thought some more about companies routing truck deliveries. There is certainly good-quality software out there, and I'm sure a lot of companies can get by just doing the "obvious" with it.
Then there's UPS, who turned analytics and OR pros loose on the problem and discovered they could save a boatload (bad choice of word in this case; make that a "bunch of truckloads") of money by turning right rather than left at some intersections and taking the long way around. You're not going to get that insight (and advantage) from canned software used by nontechnical people.

Richard Oberdieck (2017-06-26):
Very interesting post! I am, however, not surprised that the companies (apparently) did not know how to handle your question: I work in a large utility company, and in my department I am the only one with formal OR training. And our entire business model is basically "optimize stuff". But their problems are far from solved (I work on them every day), and so there is a need for people with OR training who know how to "cook".

Paul Rubin (2017-06-26):
There's always another way to do MIP stuff; you just don't necessarily want to do it. :-)

First, let me point out that (barring some special property of the problem that prohibits it) there might not be just a single index where the maximum of the z's is achieved. So I'll henceforth refer to the "official" maximum as the one selected (ties being broken arbitrarily).

In terms of a single MIP model, I don't think you can avoid $M$.
Keep in mind that if you know up front that $z_i \in [a, b]$ for all $i$, then $M = b - a$ does the job and may not be too painful.

The only way I can think of, off the top of my head, to completely avoid $M$ is combinatorial Benders decomposition. In the master problem, you have binary variables $x_i$ signaling whether the max will occur at $z_i$, plus a constraint $\sum_i x_i = 1$ to ensure that exactly one winner is named. In the subproblem, values for $x$ get passed in: for the lone index $j$ where $x_j = 1$, you add the constraints $z_j \ge z_k \quad \forall k \neq j$. It's conceptually correct, but I wouldn't put any money on it being faster than (or as fast as) a big-$M$ approach. The one case that might send me scurrying to Benders land would be if $b - a$ were big enough to induce numerical stability problems.
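[For reference, the classical big-$M$ construction being discussed is standard (this sketch is not from the original thread): introduce a variable $y$ for the maximum and binaries $x_i$ marking the index where it is achieved. Assuming $z_i \in [a, b]$ for all $i$ and taking $M = b - a$:]

```latex
y \ge z_i \quad \forall i, \qquad
y \le z_i + M(1 - x_i) \quad \forall i, \qquad
\sum_i x_i = 1, \qquad x_i \in \{0, 1\}.
```

[The first family forces $y$ to be at least the maximum of the $z_i$; the second family is binding only at the index with $x_i = 1$, pinning $y$ to the value of the selected maximizer. The choice $M = b - a$ suffices because for $x_i = 0$ the bound $z_i + M \ge a + (b - a) = b \ge y$ is never restrictive.]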
Meritxell (2017-06-26):
Hi Paul,

Really nice post. I need to linearize the maximum of continuous variables z_i and, of course, to know at which index it is achieved. So far I have been using the classical way (i.e., an auxiliary binary variable indicating the index of the maximum and a big-M constraint). However, after a discussion with some colleagues, we were wondering whether it is possible to avoid big-M constraints entirely. I have not found such an approach in the literature, nor have I found one myself. Have you ever seen something similar?

Thanks!
Meritxell

Paul Rubin (2017-06-25):
Yes, the upper and lower bounds come from the master problem. If some candidate solution survives the callback (no cut generated), it becomes the new master problem incumbent. As long as at least one incumbent has been found, the best (most recent) one will give you the upper bound, and that bound will be valid: the objective value of the optimal solution will not be any larger than the objective value of that incumbent. If you have not yet found an incumbent, I believe getObjValue will throw an exception. No, you will not get premature termination.
Jyotirmoy Dalal (2017-06-24):
Dear Prof. Rubin,

In a lazy constraint callback based BD implementation for a minimization problem, suppose I want to stop CPLEX early using an optimality gap criterion (setting epgap) and, on termination, extract the UB and LB values using cplex.getObjValue() and cplex.getBestObjValue(). Am I not extracting the UB and LB of the MP? It may be possible that the overall problem has a different (higher) UB, so won't this terminate earlier than I intend? If so, what would be the correct way to stop a lazy constraint callback based BD early by specifying the epgap parameter? Is there any mechanism to tell CPLEX to continue in a situation where the MP's gap is smaller than epgap but the overall problem's gap is still bigger than epgap? Please advise.

Anonymous (2017-06-19):
This comment has been removed by a blog administrator.

Paul Rubin (2017-06-02):
Jess: First, just to be clear, let me emphasize that heuristic solutions generated by CPLEX _are_ checked against lazy constraints; it's only heuristic solutions generated by you in a heuristic callback that are not checked (at least as of the last I heard; this might be subject to change).

If I understand the question correctly, this is what I would do. Since it's Benders, I assume you have a lazy constraint callback (LCC). If I generated a candidate solution in a heuristic callback (HC), before adding it (using setSolution in the Java API) I would first run it through the cut generator in the LCC.
If the cut generator returned a feasibility cut, I would queue that somewhere in program memory; the next time the LCC was called, it would check the queue and add any cuts it found there (emptying the queue in the process). If the cut generator agreed that the integer part of the heuristic solution was okay but generated an optimality cut because the value of the surrogate real variable (the SP contribution to the overall objective) was too charitable, I would substitute the value of that variable that the cut generator came up with and inject that solution (but not queue the optimality cut). This applies at all nodes, including the root.

One thing people tend to forget is that there is nothing stopping the various callbacks from communicating with each other through program global memory.

Jess White (2017-06-02):
Thanks for this helpful post. You mentioned that heuristic solutions won't be checked against the pool of lazy constraints, since they are expected to be feasible. However, in a Benders implementation, the cuts added using a heuristic solution can define a lower bound for the artificial variables, and hence for the branch and bound. In other words, from a heuristic solution for the integer MP (master problem), we might not have a solution for the real variables of the SP (subproblem). To give a stronger lower bound at the root node, we may solve the SP for the given heuristic solution, then generate and add the related cuts to the MP. If this is not done by the lazy constraint callback, how can we add them to the MP at the root node?
Thanks.

Azat Ibrakov (2017-06-01):
It has solved the problem, thank you so much!

Paul Rubin (2017-05-31):
Thanks for pointing that chapter out, Samik! I really liked the XPRESS book back when I used it (I shudder to think how long ago), and it's good to know that it's still out there.

Samik Raychaudhuri (2017-05-31):
Another reference I have used in the past is Ch. 3 of the XPRESS LP/MIP book, available at: http://www.maths.ed.ac.uk/hall/Xpress/FICO_Docs/Xpress-booka4.pdf

Paul Rubin (2017-05-30):
Nice. Thanks.
Rob Pratt (2017-05-30):
Table 1 here:
http://cepac.cheme.cmu.edu/pasilectures/grossmann/RamanRellogicCACE.pdf

Paul Rubin (2017-05-30):
Rob: I agree. In fact, I was thinking that there was probably a "cheat sheet" somewhere that showed how to convert basic logical expressions (conjunction, disjunction, negation, implication) into constraints on binary variables. You wouldn't happen to have a link to one, would you?
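[For readers without access to the linked table, the basic conversions are standard (this summary is not reproduced from the linked paper). With binary variables $x_1, x_2$ representing the truth of propositions $P_1, P_2$:]

```latex
\begin{aligned}
P_1 \lor P_2 &: \quad x_1 + x_2 \ge 1 \\
P_1 \land P_2 &: \quad x_1 = 1, \; x_2 = 1 \\
\lnot P_1 &: \quad \text{replace } x_1 \text{ by } 1 - x_1 \\
P_1 \Rightarrow P_2 &: \quad x_1 \le x_2 \\
P_1 \Leftrightarrow P_2 &: \quad x_1 = x_2
\end{aligned}
```

[More complex expressions can be handled by first converting to conjunctive normal form and writing one such inequality per clause.]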