Sunday, October 9, 2011

Benders Decomposition Then and Now

A couple of years ago, a coauthor and I had a paper under review at a prestigious journal that shall remain nameless. In the paper we solved a mixed integer linear program (MILP) using Benders decomposition, with CPLEX as the solver, and we employed callbacks to check each potential solution to the master problem as it occurred. One of the reviewers asked me to justify the use of callbacks. My first inclination was a variation on “because it's standard practice”, but since I have not actually surveyed practitioners to confirm that it is standard (and because, as an academic, I'm naturally loquacious), I gave a longer explanation. What brought this to mind is a recent email exchange in which someone else asked me why I used callbacks with Benders decomposition. It occurred to me that, assuming I'm correct about it being the current norm, this may be a case of textbooks not having caught up to practice. So here is a quick (?) explanation.

The original paper by Jack Benders [1] dealt with both continuous and discrete optimization, linear and nonlinear. The application I have in mind here deals with problems that, without loss of generality, can be modeled as follows:
\[ \begin{array}{lrclcc} \textrm{minimize} & c_{1}'x & + & c_{2}'y\\ \textrm{subject to} & Ax & & & \ge & a\\ & & & By & \ge & b\\ & Dx & + & Ey & \ge & d\\ & x & \in & \mathbb{Z}^{m}\\ & y & \in & \mathbb{R}^{n} \end{array} \] 
Benders decomposition splits this into a master problem and a subproblem. The subproblem must be a linear program (LP), so all the discrete variables have to go into the master problem (a MILP). You can optionally keep some of the continuous variables in the master, but I will not do so here. You can also optionally create multiple subproblems (with disjoint subsets of $y$), but again I will not do so here. Finally, in the spirit of keeping things simple (and because unbounded models are usually a sign of a lazy modeler), I'll assume that the original problem is bounded (if feasible), which implies that both the master and subproblem will be bounded (if feasible).

The master problem is as follows:
\[ \begin{array}{lrclccc} \textrm{minimize} & c_{1}'x & + & z\\ \textrm{subject to} & Ax & & & \ge & a\\ & g'x & + & z & \ge & f & \forall (g,f)\in\mathcal{O}\\ & g'x & & & \ge & f & \forall (g,f)\in\mathcal{F}\\ & x & \in & \mathbb{Z}^{m}\\ & z & \in & \mathbb{R} \end{array} \]
Variable $z$ acts as a surrogate for the objective contribution $c_{2}'y$ of the continuous variables. Set $\mathcal{O}$ contains what are sometimes known as “optimality cuts”: cuts that correct underestimation of $c_{2}'y$ by $z$. Set $\mathcal{F}$ contains what are sometimes known as “feasibility cuts”: cuts that eliminate solutions $x$ for which the subproblem is infeasible. Initially $\mathcal{O}=\emptyset=\mathcal{F}$.

The subproblem, for a given $x$ feasible in the master problem, is:
\[ \begin{array}{lrcl} \textrm{minimize} & c_{2}'y\\ \textrm{subject to} & By & \ge & b\\ & Ey & \ge & d-Dx\\ & y & \in & \mathbb{R}^{n} \end{array} \]
Optimality cuts are generated using the dual solution to a feasible subproblem; feasibility cuts are generated using an extreme ray of the dual of an infeasible subproblem. The precise details are unnecessary for this post.
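For completeness, though, here is the standard construction. The dual of the subproblem is
\[ \begin{array}{lrclcl} \textrm{maximize} & u'b & + & v'(d-Dx)\\ \textrm{subject to} & B'u & + & E'v & = & c_{2}\\ & u,v & \ge & 0 \end{array} \]
If $(\hat{u},\hat{v})$ is an optimal solution to the dual for the current $\hat{x}$, it remains dual feasible for every $x$ (the dual's feasible region does not depend on $x$), so $z\ge\hat{u}'b+\hat{v}'(d-Dx)$ is a valid optimality cut; in the notation above, $(g,f)=(D'\hat{v},\,\hat{u}'b+\hat{v}'d)$. If instead the subproblem is infeasible and $(\bar{u},\bar{v})$ is an extreme ray of the (unbounded) dual, the feasibility cut is $\bar{u}'b+\bar{v}'(d-Dx)\le0$, i.e., $(g,f)=(D'\bar{v},\,\bar{u}'b+\bar{v}'d)$.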

Back in the '60s ('70s, '80s), solvers were essentially closed boxes: you inserted a problem, set some parameters, started them and went off to read the collected works of Jane Austen. Intervening in the algorithms was not an option. Thus the original flowchart for Benders decomposition looked like the following.
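Equivalently, in code form (a sketch only; the types and helper methods are hypothetical placeholders, not any solver's API):

    // Classical Benders: re-solve the master MILP from scratch after each cut.
    public abstract class ClassicalBenders {
        static final double EPS = 1e-6; // tolerance for comparing z-hat to c2'y

        abstract MasterSolution solveMaster();                  // solve the MILP master to optimality
        abstract SubproblemResult solveSubproblem(double[] x);  // solve the LP in y for fixed x
        abstract void addFeasibilityCut(SubproblemResult r);    // cut built from a dual extreme ray
        abstract void addOptimalityCut(SubproblemResult r);     // cut built from the dual solution

        void run() {
            while (true) {
                MasterSolution ms = solveMaster();
                SubproblemResult sr = solveSubproblem(ms.xHat);
                if (sr.infeasible) {
                    addFeasibilityCut(sr);            // x-hat is no good: cut it off, start over
                } else if (sr.objValue > ms.zHat + EPS) {
                    addOptimalityCut(sr);             // z-hat underestimated c2'y: correct it, start over
                } else {
                    return;                           // accept (x-hat, y-hat): it is optimal
                }
            }
        }

        static class MasterSolution { double[] xHat; double zHat; }
        static class SubproblemResult { boolean infeasible; double objValue; }
    }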
The modern approach, using callbacks, looks like this:
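Concretely, the incumbent check moves into a lazy constraint callback. Here is a minimal sketch using the legacy CPLEX Java callback API (solveSubproblemAndBuildCut is a hypothetical stand-in for solving the LP subproblem and constructing the appropriate cut):

    import ilog.concert.*;
    import ilog.cplex.*;

    // Modern Benders: one search tree; vet each candidate incumbent as it appears.
    class BendersCallback extends IloCplex.LazyConstraintCallback {
        private final IloNumVar[] x; // master integer variables
        private final IloNumVar z;   // surrogate for c2'y

        BendersCallback(IloNumVar[] x, IloNumVar z) { this.x = x; this.z = z; }

        @Override
        protected void main() throws IloException {
            double[] xHat = getValues(x); // candidate incumbent proposed by the solver
            double zHat = getValue(z);
            // Infeasible subproblem => feasibility cut; z-hat < c2'y => optimality
            // cut; otherwise null, meaning the candidate is acceptable.
            IloRange cut = solveSubproblemAndBuildCut(xHat, zHat); // hypothetical
            if (cut != null) {
                add(cut); // reject the incumbent; the cut removes it from the master
            }
            // Returning without adding a cut accepts (x-hat, z-hat).
        }

        private IloRange solveSubproblemAndBuildCut(double[] xHat, double zHat)
                throws IloException {
            return null; // placeholder: see the cut formulas above
        }
    }

You attach the callback to the master model with cplex.use(new BendersCallback(x, z)) and then call cplex.solve() once.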
Obviously the modern approach can be implemented only if you are using a solver that supports callbacks, and it requires more advanced programming than the classical approach. Other than that, what are the advantages and disadvantages?


The main advantage of the callback approach is that it is likely to avoid considerable rework. In the original approach, each time you add a cut to the master problem, you have to solve it anew. Although the new cuts may change the structure of the solution tree (by changing the solver's branching decisions), you are probably going to spend time revisiting candidate solutions that you had already eliminated earlier. Moreover, you may actually encounter the optimal solution to the original problem and then discard it, because a superoptimal solution that is either infeasible in the original problem or has an artificially superior value of $z$ causes the true optimum to appear suboptimal. With the callback approach, you use a single search tree, never revisit a node, and never overlook a truly superior solution.

The main disadvantage of the callback approach is that you potentially generate a cut each time a new incumbent is found, and some of those cuts quite possibly would have been avoided had you followed the classical approach (adding a cut only when an “optimal” solution was found). Modern solvers are good at carrying “lazy” constraints in a pool, rather than adding the cuts to the active constraint set, but there is still a chance that the modern approach will take longer than the classical approach. I believe (but have no data to support my belief) that the modern approach will typically prove the faster of the two.

[1] Benders, J. F., Partitioning Procedures for Solving Mixed-Variables Programming Problems, Numerische Mathematik 4 (1962), 238--252.


58 comments:

  1. Paul: not re-building the master search tree every time is definitely the way to go. One of the first papers to advocate this approach was the branch-and-check paper by Erlendur Thorsteinsson. You can download it from this page: http://www.mmedia.is/esth/papers/. It's the one that appeared in the CP 2001 proceedings.

  2. @Tallys: Thanks for the link (and the affirmation). IPs being IPs (I think "IP" stands for "Intentionally Perverse"), I'm sure that there will be occasional problems where rebuilding the tree ends up faster, but I agree that those are likely to be the exceptions.

  3. Probably not the most relevant comment, but how do you type in the formulas in Blogspot? I mean, how do you get them to show so nicely?

  4. @Erling: I'm writing the formulas in LaTeX and using MathJax to render them. In Blogger's dashboard, I went to Settings > Posts and Comments and pasted the following into the Post Template box:

    <script type="text/x-mathjax-config">
    MathJax.Hub.Config({
    tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
    });
    </script>
    <script type="text/javascript"
    src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
    </script>

  5. @Erling: I should add that when I write a post, I need to switch into HTML mode initially to position the cursor after the script. Once I've started typing, I can switch back to composition (rich text) mode.

  6. Dear Professor Rubin,

    For the callback diagram, in the callback (incumbent) loop, what do you mean by "Accept x_hat, z_hat"? I think z_hat is not well defined when we use callbacks; sometimes we find that z_hat is even lower than c2'y.

  7. The arrow leading into that box comes from a diamond designating the result of solving the subproblem. This particular arrow represents the case where z-hat matches c_2'y.

  8. Dear Professor Rubin,

    Thank you for your explanation.

    What I mean is "Is it really correct to have that arrow?".

    In the branching tree, is it possible to have c_2'y_hat < z_hat? From our implementation of Benders decomposition with callbacks, it is possible.

    Do you have any stopping criteria for the incumbent loop?

    Thank you very much.

  9. @NVA: Yes, it is possible that c_2'y_hat < z_hat; that is the arrow downward from the diamond.

    The incumbent loop is not a closed loop; the arrow moving up and left from the circular node goes to the "Solve master" box; it does not connect directly to the "incumbent" arrow oriented up and right. After processing the most recent incumbent, the solver regains control and either accepts it or adds a cut. It is possible that the incumbent callback will be called again at the same master problem node. Obviously, the callbacks stop when the solver decides it is done with the master problem.

  10. Dear Professor Rubin,
    If the constraint Dx + Ey >= d were not there, would it still be possible to apply Benders decomposition?
    If yes, then how would you generate the Benders cuts?

  11. If the connecting constraint were not there, you would have no need for Benders; the problem would separate into an IP involving only x and an LP involving only y. Solve them independently and glue the solutions together.

  12. Thank you, Professor, for your explanation.

  13. "Moreover, ...because a superoptimal solution that is either infeasible in the original problem or has an artificially superior value of z causes the true optimum to appear suboptimal."

    Prof. Rubin

    I don't quite follow the meaning of this sentence. Could you please explain more about it? Thanks a lot!

  14. @C.L.: I'll try. Suppose that we are doing Benders the old-fashioned way (solve the master to optimality, add a cut, repeat), and suppose that the optimal solution to the current master problem is destined not to be the optimal solution to the original problem. Then the solution to the current master must be superoptimal (too good to be true) in the original problem (because the optimal solution to the original problem is feasible in the current master). So the solution we are destined to get when we finish solving the current master will prove either to be infeasible in the original problem (in which case we will add a feasibility cut to cut it off) or to have a value of $z$ that underestimates (since we are minimizing) the actual contribution of the terms for which it is a surrogate ($c_2'y$).

    As we are solving the current master, it is possible that at some node we will actually encounter the solution that is optimal in the original problem. We know it is feasible in this master, so it lurks somewhere in the search tree -- in a node that is destined to be pruned, because we are destined to find this superoptimal solution that we will subsequently reject. My point was that if we can reject the superoptimal solution as soon as we encounter it (using callbacks), then it will not have a chance to prune the node with the real optimal solution, and we will encounter the real optimum sooner.

    Replies
    1. Prof. Rubin

      Your detailed reply resolved my question quite clearly, thank you very much!

      So, do you agree that the use of callbacks in implementing Benders decomposition is accepted as standard by most researchers in the OR field?

      C.L

    2. I'm hesitant to say "most", since I have not surveyed people in the field. I think it's safe to say that using callbacks is common practice among people working actively with Benders decomposition.

  15. Hello again, Prof. Rubin

    I was implementing a Benders decomposition algorithm using LazyConstraintCallback, and recently I wanted to make some improvements to accelerate the search for new incumbents of the master problem. The master problem contains a three-dimensional binary variable and a continuous variable. I designed a heuristic method to assign an integer value to the integer-infeasible solution of the node LP relaxation, and I plan to implement it using the HeuristicCallback of CPLEX. However, it seems that the newly found incumbent cannot be passed to LazyConstraintCallback to test whether it should be rejected (with a new optimality cut being added) or not. I tried two ways to use the HeuristicCallback:

    1) When a new incumbent of the master problem is found, I use the setBounds() method to set both the upper bound and the lower bound equal to the incumbent value, then use the solve() method to re-solve the current node relaxation. It shows that the node relaxation is optimally solved and a better incumbent is produced. But this incumbent seems to be ignored, and no LazyConstraintCallback routine is invoked.

    2) After the above method failed, I used the setSolution() method after re-solving the node relaxation to inject the incumbent into the model. Again, no LazyConstraintCallback routine is invoked, so the incumbent is accepted by CPLEX. But actually this incumbent should be rejected.

    Have you encountered any situation like this? What do you think the problem is, and how can I fix it?
    I am really eager to have your reply soon.
    Thanks a lot.

    Replies
    1. You definitely need to call setSolution if you want the solution considered.

      I verified that CPLEX apparently does not call a lazy constraint callback (LCC, to save typing) when a solution is injected by the user in a heuristic callback (HC). Frankly, this makes sense to me: CPLEX (reasonably) expects that a user will not inject a solution that should be infeasible.

      If your only concern is that the infeasibility be detected and the solution excluded, I suggest that you pull that testing/cut generating logic out of the LCC and make it a separate function, called from both the LCC and the HC. The HC calls it before injecting the new solution, and skips the call to setSolution if the function says the new solution would be infeasible.

      If you really want to include the cut that the LCC would have generated, had it seen your heuristic solution, you will need to (a) create a place in global memory to store the cut and (b) add a cut callback (CC) in addition to the other two callbacks. If the HC calls the cut function and gets back a cut (meaning the solution should be excluded), it stores the cut in the global space and skips setSolution. Whenever the CC is entered, it checks the storage area to see if there is a cut queued up. If so, it adds the cut (and clears the storage area). If not, the CC just exits without doing anything.

    2. Prof.Rubin

      Thank you very much for your quick reply.

      I indeed want to include the cut that the LCC would have generated, because the cut would possibly produce a better lower bound for a minimization problem. Unfortunately, your suggestion to add another cut callback might not be permitted by CPLEX, since it does not allow two CCs to be used in one model: CPLEX would simply call the latter CC passed to the use() method and ignore the first. Moreover, in CPLEX v12.3, CutCallback cannot be used any more; only LazyConstraintCallback or UserCutCallback is allowed.

      Given this circumstance, what would you suggest I do to implement my strategy? Thanks again!

    3. You are correct that CutCallback is no longer available. If I recall correctly, when they first refactored it into LazyConstraintCallback and UserCutCallback, you could still directly subclass CutCallback, but that's no longer true.

      I'm looking further into this, because it may have implications beyond your case, but for right now I don't see any good way to add that cut, at least not globally. You could use a branch callback to create a single child node by adding the cut, which would apply it to all descendants of the node where your heuristic found the rejected solution; but that would not apply it to other branches of the tree.

    4. It belatedly occurred to me that a partial solution would be to check the heuristic solution before injecting it and, assuming it was rejected, queue the cut to be added the next time the LCC is called. The logic of the LCC would be to (a) test the new incumbent and, if it was not feasible, (b) add a new cut to chop it off, then (c) add any queued cuts (and clear the queue). If the LCC adds a cut, it should be called again immediately to give it the chance to add additional cuts. (The repeat calls stop when either the LCC does not add a cut or it adds a cut that duplicates one it already added. I think.) This won't add the queued cut immediately, and it may never add it (if no new incumbents are found), but it might be better than nothing.
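      A sketch of what I have in mind (untested; buildBendersCut is a hypothetical stand-in for the shared cut-generating routine):

      import ilog.concert.*;
      import ilog.cplex.*;
      import java.util.ArrayDeque;
      import java.util.Deque;

      // Shared queue: the heuristic callback deposits cuts for rejected candidates;
      // the lazy constraint callback flushes the queue whenever it runs.
      class CutQueue {
          static final Deque<IloRange> pending = new ArrayDeque<IloRange>();
      }

      class QueueFlushingLazyCallback extends IloCplex.LazyConstraintCallback {
          private final IloNumVar[] x;
          QueueFlushingLazyCallback(IloNumVar[] x) { this.x = x; }

          @Override
          protected void main() throws IloException {
              double[] xHat = getValues(x);
              IloRange cut = buildBendersCut(xHat); // hypothetical shared routine
              if (cut != null) {
                  add(cut);                         // (a)+(b): reject this incumbent
              }
              IloRange queued;
              while ((queued = CutQueue.pending.poll()) != null) {
                  add(queued);                      // (c): add cuts queued by the heuristic
              }
          }

          private IloRange buildBendersCut(double[] xHat) { return null; } // placeholder
      }

      (The heuristic callback would call the same buildBendersCut routine on its candidate and, if a cut comes back, do CutQueue.pending.add(cut) instead of calling setSolution.)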

    5. Thanks a lot for your answer. I tried it your way and it works now. Though it cannot guarantee that the cuts are added, just like you said, it is better than nothing! Thanks again, Prof. Rubin.

  16. Dear Prof. Rubin,

    I have been applying Benders decomposition to a couple of difficult MIP problems for quite some time with little success. I have also tried many of the refinements (e.g., McDaniel & Devine (1977); Geoffrion & Graves (1974); Magnanti & Wong (1981)), but haven't had success with any of them: all of them perform, in terms of CPU times, much worse than the original MIP formulations. Recently, I read your blog post on the modern implementation of Benders decomposition, but wasn't quite sure how to code it. Then I found a code (ilobendersatsp.cpp, for the ATSP) which comes with CPLEX 12.3. Before trying to adapt it for my problems, I wanted to see how much of an improvement it gives over the classical version of Benders for the ATSP. So I modified ilobendersatsp.cpp to get the classical implementation of Benders, and applied both versions (classical and modern) to several problem instances of the ATSP (e.g., br17, ry48p, p43) available at TSPLIB (http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/ATSP.html). In all these instances, the classical implementation beats the modern implementation.
    However, the difference in CPU times wasn't much, since all these instances could be solved in a few seconds. So I tried a larger problem (kro124), for which the modern implementation took a little less than 2 hours, while the classical implementation could solve the same in less than 10 minutes.

    So, I have the following questions now:

    1. Was this expected (i.e., the modern implementation to perform worse than the classical implementation)? I must confess that although ilobendersatsp.cpp came with CPLEX 12.3, I am running it on CPLEX 12.1.

    2. The modern implementation of Benders was my last hope to solve all the open problems I have. But if it performs worse than classical Benders, which in turn performs worse than the original MIP formulation, what else can I try?

    Thanks and Regards,
    Sachin

    Replies
    1. Sachin,

      I just glanced at the Java version of the Benders ATSP example. I'm on CPLEX 12.4, so I'm not sure if my version is the same as yours, but I suspect so. It may not be safe to run it on CPLEX 12.1. I would need to examine the code more closely than I have time to do right now, but I'm pretty sure it relies on the fact that, in recent versions of CPLEX, lazy constraint callbacks are called whenever CPLEX finds a new incumbent by any means (including heuristics). That's a relatively recent change to CPLEX, and I'm not positive it was in effect back in version 12.1.

      Assuming your results are correct, and not an artifact of the version you are using, would I expect that? No. MIPs being the annoyingly inconsistent buggers that they are, it's certainly possible -- anything is possible -- but I would expect that on most problems the callback approach would be faster. Although I'm not sure it's related to the difference between "classical" and "modern" on this problem, I'll also point out that the ATSP example is not entirely typical, in that the objective function contains no contribution from the LP variables.

      Will Benders perform worse than solving the original MIP formulation outright? That depends on a number of factors, including the structure of the original problem, the size of the original problem, and recent sunspot activity. With any decomposition technique, there is a fair bit of overhead, so decomposing an "easy" problem will usually end up taking more time than solving it outright. At what point, if any, the decomposition starts to pay for itself is anybody's guess. (This, by the way, holds true for pretty much any "clever" modeling or algorithmic technique applied to MIPs.)

      As to your second question, that is very much problem specific. If you want to send a description of your difficult MIP problems to me (rubin msu edu), I'll try to take a look at them and see if anything pops out. Right at the moment I'm oddly busy, but within a couple of weeks or so I expect to have some free time.

    2. Dear Prof. Rubin,

      Thank you so much for your prompt response and offer to help. I will soon prepare brief documents on the problems and send them to you by e-mail.
      One of them uses a big M in the original formulation, due to which Benders decomposition produces extremely weak cuts, and the iterations just run forever without any improvement (you have talked about this problem in your recent paper in OR: Bai & Rubin (2009)). So I am trying to use combinatorial Benders (CB) cuts, as proposed by you in this paper, for this problem. However, the CB cuts themselves seem to be very weak, and I would appreciate some help on this as well. I will send you the description shortly. In the meanwhile, could you please quickly answer this: the master problem in the Benders decomposition of this problem is just a feasibility problem (no integer variables in the objective function). I am currently trying to solve it using classical Benders with CB cuts. However, I am wondering whether solving the problem using modern Benders as opposed to classical Benders (both using combinatorial Benders cuts) will make much difference? Classical Benders has the drawback that solving the master problem to optimality between cuts has the potential to waste time. But in this problem, we are solving the master problem only to feasibility (since the objective function is 0) between cuts, even in classical Benders.

      Coming back to our discussion of the ATSP: the code ilobendersatsp.cpp allows you to specify whether you want to use (option 0) just the lazy constraint callback or (option 1) the user cut callback in addition to the lazy constraint callback.
      I have run the code using both options (0 and 1), and I can see some difference in the node tree the algorithm traverses as a result.

      Thanks and Regards,
      Sachin

    3. Regarding option 0 and option 1, I've always used the equivalent of option 0 (wait for a new candidate incumbent before adding cuts), but I know from forums that some people use option 1 (aggressively seek cuts at every node, perhaps by rounding the node solution and using that in the subproblem, or perhaps by passing a fractional master solution to the subproblem). My guess is that, like everything else, sometimes option 0 is better and sometimes option 1 is better. My hesitation about option 1 is that it creates a lot of "drag", both because the user cut callback is called at every node (perhaps more than once) while the lazy constraint callback is called only when an incumbent is found, and because it leads to a lot more time spent solving the subproblem. If option 1 tightens the master quickly enough to prune nodes high in the tree, the drag may be more than compensated; but I have never felt confident enough in that to try it.

    4. Hello Sachin, I would like to talk about this work. I am trying to use Benders for the ATSP and I am having difficulties. Could you send me an email: michellimaldo@gmail.com

      Thanks.

  17. Dear Prof. Rubin,

    Could the callback technique be applied to lazy constraints (managed by the user, not by CPLEX) to solve a MIP problem? My intuition tells me that chances are it can speed up the solution process, but I'm not sure.

    Thanks a lot.

    Hoa

    Replies
    1. Hoa,

      Callbacks certainly can add lazy constraints. Are you talking about a Benders decomposition or just solving a typical MIP model?

      I'm not sure what you mean by "managed by users, not by CPLEX". Users can add a lazy constraint but cannot directly remove it (other than by stopping and restarting the solution process). You can use the "cut management" feature to tell CPLEX that it is free to delete the cut later if the cut is not having much effect, but it will still be up to CPLEX to decide when to enforce the cut and whether to purge it.

    2. Thank you very much.

      I'm talking about a typical MIP. What I mean is that in the callback, I add a constraint as a normal constraint (i.e., a constraint that is NOT added to the lazy constraint pool of CPLEX). But now I think this might not be possible in CPLEX.

      Hoa

    3. Hoa: You cannot modify the model itself while the solver is running. You can add a "user cut" (which is different in some respects from a lazy constraint) in a callback. User cuts are typically used to tighten the LP relaxations. They can be global or local to a particular subtree. The key distinction from a lazy constraint is that user cuts must not cut off integer-feasible solutions.

      I'm not positive, but I'm pretty certain that user cuts also go into a pool and are activated only when they would be violated.

  18. Hello again, Prof. Rubin,

    In the constraints Dx + Ey >= d, can D contain some big-M coefficients? If yes, what do you think of the efficiency of Benders decomposition in this case?

    Thank you for your reply.

    Hoa

    Replies
    1. Yes, D can contain "big M" coefficients. In the general case, you will suffer the usual perils of loose bounds and possible numerical instability (about which I've written in other posts). In the special case where the "big M" coefficients multiply binary variables (one per constraint), the intent is usually to impose a constraint on the $y$ variables only when the corresponding $x$ variable is 0. In that case, you are better off literally adding or removing the constraint in the subproblem, rather than generating master cuts containing $M$. Try searching the phrase "combinatorial Benders cuts"; you should find a reference to a nice paper by Codato and Fischetti on the subject.
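      The basic combinatorial cut is easy to state: if the subproblem is infeasible when the master binaries take the values $\hat{x}$, add \[ \sum_{i:\hat{x}_{i}=0}x_{i}+\sum_{i:\hat{x}_{i}=1}(1-x_{i})\ge1, \] which just says that at least one of the binaries must change. Codato and Fischetti strengthen this by restricting the sums to the binaries appearing in a minimal infeasible subsystem of the subproblem.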

  19. Hi Professor Rubin,

    I have used logic-based Benders decomposition in my recent paper, and your posts on this topic have been very helpful.
    Will you give me permission to reproduce your Benders flowcharts in a Beamer (LaTeX) presentation? I need them to explain the method.

    Thank you very much.

    Replies
    1. Sure, no problem. The source code for the diagrams is LaTeX using the TikZ package, so it's very compatible with Beamer. If you write to me at rubin AT msu DOT edu, I'll send you the TikZ code.

  20. Hello Dr Rubin,

    I am experiencing a problem related to callback and would be thankful to you if you can clarify.

    I am using CPLEX 12 with the Java API to implement Benders decomposition. I have developed a correctly working implementation of Benders decomposition, and am now trying to learn callbacks.

    I am new to this concept and am trying to learn from your Java code example files (link given in another post of yours).

    My master problem (MP) contains binary variables and one continuous variable (as in a usual MIP problem), and the subproblem (SP) is an LP flow problem.
    The MP has some surrogate constraints to ensure feasibility of the SP at all times, so I have not needed any feasibility cuts so far.


    When I restructured my code along the lines of your files BendersExample.java and Benders.java, I observed this:

    MP solved -> SP solved correctly; then the second time, the SP gives an infeasible solution! On a closer look, I found that the second time the values of the binary variables passed from the MP have changed (expected), but they violate my surrogate constraints. How can that be possible? Does it mean that the solver has reached a node in the B&C tree which is infeasible (and will be "pruned by infeasibility" eventually)? Also, does it mean that in an implementation with callbacks I MUST keep the option of adding feasibility cuts, even though in the conventional Benders framework they are not required, thanks to my surrogate constraints?

    Thanks in advance for any help.
    =======================

    Jyotirmoy Dalal

    Replies
    1. If the surrogate constraints are sufficient to guarantee subproblem feasibility, and if the master problem variable values are being passed correctly, there should be no need for feasibility cuts.

      When you say that, on the second call to the subproblem, the master variable values violate your surrogate constraints, are you looking at those values in the callback (after they are obtained from the master but before they are passed to the subproblem) or in the subproblem? I'm trying to narrow down whether they are being generated incorrectly, obtained incorrectly, passed incorrectly or applied incorrectly.

      Something to try is to record those incorrect binary variables and save any optimality cuts generated somewhere in your code. After the error occurs, add the stored optimality cuts to the original master problem model, and fix the binary variables by setting their lower and upper bounds equal to the recorded values. Then run the conflict refiner to see if CPLEX thinks they violate any constraints. If CPLEX thinks they satisfy all constraints (no conflicts), see if you can confirm violations by "hand" calculations.

  21. Hi Dr Rubin,
    I am trying to solve a stochastic MILP with big-M constraints. The integer variables (480 of them) are binary, and it is taking CPLEX a long time to solve. Do you have any references on how I might decompose the problem, probably using Benders decomposition? What's the best way to handle this problem?

    Replies
    1. I know there are papers on decomposing stochastic MILPs, but I don't happen to know the details of any of them. For a general paper on using Benders to eliminate Big M constraints, search for "combinatorial Benders cuts". The paper I have in mind is by Codato and Fischetti (mentioned in a reply above).

  22. Hi Paul,
    Have you heard of the indicator constraints that can be used instead of big M in CPLEX? Do you have any ideas on how to implement them?

    Replies
    1. How (and when) to use them is discussed in the CPLEX user's manual, in the section "Discrete optimization > Indicator constraints in optimization". For the most part, I think CPLEX turns them into big M constraints internally; occasionally it may implement them by branching logic, which avoids numerical issues with a really large M but results in an even weaker relaxation.
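      For example, in the Java API (a toy model, made up purely for illustration):

      import ilog.concert.*;
      import ilog.cplex.*;

      // "If x = 1 then y <= 10" as an indicator constraint, in place of the
      // big-M version y <= 10 + M*(1 - x).
      public class IndicatorDemo {
          public static void main(String[] args) throws IloException {
              IloCplex cplex = new IloCplex();
              IloNumVar x = cplex.boolVar("x");
              IloNumVar y = cplex.numVar(0.0, 100.0, "y");
              // ifThen(condition, constraint): the constraint is enforced
              // whenever the (discrete) condition holds.
              cplex.add(cplex.ifThen(cplex.eq(x, 1.0), cplex.le(y, 10.0)));
              cplex.addMaximize(cplex.sum(y, cplex.prod(5.0, x)));
              if (cplex.solve()) {
                  System.out.println("x = " + cplex.getValue(x)
                                     + ", y = " + cplex.getValue(y));
              }
              cplex.end();
          }
      }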

  23. How do you compute a minimally infeasible subsystem in an LP?

    Replies
    1. That depends on the software. With current versions of CPLEX, I usually use the conflict refiner.

  24. Hi Paul, I have implemented an indicator constraint; however, I have a problem because x(i,t), which is my binary variable, has a logical precedence relationship with x(i,t+1), which is also binary, and CPLEX is not recognizing that.

  25. Hi Paul,
    I have implemented combinatorial Benders cuts. However, the solution it gives me is feasible but not optimal. I don't know whether I am missing a step in the procedure. Any thoughts?

    Replies
    1. The most likely explanation would be an incorrectly formulated Benders cut (one that cuts off the optimal solution).

    2. Do you know how the separation problem works in the combinatorial Benders cut algorithm?

    3. You pass it a candidate incumbent solution; it solves the subproblem and either (a) blesses the incumbent, (b) returns an optimality cut if the subproblem is feasible but the master problem solution incorrectly estimates the objective contribution of the subproblem, or (c) returns a feasibility cut if the subproblem is infeasible. Any optimality or feasibility cut should cut off the candidate solution while not cutting off the true optimum.

    4. My problem has cx + dy as the objective function, where x is solely binary and y is continuous; however, d = 0. My master problem is purely binary, and my slave problem just checks for feasibility. I don't think I need an optimality cut. Codato and Fischetti never talked much about proving optimality.

    5. My problem is a two-stage stochastic optimization problem with big-M constraints.

    6. You are correct in saying that if your subproblem just checks feasibility, you do not need (and will not get) optimality cuts, only feasibility cuts.

  26. I have started running Benders (CBC) and it is very fast. However, before I reach optimality, the master problem becomes very difficult to solve, because it goes from a small problem to a large one after a lot of feasibility cuts have been added. Do you know any way I can relax the master problem and still get very good solutions? By the way, I am using MATLAB's cplexbilp function.

    Replies
    1. Typically, when a large number of cuts pile up, some (many?) of them end up being redundant given subsequent cuts. With other APIs, you can tell CPLEX to allow cuts to expire after a while, but I don't think the MATLAB API supports that (not positive -- I don't use MATLAB). You might want to try removing older cuts from the master problem, while keeping a copy of them on the side. When you need to solve the subproblem, you can check the copies of the older cuts first to see if any are violated (should be faster than solving the subproblem), or you can just solve the subproblem and let it rediscover older cuts if necessary. Another strategy might be to check the slack produced by the latest solution in all cuts, add the new feasibility cut from the subproblem, and make room for it by deleting the old feasibility cut with the most slack.
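      In code, the bookkeeping for the "copies on the side" idea might look something like this (CutRecord and RetiredCutPool are made-up types, not part of any API):

      import java.util.ArrayList;
      import java.util.List;

      // A retired cut g'x >= f, stored on the side after removal from the master.
      class CutRecord {
          final double[] g; // coefficients on x
          final double f;   // right-hand side

          CutRecord(double[] g, double f) { this.g = g; this.f = f; }

          // Slack of the cut at xHat; a negative value means the cut is violated.
          double slack(double[] xHat) {
              double lhs = 0.0;
              for (int i = 0; i < g.length; i++) lhs += g[i] * xHat[i];
              return lhs - f;
          }
      }

      class RetiredCutPool {
          private final List<CutRecord> retired = new ArrayList<CutRecord>();

          void retire(CutRecord cut) { retired.add(cut); }

          // Check retired cuts before solving the subproblem: returning a
          // violated one is cheaper than re-deriving it from the dual.
          CutRecord findViolated(double[] xHat, double tol) {
              for (CutRecord c : retired) {
                  if (c.slack(xHat) < -tol) return c;
              }
              return null;
          }
      }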

    2. I have also looked at the approach discussed earlier in this thread called branch-and-check. Will branch-and-check guarantee optimality?

    3. I have not yet gotten around to reading the Thorsteinsson paper carefully and experimenting with the methods described, but based on a casual reading I would say that a properly executed branch-and-check strategy should guarantee optimality (assuming that the problem has an optimal solution).

  27. Dear Dr. Rubin,

    This article helped me understand callback functions. Currently, I am using the Benders decomposition (BD) algorithm in my research. After I read this article, I tried to use callbacks. However, the performance (solution time) of the modern approach is worse than that of the classic approach. I tried the modern approach with different CPLEX options, but the classic approach is still better.

    It looks like the modern approach has more advantages, so it should give better results. How can I explain these results? Thank you for your comment in advance.

    Replies
    1. In some cases, I think this can happen. Adding cuts to a model alters the search tree, and thus alters the path of the solver. A given cut can cause the solver to switch from a path that would have found a very good or optimal solution quickly to a path that takes longer. Of course, a cut can just as easily do the opposite - switch the solver from a slower path to a quicker path.

      So it may be that one or more of the cuts you are adding in the callback are creating a tree that, by random luck, finds an optimum more slowly than the tree used in the classical approach does. The callback approach will not always be faster, but I think it will _usually_ be faster.

      You might want to look at the cuts added when you use callbacks, just to make sure they are valid and as tight as you can make them.

