Thursday, August 9, 2012

User Cuts versus Lazy Constraints

After an e-mail exchange with a contact at IBM, I recently gained a bit of clarity (I hope) about the distinction between user cuts and lazy constraints in CPLEX, which I will share here. The terminology and some of the details in this post are specific to CPLEX, but I suspect that a number of other solvers will have something analogous to user cuts and lazy constraints, albeit perhaps under different names. If the terminology is familiar to you, you can skip to the end of the post and find the one potentially useful nugget of information it contains.

Let me start with an illustration of the feasible region of a hypothetical two variable pure integer linear program.
[Figure: integer program feasible region, showing the LP and IP hulls]
The polygon bounded in black (including portions of the two axes) is the feasible region of the linear programming (LP) relaxation, which I will call the LP hull. This is what you get if you relax the integrality restrictions while enforcing all functional constraints and variable bounds. The blue dots are the integer lattice points inside the polygon, i.e., the feasible solutions that satisfy the integrality restrictions. The blue polygon, the integer hull or IP hull, is the smallest convex set containing all integer-feasible solutions. The IP hull is always a subset of the LP hull. Assuming a linear objective function, at least one vertex of the IP hull (all of its vertices are lattice points) will be an optimal solution to the problem.

User Cuts


In CPLEX parlance, a cut is a constraint that is not part of the original model and does not eliminate any feasible integer solutions. Cuts are typically added for one of two reasons: to separate a node in the search tree into two nodes representing smaller subproblems; or to strengthen bounds by paring away some of the fat in the LP hull (the "fat" being the set difference between the LP hull and the IP hull). CPLEX automatically adds a variety of cuts, of which the most famous is arguably the Gomory cut. A user cut is simply a cut added by the user, rather than by CPLEX, frequently but not necessarily inside a callback. The next figure illustrates a user cut (red dashed line).
[Figure: LP and IP hulls with a user cut added]
The shaded triangle shows the portion of the original LP hull that is removed by the user cut. Note that no integer solutions were harmed in the construction of this diagram.
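
To make that concrete, here is a minimal sketch (CPLEX Java API) of putting a user cut into the cut pool at the time the model is built. Everything about the model here -- the variables x and the inequality itself -- is hypothetical; the only requirement is that the inequality hold at every integer-feasible point.

    import ilog.concert.*;
    import ilog.cplex.*;

    public class UserCutPoolDemo {
        public static void main(String[] args) throws IloException {
            IloCplex cplex = new IloCplex();
            IloIntVar[] x = cplex.boolVarArray(10);
            // ... the rest of the (hypothetical) model would be built here ...

            // A valid inequality: the sum of the x[j] never exceeds 7 at an integer point.
            IloLinearNumExpr lhs = cplex.linearNumExpr();
            for (IloIntVar xj : x) lhs.addTerm(1.0, xj);

            // Put it in the user cut pool. CPLEX is free to apply it (or not) anywhere,
            // precisely because it cannot cut off an integer-feasible solution.
            cplex.addUserCut(cplex.le(lhs, 7.0));

            cplex.end();
        }
    }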

Lazy Constraints


In contrast to user cuts, lazy constraints are allowed to pare away integer-feasible solutions. The next figure shows an example of a lazy constraint (green line).
[Figure: LP and IP hulls with a lazy constraint added]
The shaded polygon is again the portion of the LP (and, in part, IP) hull removed by the constraint. Note that, this time, three integer points are being rendered infeasible. The implication of using lazy constraints is that the problem originally specified (first figure) is actually a relaxation of the true problem; adding the lazy constraints to that initial problem gives you the problem you really want to solve.

Why use lazy constraints? Three reasons come to mind. First, a full specification of the model may involve an extremely large number of constraints, many of which you anticipate will be redundant, or at least not binding near the optimal solution. If you knew in advance which of those constraints were redundant, you could simply omit them, but frequently you cannot identify redundant constraints at the outset. Many modern solvers allow you to put a portion of the constraints (what we are calling the "lazy" ones) in a pool, where they are initially not a part of the active model. As solutions are generated, the solver checks to see if any lazy constraints are violated and, if so, adds them to the active set. Lazy constraints that were previously added but have not been binding for a while may be returned to the pool. This is an attempt to reduce the computation time per iteration, and possibly the memory footprint of the problem.
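
In the CPLEX Java API, the pool mechanism is exposed directly: a constraint you build but register with addLazyConstraint (instead of add) starts out inactive and is only pulled into the working LP if a solution violates it. A minimal sketch along the lines of the previous one, again with a purely hypothetical constraint:

    import ilog.concert.*;
    import ilog.cplex.*;

    public class LazyPoolDemo {
        public static void main(String[] args) throws IloException {
            IloCplex cplex = new IloCplex();
            IloIntVar[] y = cplex.boolVarArray(10);
            // ... the rest of the (hypothetical) model would be built here ...

            // A genuine constraint of the model that we expect to be slack most of the time.
            IloLinearNumExpr expr = cplex.linearNumExpr();
            for (IloIntVar yj : y) expr.addTerm(1.0, yj);

            // Registering it as lazy keeps it out of the active constraint set until
            // some candidate solution violates it.
            cplex.addLazyConstraint(cplex.le(expr, 4.0));

            cplex.end();
        }
    }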

A second and closely related reason to use lazy constraints is that there may in fact be so many constraints that you cannot afford the time or memory to generate them all at the outset. One approach to formulating the Traveling Salesman Problem as an integer program requires the use of subtour elimination constraints. The number of such constraints grows exponentially with the number of nodes in the network. While they are easy to specify in a computer program, for decent size networks you simply cannot afford to list them all. An alternative is to treat them as lazy constraints and generate them on the fly, through a callback.
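
Here is a sketch of what such a callback might look like in the CPLEX Java API, assuming a directed formulation with binary arc variables x[i][j] (the variable names and the model structure are my assumptions, not anything dictated by CPLEX). The callback traces the subtour through city 0 in the candidate solution and, if that subtour misses some cities, adds the corresponding subtour elimination constraint.

    import ilog.concert.*;
    import ilog.cplex.*;

    // Lazy constraint callback that separates subtour elimination constraints on the fly.
    // Assumes x[i][j] = 1 means the tour travels directly from city i to city j.
    class SubtourLazyCallback extends IloCplex.LazyConstraintCallback {
        private final IloCplex cplex;
        private final IloIntVar[][] x;
        private final int n;

        SubtourLazyCallback(IloCplex cplex, IloIntVar[][] x) {
            this.cplex = cplex;
            this.x = x;
            this.n = x.length;
        }

        @Override
        protected void main() throws IloException {
            // Follow the candidate tour starting from city 0.
            boolean[] inTour = new boolean[n];
            int count = 0;
            int current = 0;
            do {
                inTour[current] = true;
                count++;
                int next = 0;
                for (int j = 0; j < n; j++) {
                    if (j != current && getValue(x[current][j]) > 0.5) {
                        next = j;
                        break;
                    }
                }
                current = next;
            } while (current != 0);
            // If the subtour through city 0 misses some cities, cut it off.
            if (count < n) {
                IloLinearNumExpr expr = cplex.linearNumExpr();
                for (int i = 0; i < n; i++) {
                    if (!inTour[i]) continue;
                    for (int j = 0; j < n; j++) {
                        if (inTour[j] && j != i) expr.addTerm(1.0, x[i][j]);
                    }
                }
                add(cplex.le(expr, count - 1));  // subtour elimination constraint
            }
        }
    }

The callback would be registered with cplex.use(new SubtourLazyCallback(cplex, x)) before calling cplex.solve(); each time CPLEX proposes an incumbent containing a subtour, one new constraint is generated rather than having all of them present from the start.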

The third reason to use lazy constraints, specifically in a callback, is that you may not be able to identify them at the time the model is specified. The feasibility and optimality cuts generated during Benders decomposition fall into this category. You discover them by solving one or more subproblems at certain points in the search for the solution to the master problem.

Why Two Categories?


We've observed that the distinction between user cuts and lazy constraints is fundamentally whether or not they can cut off otherwise feasible solutions. The reason we care about that is somewhat arcane. CPLEX and other solvers do a variety of problem reductions, both at the outset (presolving) and on the fly during the search process, to tighten the feasible regions of node problems and expedite the solution process. Some of those reductions involve solutions to dual problems. User cuts do not affect the reductions, but if you hide some actual constraints in a pool of lazy constraints (or generate them after the reductions have been done), the dual solutions used in the reductions are incomplete (lacking dual variables corresponding to the constraints that are not yet included). This can lead to incorrect results.

If CPLEX sees the user adding lazy constraints (or, by virtue of the presence of a lazy constraint callback, threatening to add lazy constraints), it automatically turns off reductions that might be incorrect. I recently ran a program that uses CPLEX (12.4) with a lazy constraint callback. The output contained the following message:

Lazy constraint(s) or lazy constraint callback is present.
    Disabling dual reductions (CPX_PARAM_REDUCE) in presolve.
    Disabling non-linear reductions (CPX_PARAM_PRELINEAR) in presolve.

As the user, you can turn off these reductions manually, but including a lazy constraint callback (or adding constraints to the lazy constraint pool during model construction) will induce CPLEX to do it for you.
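
For reference, here is what doing it manually looks like in the Java API, assuming cplex is your IloCplex instance. The parameter names below are the ones used around version 12.4; treat them as something to verify against your version's documentation.

    // Allow primal reductions only, i.e., disable dual reductions (CPX_PARAM_REDUCE).
    cplex.setParam(IloCplex.IntParam.Reduce, 1);
    // Allow only linear reductions, i.e., disable nonlinear reductions (CPX_PARAM_PRELINEAR).
    cplex.setParam(IloCplex.IntParam.PreLinear, 0);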

So What Got Clarified?


A few versions ago, IBM refactored its implementation of cut callbacks. Of particular interest are the conditions under which cut callbacks are called.

  • User cuts may be checked and applied at any time during the optimization process. This includes the cut generation phase at each node. They are not guaranteed to be checked at the time a candidate solution (potential new incumbent) is found, however. Calling a constraint a user cut tells CPLEX that it cannot cut off a feasible solution, so there is no need to test it each time a feasible solution is found.
  • Lazy constraints, on the other hand, are guaranteed to be checked each time CPLEX finds a candidate solution, regardless of how the candidate is found (node LP solution, rounding, various heuristics, inferred from the position of the planets ...). This is important, because a lazy constraint might make the candidate solution infeasible in the full problem. (There is one small exception, at least as of version 12.4 and earlier: a candidate solution injected by the user through a heuristic callback is not checked against the lazy constraints. The user is somewhat charitably assumed to know what she or he is doing, so a user-injected solution is assumed to satisfy all lazy constraints.) What changed in the refactoring of the cut callback interface is that lazy constraints are checked only when a candidate solution is found. They are not routinely checked at every node.

I mainly use lazy constraints in Benders decomposition, and I generate Benders cuts only when a new incumbent is proposed, so this is fine with me. Discussions on various CPLEX and OR forums tell me that other people doing Benders sometimes want to check for possible Benders cuts at every node. There may be other scenarios, unknown to me, where users want to check lazy constraints at each node. Before the refactoring this was possible, but the new rule that lazy constraint callbacks are called only when potential incumbents are identified seems to throw up an obstacle. This is compounded by the fact that the current CPLEX documentation threatens the user with exile to the Phantom Zone should she or he dare lie and call a lazy constraint a user cut.

So here, at last, is the clarification. It turns out that it is legal to add a lazy constraint in a user cut callback (gasp!) as long as the user includes a lazy constraint callback or otherwise ensures that the necessary reductions are turned off. Since user cut callbacks are not called when candidate incumbents are found, at least for Benders decomposition this would suggest adding the same cut generation logic in both a lazy constraint callback (mandatory) and a user cut callback (optional).
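
In code, that pattern might look like the following sketch (Java API). The separation interface, class names and variables are my own inventions; the point is that the same generator object is handed to both callbacks, so the Benders separation logic lives in exactly one place.

    import ilog.concert.*;
    import ilog.cplex.*;

    // Shared separation routine: given the candidate values of the master variables,
    // return a violated Benders cut, or null if the candidate survives the subproblem check.
    interface BendersCutGenerator {
        IloRange findCut(double[] masterValues) throws IloException;
    }

    // Mandatory: screens every candidate incumbent that CPLEX finds.
    class BendersLazyCallback extends IloCplex.LazyConstraintCallback {
        private final IloNumVar[] vars;
        private final BendersCutGenerator gen;
        BendersLazyCallback(IloNumVar[] vars, BendersCutGenerator gen) {
            this.vars = vars;
            this.gen = gen;
        }
        @Override
        protected void main() throws IloException {
            IloRange cut = gen.findCut(getValues(vars));
            if (cut != null) add(cut);
        }
    }

    // Optional: lets the same cuts tighten node LPs between incumbents.
    class BendersUserCutCallback extends IloCplex.UserCutCallback {
        private final IloNumVar[] vars;
        private final BendersCutGenerator gen;
        BendersUserCutCallback(IloNumVar[] vars, BendersCutGenerator gen) {
            this.vars = vars;
            this.gen = gen;
        }
        @Override
        protected void main() throws IloException {
            IloRange cut = gen.findCut(getValues(vars));
            if (cut != null) add(cut);
        }
    }

Both callbacks would be registered with cplex.use(...); the presence of the lazy constraint callback is also what prompts CPLEX to disable the problematic reductions, as noted below.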

I'm told that the next (?) release of official documentation for CPLEX will include this clarification. To close, I'll note that if you intend to sneak lazy constraints into a user cut callback, the safest way to do this may be to include a lazy constraint callback, even if it does nothing (just returns with no action). That way, CPLEX will automatically turn off everything that needs to be turned off. As I said, you can use parameter settings to turn off reductions manually, but which reductions will need to be turned off could potentially change in a new version. I, for one, am too lazy (pun deliberate) to keep up with that stuff.
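
A do-nothing callback of that sort is only a few lines (Java API sketch; the class name is mine):

    import ilog.cplex.*;

    // Its only job is to exist: registering it causes CPLEX to turn off the
    // reductions that are unsafe in the presence of lazy constraints.
    class NoOpLazyCallback extends IloCplex.LazyConstraintCallback {
        @Override
        protected void main() { /* intentionally empty */ }
    }

    // ... elsewhere, before solving:
    // cplex.use(new NoOpLazyCallback());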

35 comments:

  1. Hi there, I am doing some experiments with lazy constraints and found your post just in time! I am quite interested in what CPLEX does when it finds a potential incumbent solution that violates some lazy constraints, since this is not clarified in the manual. Do you have any idea about this?

    Thanks,
    Y.Li

    1. Lazy constraints (and user cuts) function like regular model constraints, other than being held in a pool. When a possible incumbent is found, the pool of lazy constraints is checked, and if the candidate violates any of them it is rejected. I'm pretty sure that the lazy constraint it violated then becomes a member of the active constraint set (the ones used to generate a basis) for a while.

      If the candidate satisfies all existing lazy constraints, and if there is a lazy constraint callback, CPLEX calls the LC callback. Should you add a lazy constraint in the callback that cuts off the candidate, CPLEX rejects the candidate and (as I understand it) adds the new lazy constraint (either locally or globally, depending on how you added it), makes it active, and starts over at the node (solving the node LP and, in the process, repeating the cut cycle -- so if you also have a user cut callback, I think it will be called again at the node).

      That's what I think happens. I'm not actually privy to the (proprietary) CPLEX code. From time to time someone at IBM explains this to me, and then I promptly forget all the gory details. :-)

    2. Dear Prof. Rubin,

      I guess that you probably forgot this gory detail, but I'll give it a shot and ask anyway.

      What if I add a (lazy) constraint in the [lazy constraint] callback which does not cut off the candidate? Does CPLEX then reject the incumbent candidate simply because a lazy constraint was just added (one based on that last candidate)? Or does it [CPLEX] check one last time whether the constraint just added is violated [by the last incumbent candidate] or not?

      Thanks for this wonderful blog!
      Halil Sen

    3. In my "little" experiment, I conclude that even if the cut created (and about to added to the model) in the lazyconstraintcallback is not violated by the [incumbent] candidate (per se ), it (still) is added to model and causes unnecessary iterations (and cuts).

      However (unfortunately) I'm still not convinced. So, I would appreciate if you have any know-how about this situation.

      h.

    4. Halil,

      I'm currently at a conference, so it will be a week or so before I can investigate this more thoroughly. I'm fairly confident that adding a cut which the proposed incumbent satisfies does not lead to rejection of the incumbent. My understanding (possibly flawed) is that when the callback offers up a cut, CPLEX adds that cut to the active model and also consults a table of lazy constraints to see if any others are violated (activating any that are). I believe it then reoptimizes the node and, if an integer feasible solution is found, calls the callback again. I don't know whether CPLEX tests the new cut for violation before activating it or just assumes it should be active. Certainly the developers do not expect users to knowingly add cuts that fail to cut off the proposed incumbent.

      Also, and again I'm not positive about this, I believe that each time the callback adds a cut, it is called again (at the same node), until it finally returns without adding a cut. IBM refactored the callback interface a few minor versions ago. Prior to the refactoring, one used the now deprecated CutCallback class for both user cuts and lazy constraints, and the pattern was as I described -- keep calling it until a call produces no changes. I'm not positive that behavior remains, but I think so.

    5. Thank you for the prompt and detailed explanation. I would like to hear more if you find anything after the conference.

      My understanding is that CPLEX "just assumes" the cut should be active.
      I have a toy problem which minimizes an objective function without any constraints [at first] and adds a lazy cut in each callback of the following form,
      obj >= 1/1; obj >= 1/4; obj >= 1/9; obj >= 1/16, and so on, one by one.
      It is clear that the second and remaining cuts are unnecessary and loose; however, they make CPLEX go on and on until the right-hand-side value becomes infinitesimally small (a total of ~4400 lazy callbacks).

      In the other case, if I add the same cut in each callback (i.e., obj >= 1 on every callback), it stops after the second cut.

      For the pattern you described, I'm fairly positive that the "usercutcallback" behaves as you said; however, I guess that is not the case with "lazyconstraintcallback"s (I'm not sure though).

      Hope you'll have an inspiring conference.

    6. When you add cuts, CPLEX checks for repetitions. That probably explains the second case. In the first case, CPLEX keeps seeing new cuts, which explains why it keeps calling the callback. Accepting the cuts is not the same as adding them to the active constraint set. CPLEX has a separate table of lazy constraints. So when you add a loose constraint, it's possible CPLEX just trusts you that it will be binding and adds it to the active set, but it's also possible that CPLEX first adds it to the lazy table, then checks if it is binding, sees it is not, and leaves it out of the active set (but calls the callback again in case you have more constraints to add).

    7. I ran a small experiment, and based on that I believe the following is true of lazy constraint callbacks:

      1. If the callback adds a cut that CPLEX has not seen before, it will be called again at the same node (and again and again, until it offers up nothing new).

      2. If the callback adds a cut that CPLEX has already seen, it will _not_ be called again at that node.

      3. Adding nonbinding cuts did not reduce node throughput at all in my experiment. Unfortunately (a) all the cuts were added at the root node -- I think CPLEX had the optimal solution by the time it started branching -- and (b) the number of added cuts was pretty small. So I can't be certain that nonbinding cuts added in the callback are held out (in the lazy cut table, or whatever they call it), but I suspect that's the case.

    8. Thank you [again] for your time and effort.

  2. Thanks for your input!

    Y.Li

  3. Hi, the information you gave is pretty clear and helpful. I am currently using lazy constraints as well and have one problem:
    Normally, in Benders decomposition, an optimality cut or feasibility cut is added based on the optimal solution of the RMP. But when lazy constraints are used via a callback, a cut is added whenever a candidate solution in the search tree is found. Should we call this Benders plus branch-and-cut instead of Benders decomposition?

    1. I've seen it referred to as the "one tree" version of Benders decomposition (because it uses a single search tree, in contrast to the traditional version's initiation of a new tree after each cut is added). I don't know who coined the phrase, but I find it a good description.

  4. Hi, your reasons for using lazy constraints seem interesting, but I need a reference for these statements since I want to use them in my research work. Can you suggest anything to me? Thanks.

    1. If you're looking for an authoritative source, I don't think I can help. For the second reason (you'll die of old age generating all the cuts up front), it's not too hard to show that the number of possible subtours in a TSP network is typically huge (and, if you start with a complete graph, *really* huge). I suspect that if you look at source material on TSPs, you'll find something about full enumeration of subtour elimination constraints being scary. (Don't know if that made it into Bill Cook's widely acclaimed book or not). For the third reason (you won't know the cuts in advance), you might want to look at Moreno-Centeno and Karp, "The Implicit Hitting Set Approach ...", OR 61 (2), 2013.

    2. Thanks, my main concern is the traveling salesman problem. When using lazy constraints, the same problem must be solved several times (each time with a new set of constraints). How can we prove that solving the same problem hundreds of times will take less time than listing all the subtour elimination constraints and solving the problem once? I know that this lazy concept is a novel and good idea, but I need to prove its benefit in my research work. Thanks.

    3. In many real-world problems, you cannot list all the subtour elimination constraints -- you won't have enough memory. On the other hand, for very small problem instances, I would not be shocked to find that listing all the subtour constraints is faster than using a callback.

      So I very much doubt that you can prove either method is superior to the other in all cases. At best, you can talk about average performance, perhaps for networks of a particular size or with particular characteristics (complete graph, sparse graph, maximum degree K, ...). If you really need to show that one method outperforms the other, I would suggest running experiments (solve the same problem both ways and track timing).

  6. Hi, thanks for your information. I am a junior student just starting to use CPLEX for the VRP. I am using lazy constraints for subtours and tours exceeding capacity, but I have a problem: although there are tours exceeding capacity in the final solution, the lazy constraint callback is not called. I cannot find any reason why this is happening. Have you ever faced such a problem?

    1. No, I cannot recall ever having an incumbent accepted without first having the lazy constraint callback called (assuming, of course, that I am using a lazy constraint callback).

      I suggest you add some print statements to your lazy constraint callback, showing the proposed incumbent (and its objective value) each time the callback is called. If the callback never prints the final solution, something is seriously wrong. On the other hand, if it does print the final solution, then your callback code contains a bug (resulting in a failure to cut off that solution).

    2. It prints the final solution, but it finds the lower and upper bounds equal. Can that be the reason why the callback is not called?

    3. If you put the print statement in the callback, and it prints the final solution, then the callback was called with that (proposed) incumbent.

  7. Dr. Rubin,

    Thank you for all the information on this blog, it is really helpful. I am working on a pretty sizeable stochastic MIP, using Benders decomposition as the main approach to solve it. I was hoping you could share your experience about some points concerning lazy cuts. Let me preface by saying that I have a minimization problem, there is no need for feasibility cuts within Benders, and that I am using Gurobi. I have a master problem that already contains some optimality cuts (from doing a couple of iterations of Benders with the LP relaxation of the master problem), and I am giving it to Gurobi’s B&B. Let’s call this problem BMP. This is a similar setup to what you described in one of your other posts (Benders decomposition then and now).

    1. When a lazy constraint is added, the problem originally given to the B&B changes. It is not BMP anymore due to the constraint added. This means that the lower bound of the B&B, while still valid, might not be as tight as possible. I think I know the answer to this is negative, but do you have any indication that the LB of the B&B is recomputed at this point?

    2. In every case that I ran so far, the output of the B&B procedure indicates that an incumbent for the B&B is not found until the very end of the procedure when the B&B terminates. On one hand, this makes sense because if a MIP solution for the master problem becomes the incumbent in the B&B, it also satisfies the optimality condition for Benders in that no more cuts are generated. On the other hand, I expected that at least in some of the cases I solved, I would obtain an incumbent solution (with an unfavorable cost) well before the B&B terminates. This would be the result of a lot of branching cuts. I wanted to ask you, first, if you think my logic is correct, and second, if you have ever experienced a similar result in your runs of Benders with callbacks.

    Thank you for any comments you might have!
    Goran

    1. Goran,

      I'm not a Gurobi user, and there are some fundamental differences between Gurobi and CPLEX. I'll answer in the context of CPLEX, and hopefully the answer will apply to Gurobi (but no guarantee).

      On point 1: to the best of my knowledge, when a lazy constraint is added via callback in CPLEX, the node LP for the current node is updated. If the current node is providing the lower bound, then it will change immediately. Lazy constraints (and user cuts) can be designated as local (to the subtree rooted at the current node) or global; a two-line sketch of the distinction follows at the end of this reply. If you make the cut global, it will be added to all surviving nodes, but I don't know if their LPs will be resolved immediately. (I tend to doubt it, for efficiency reasons.) So the LB may not be recomputed until the solver decides to process the node where it currently occurs. In a pure best bound search, this will be immediate, but in a more typical search I don't know how quickly the bound will update.

      On the second point, suboptimal incumbents typically do occur during Benders decomposition, so what you are seeing is a bit unusual (and possibly suspicious). There may be something specific in the structure of your model that causes the first feasible solution to be the winner. If you are seeding BMP with either a starting incumbent or a tight cutoff for the objective, it may be that the only solution better than your incumbent/cutoff is the optimum. That said, I'd probably take a close look at the code. :-)
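
      To illustrate in CPLEX (Java API) terms -- a two-line fragment rather than a complete program -- inside a cut or lazy constraint callback, with cut being an IloRange you have just built:

          add(cut);          // global: the constraint is valid everywhere in the tree
          // addLocal(cut);  // local: enforced only in the subtree rooted at the current node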

    2. Thank you for your helpful comments, Dr. Rubin!

  8. Hi,
    I'm trying to use a lazy constraint callback for my Benders decomposition, and it seems I can't properly pass some of my variable values from my master problem to the callback. Are there any small examples of BD with a lazy constraint callback I can use? (I do have the ilobendersatsp example, but it's a little bit complicated and different from what I have in my BD.)
    Thanks

    1. Check out this post: http://orinanobworld.blogspot.com/2014/08/updated-benders-example.html.

  9. Hi Prof. Rubin,

    I loved the post; it's a shame that such useful blogs are rare. I'm currently coding, in C++ via Concert Technology, both exact and heuristic algorithms to solve large-scale MILPs including routing variables, hence requiring (integer and fractional) cut callbacks.

    I have tried to encapsulate my code to facilitate the inclusion of new formulations and algorithms. However, my question is: do you think it would be convenient to make a class for all the different callbacks (for DFJ, GFSEC, etc.), or do you think that just including them as methods in my solver class would be enough?

    Hope you wouldn't find the question rather bizarre lol.
    Best regards
    Y

    1. I don't find the question at all bizarre, although it's a bit hard for me to answer without seeing your code (and, trust me, I'm *not* asking to see your code -- bad enough I have to read my own ;-)). Also, my coding skills are, shall we say, not professional grade.

      I think my inclination would be to create one class extending IloCplex::UserCutCallbackI (and another extending IloCplex::LazyConstraintCallbackI, if I were also using lazy constraints). Pass the constructor any information it needs from the model (environment, variables, ...). Either in the constructor or via an add() method (and perhaps a remove() method), let the user pass into the callback class as many cut generators as the user may wish. The cut generators would be methods that would access the model and its current solution and either cough up an instance of IloRange or return null if no cut was found.

      Hope that makes sense.

  10. Dear Professor Rubin, this blog is really amazing and helped me understand the use of lazy constraints better. I am trying to solve a VRP by adding capacity and sub-tour elimination constraints iteratively. As the number of nodes increases, the number of cuts added is huge, slowing down the process. Is there any way to remove some of the cuts while using a lazy constraint callback? - Thanks, Raol

    1. Raol,

      I'm glad you find the blog useful. The add() method in a lazy constraint callback comes in two versions, one of which takes a second (integer) argument. That argument tells CPLEX whether you want the cut to be "purgeable". If you specify the value that says yes (UseCutPurge in the Java API), CPLEX will drop the constraint if at some point it no longer thinks the constraint is likely to be binding. Note that this raises the possibility that the callback will have to recreate the same constraint at a later date.
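
      For the record, the purgeable version of the call inside the callback's main() method looks like this in the Java API (a fragment, with cut standing for the IloRange just built):

          // CPLEX may drop a purgeable cut later if it stays slack, so the callback
          // must be prepared to regenerate it.
          add(cut, IloCplex.CutManagement.UseCutPurge);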

  11. Hi Professor Rubin,
    I'm working on Benders decomposition of a MILP in the Julia interface, with the JuMP package and the Gurobi solver. In the example here, the lazy constraint callback function is called whenever the master problem reaches an integer feasible solution (of an iteration). However, the function should ideally be called only at an optimal solution of the iteration. Could you suggest a method to call the function only at optimal solutions for each iteration of the master problem model? Following is a description of the code:

    # loading of the solver and JuMP
    # loading of master problem

    function addBendersCut(cb)
        # subproblem loading and solving
        # test for global optimum
        # lazy constraints
    end

    addlazycallback(masterProblemModel, addBendersCut)
    statusMP = solve(masterProblemModel)



    1. Sorry, I"ve never used Gurobi, JuMP or Julia, so I have no idea. I don't even know which callbacks Gurobi provides. Your phrase "optimal solution of the iteration" confuses me a bit. If you mean that only want to invoke the callback for an integer-feasible solution that is an optimal solution to the node LP, I would expect that to be standard behavior for the solver. (That's what happens with CPLEX for most integer-feasible solutions, excepting those found by heuristics.) If you mean you only want the callback invoked when the master problem has been solved to what the solver thinks is optimality, you shouldn't be using callbacks at all. The approach there is solve (normally) - test solution - generate cut and add to master - solve again (until a winner is found).

  12. Thank you Prof. Rubin,
    Your response helped me and my prof think more deeply about it and realize that we are solving only one master problem, and hence we can't solve it to optimality before any iteration (it would be optimal only at the last stage). The unexplored branch-and-bound tree is saved when callbacks are called.

    So, for a minimization MILP, how do we update the lower bound obtained from the problem (the upper bound comes from the dual subproblem)? Is it given by the solution values of the unfathomed non-integer leaves of the search tree (of the master problem) that have the best bound when the callback is called?

