I finally got around to constructing a reasonably small example of Benders decomposition for solving a mixed-integer linear program (MIP) in CPLEX. What I am about to describe works in CPLEX 12.4 (possibly 12.2 or 12.3) and hopefully will work in the future (since I have no ambition to revise the post). The significance of the CPLEX version is twofold: IBM refactored the API for cuts, emphasizing the distinction between "user cuts" (cuts that tighten bounds without eliminating integer-feasible solutions) and "lazy constraints" (cuts that eliminate feasible solutions); and they decided that lazy constraint callbacks would be called whenever CPLEX found an incumbent, regardless of when and how it found the incumbent (node LP solution, application of heuristics, lucky guess ...), which eliminated the need for incumbent and branch callbacks. If you are still using an earlier version of CPLEX, I strongly recommend you upgrade. Among various reasons, you will need a bunch more code to pull off Benders correctly with an old version.
My test problem is simple: a transportation problem with fixed charges for the use of supply sources. (In the code, I refer to sources as "warehouses" and sinks as "customers"). The fixed-charge transportation problem is well known and rather easy to solve -- too easy to warrant Benders decomposition, in fact. I chose it because it is sufficiently straightforward that it should not interfere with understanding how the code works. (Subsequent to writing the first draft of this post, I discovered a nice document by Erwin Kalvelagen that demonstrates Benders decomposition in GAMS. He too used a fixed-charge transportation problem, but in his version there is a fixed charge for each arc used, rather than for each source.)
The mathematical model is as follows: \begin{gather*} \textrm{minimize }c'x+f'y\\ \textrm{s.t. }\sum_{i}x_{ij}\ge d_{j}\quad\forall j\\ \sum_{j}x_{ij}\le s_{i}y_{i}\quad\forall i\\ x\ge0\\ y_{i}\in\{0,1\}\quad\forall i. \end{gather*} Continuous variable $x_{ij}$ represents the flow from source $i$ to sink $j$; binary variable $y_{i}$ signals whether source $i$ is used (1) or not (0). Parameters $c\ge0$ and $f\ge0$ are unit flow costs and facility use charges respectively, while $s$ and $d$ are supplies (source capacities) and demands.
My Benders decomposition puts the facility decisions in a master problem (MIP) and the flow decisions in a subproblem (linear program). The subproblem is just \begin{gather*} \textrm{minimize }z=c'x\\ \textrm{s.t. }\sum_{i}x_{ij}\ge d_{j}\quad\forall j\\ \sum_{j}x_{ij}\le s_{i}y_{i}\quad\forall i\\ x\ge0 \end{gather*} in which the $y_{i}$ are treated as constants. The master problem is \begin{gather*} \textrm{minimize }f'y+\hat{z}\\ \textrm{s.t. }a^{(k)\prime}y-\hat{z}\le b^{(k)}\quad\forall k\in\mathcal{O}\\ a^{(k)\prime}y\le b^{(k)}\quad\forall k\in\mathcal{F}\\ y_{i}\in\{0,1\}\quad\forall i\\ \hat{z}\ge0 \end{gather*} where $\hat{z}$ is a surrogate for the flow cost component ($z$) of the original objective function (i.e., the value of the subproblem objective), $a^{(k)}$ and $b^{(k)}$ are coefficient vector and constant term for a cut generated during the solution of the problem, $\mathcal{O}$ is the index set of "optimality" cuts (cuts that attempt to correct the master underestimating the true flow costs for a given set of decisions $y$, i.e., $\hat{z}<z$) and $\mathcal{F}$ is the index set of "feasibility" cuts (cuts that eliminate use decisions $y$ that provide too little capacity to satisfy demand). Initially $\mathcal{O}=\emptyset=\mathcal{F}$, and the first master problem solution is both trivial ($y=0$, $\hat{z}=0$) and infeasible in the original model.
To generate cuts, we will need to work indirectly with the dual of the subproblem: \begin{gather*} \textrm{maximize }z=d'\lambda+\sum_{i}(s_{i}\mu_{i})y_{i}\\ \textrm{s.t. }\lambda_{j}+\mu_{i}\le c_{ij}\quad\forall i,j\\ \lambda\ge0\\ \mu\le0 \end{gather*} where $\lambda$ is the vector of shadow prices of the demand constraints and $\mu$ the vector of shadow prices of the supply constraints. Given the assumption that all flow costs are nonnegative ($c\ge0$), the dual is obviously feasible ($\lambda=0=\mu$ works). If, for a given choice of $y$, the LP subproblem has an optimal solution, then so will the dual (with equal objective value $z$). Should the master problem generate a solution $(y,\hat{z})$ for which the subproblem is feasible with optimal value $z=\hat{z}$, we accept that solution as the new incumbent. If, instead, the subproblem is feasible but $\hat{z}<z$, then we add a new optimality cut $\hat{z}\ge d'\lambda+\sum_{i}(s_{i}\mu_{i})y_{i}$ where $(\lambda,\mu)$ is the dual solution to the subproblem (so that $a_{i}^{(k)}=s_{i}\mu_{i},b^{(k)}=-d'\lambda$).
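To see why the optimality cut is valid for every $y$, note that the dual feasible region does not depend on $y$. Weak duality then gives, for any dual-feasible $(\lambda,\mu)$ and any $y$ for which the subproblem is feasible, \begin{gather*} z(y)\ge d'\lambda+\sum_{i}(s_{i}\mu_{i})y_{i}, \end{gather*} so the cut never understates the true flow cost of any master solution; it merely eliminates the particular underestimate $\hat{z}<z$ that triggered it.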
Should the master problem generate a solution $(y,\hat{z})$ for which the subproblem is infeasible, then the dual will be unbounded, and there will be a dual solution $(\lambda,\mu)$ such that $t(\lambda,\mu)$ is feasible in the dual for all $t>0$ and $d'\lambda+\sum_{i}(s_{i}\mu_{i})y_{i}>0$. Note that the master problem variables ($y$) have no role in defining the dual problem's feasible region; in the dual, they occur only in the objective. So the dual ray we just obtained will remain a recession direction for the dual regardless of $y$, and to get a feasible subproblem we need to modify $y$ so that $d'\lambda+\sum_{i}(s_{i}\mu_{i})y_{i}\le0$ (i.e., so that the ray now points in an undesirable direction). This gives us our feasibility cut ($a_{i}^{(k)}=s_{i}\mu_{i},b^{(k)}=-d'\lambda$). Observe that optimality and feasibility cuts have the same constant term and coefficient vector; the only difference is the inclusion of $\hat{z}$ in optimality cuts and its exclusion in feasibility cuts.
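Since both cut types share the same data, the arithmetic can be isolated from the CPLEX API entirely. Here is a minimal plain-Java sketch of that computation (the class and method names are mine, not from the posted code):

```java
// Assemble the data shared by optimality and feasibility cuts from a dual
// solution (or dual ray) of the subproblem: a_i = s_i * mu_i, b = -d'lambda.
// Plain Java, no CPLEX calls; names are illustrative.
public class CutData {

    // coefficient of master variable y_i in the cut: a_i = s_i * mu_i
    public static double[] coefficients(double[] s, double[] mu) {
        double[] a = new double[s.length];
        for (int i = 0; i < s.length; i++) {
            a[i] = s[i] * mu[i];
        }
        return a;
    }

    // constant term of the cut: b = -d'lambda
    public static double constantTerm(double[] d, double[] lambda) {
        double dot = 0.0;
        for (int j = 0; j < d.length; j++) {
            dot += d[j] * lambda[j];
        }
        return -dot;
    }
}
```

An optimality cut is then $a'y-\hat{z}\le b$ and a feasibility cut is $a'y\le b$, matching the master problem above.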
On to the code. Let me start with my usual disclaimer: I am not a professional coder, and I make no claim that my code is particularly efficient, nor that I took the fastest/most straightforward route to the solution. As usual, the code is in Java. For the most part, conversion to other APIs that expose the callback functionality should be straightforward, but there are a few wrinkles to note.
- If you code the comparisons $\hat{z}<z$ and $z=\hat{z}$ literally, the gods of computing will wait until your back is turned and then bite you in your derrière. There will be rounding errors in both $z$ and $\hat{z}$, so asking for exact equality is inviting disaster. Insert some rounding tolerance into your comparisons. In my code, look for a constant named FUZZ.
- When the LP subproblem is feasible, we can get the dual values directly with a function named getDuals. When the LP is infeasible, it is tempting to use the getRay function to get the dual ray, but to do that you would have to formulate the dual itself, solve it directly, and then invoke getRay on the dual. It is easier to ask for a Farkas certificate, which provides the ray without further ado.
- I found it necessary to turn off presolving of the subproblem. If the presolver detects that the subproblem is infeasible, it reports that fact with no option to get a Farkas certificate. So I sacrifice whatever speed advantage the presolver confers in order to have access to the Farkas certificate when the subproblem is infeasible.
- Update 11/30/16: I recently tripped over another thing that I should have realized but somehow overlooked. In order to access the Farkas certificate, the LP subproblem must be solved by one of the simplex solvers (primal, dual or network). I'd been leaving the choice up to CPLEX, and on one instance it used the barrier method, decided the LP was infeasible, and then threw an exception when the Farkas certificate was requested. There's also a glitch in the CPLEX documentation for the dualFarkas method. It turns out that dualFarkas only works if the LP was solved by the dual simplex algorithm. So the key is to set the RootAlg parameter for the LP to 2 (dual simplex).
- Forming a cut (either optimality or feasibility) involves multiplying the right-hand side of a subproblem constraint, expressed as a linear function of the master variables, by the corresponding dual value. In this particular example, the right-hand side always has a single term, either a constant ($d_{j}$) in a demand constraint or a multiple of a single master variable ($s_{i}y_{i}$) in a supply constraint.
- More generally, though, each right-hand side could contain multiple terms. I deliberately over-engineered my code a bit so that it will generalize to other problems. What I do in the code is form and store an instance of IloNumExpr for each right-hand side, at the time that I am creating the subproblem. Inside the lazy constraint callback, I multiply those expressions by the corresponding dual values and sum to get the necessary expression ($a^{(k)\prime}y-b^{(k)}$) for the new cut. Somewhat counterintuitively, and for rather arcane reasons, this cannot be done with IloLinearNumExpr, even though the right-hand sides are in fact linear.
- Since any master variable can occur in multiple subproblem right-hand sides, the method I just described for assembling the cut expression will typically result in multiple terms involving the same variable ($\dots+\alpha_{3}y_{7}+\dots+\alpha_{12}y_{7}+\dots+\alpha_{23}y_{7}+\dots$). Not to worry: when you actually add the cut, CPLEX will combine like terms ($\dots+(\alpha_{3}+\alpha_{12}+\alpha_{23})y_{7}+\dots$) for you.
- Interpreting the Farkas certificate is a bit tricky, since CPLEX returns the dual values in an arbitrary order (see my earlier post "Infeasible LPs and Farkas Certificates"). To deal with this, I store the subproblem constraints in two arrays (one for supply constraints, one for demand constraints), and then use a HashMap to map each constraint to the corresponding right-hand side expression.
- The C++ Standard Template Library has some analogous container class you can use (don't ask me which; I'm blissfully C++-free and mean to stay that way). With Python, you would create a dict. With other APIs, you are on your own.
- The reason I use two arrays, rather than storing all constraints in a single array, is that it facilitates updating right-hand sides in the subproblem each time the callback is invoked. I need to recompute the RHS of each supply constraint ($s_{i}y_{i}$, using the new value of $y_{i}$), but not the RHS of each demand constraint ($d_{j}$). Having the supply and demand constraints separate makes this a bit easier, but in a more general application (where most or all constraints require an update), you may want to use more than two arrays, or just a single array.
- Once the master problem has been solved, you have the optimal set of warehouse use decisions ($y$). There's a fairly good chance you will also want to know the optimal flows ($x$). The subproblem may well have been solved more recently than when the callback accepted what proved to be the optimal $y$; after that node, CPLEX may have generated some incumbents that your callback tested and rejected. So the most recent subproblem solution may not correspond to the optimal $y$. One resolution is to solve the subproblem one last time, using the optimal $y$. The other (which I use in the code) is to store the subproblem solution any time an incumbent is accepted, overwriting any previously stored solution. When the master is solved, the stored subproblem solution will be the correct flows.
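As a concrete illustration of the tolerance point in the first bullet above, the comparison logic in the callback boils down to something like the following sketch (the constant's value and the method names are mine, not the actual code):

```java
// Sketch of the tolerance test from the callback: never compare the master's
// estimate zHat to the subproblem optimum z exactly. The constant's value
// and the method names are illustrative.
public class Fuzz {

    public static final double FUZZ = 1.0e-7;

    // accept the candidate incumbent when zHat matches z to within tolerance
    public static boolean acceptIncumbent(double zHat, double z) {
        return zHat >= z - FUZZ;
    }

    // an optimality cut is needed when the master underestimates z
    public static boolean needsOptimalityCut(double zHat, double z) {
        return zHat < z - FUZZ;
    }
}
```

The exact value of the tolerance is a judgment call; it just needs to be comfortably larger than the rounding error you expect in the two objective values.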
Many thanks, Paul -- very much appreciated. I have written Benders codes several times using the C++ Concert interface, but I was always curious to see how others do it, perhaps more efficiently, and what I am missing in my own codes.
Thanks, Shahin. There is a fair bit of flexibility for different coding approaches to Benders, and I'm not sure what I did in this small example will scale well. If someone has a large model, storing all those IloNumExprs might become prohibitively expensive. Also, I'm not sure whether applying sum() and prod() to IloNumExprs and then having CPLEX combine like terms is faster or slower than building a new RHS from scratch each time using IloLinearNumExprs. So many degrees of freedom, so little definitive information.
Hi, I am new to OR and CPLEX programming, and I was trying to convert your program into a multi-commodity transportation problem. I tried to convert the SingleModel.java file to multi-commodity by using a 3-dimensional array, but CPLEX is saying "unable to solve model". Could you please help me? I could not find any reference on using multi-dimensional arrays with the CPLEX Java APIs.
I've answered via email.
Great post Paul
What I want now is to implement it for the VRPTW problem.
Thanks -- and good luck!
Hello. I downloaded your code and ran it with CPLEX 12.2. SingleModel.java solves the generated problem, but when Benders.java starts iterating, part of the generated output is below. It looks like there is cycling. Can you please comment on what might be the cause?
Sincerely.
Parallel mode: none, using 1 thread.
Root relaxation solution time = 0.02 sec.
Parallel mode: deterministic, using up to 2 threads for concurrent optimization.
>>> Adding feasibility cut: IloRange : -infinity <= -0.6895988277118706*Use_4 - 6.387051152521494*Use_3 - 7.888060535570556*Use_2 - 5.304215302137547*Use_1 - 5.825244285340782*Use_0 <= -2.829154110490712
Nodes Cuts/
Node Left Objective IInf Best Integer Best Node ItCnt Gap Variable B NodeID Parent Depth
0 0 0.9050 1 0.0000 0
Parallel mode: deterministic, using up to 2 threads for concurrent optimization.
>>> Adding feasibility cut: IloRange : -infinity <= -0.6895988277118706*Use_4 - 6.387051152521494*Use_3 - 7.888060535570556*Use_2 - 5.304215302137547*Use_1 - 5.825244285340782*Use_0 <= -2.829154110490712
0 0 0.9050 1 User: 1 0
Parallel mode: deterministic, using up to 2 threads for concurrent optimization.
>>> Adding feasibility cut: IloRange : -infinity <= -0.6895988277118706*Use_4 - 6.387051152521494*Use_3 - 7.888060535570556*Use_2 - 5.304215302137547*Use_1 - 5.825244285340782*Use_0 <= -2.829154110490712
0 0 0.9050 1 User: 1 0
Parallel mode: deterministic, using up to 2 threads for concurrent optimization.
This is curious. When I run the code with CPLEX 12.5, I generate the same initial feasibility cut that you have. My second cut, also a feasibility cut, has the same variable coefficients but a different constant term (so it's parallel to the first cut but deeper). Then I get two optimality cuts and termination.
Does the same thing happen if you force CPLEX to use a single thread?
This comment has been removed by the author.
It seems to be an issue with the CPLEX version. I got the same cycling issue with CPLEX 12.1. With 12.5 it works fine.
J Dalal: Were you also using parallel mode with 12.1? I wonder if perhaps it was a thread synchronization issue that was fixed in a later version. I agree that if you and "Anonymous" both encountered it on 12.1, and you and I both do not on 12.5, it is likely to be a bug that was fixed somewhere along the line.
DeleteHi Paul,
I am currently working on a similar problem but with a quadratic subproblem. The program stops with "no solution exists" without generating any feasibility cut. I wonder, does the Farkas certificate only work for LPs? If yes, do you know whether there exists any alternative for QPs?
Thank you very much.
Wenyi
I think a Farkas certificate only exists for linearly constrained models, so if your subproblem has quadratic constraints, that would present a difficulty. Since infeasibility has nothing to do with the objective function, if your subproblem has a quadratic objective with linear constraints, it should be possible to generate a Farkas certificate (but I do not know whether CPLEX would do so).
As to how to proceed, that's tricky. I'll assume your subproblem contains quadratic constraints; otherwise, all else failing, you could change the objective function to zero and get a Farkas certificate. According to the CPLEX 12.5 docs, "the conflict refiner is capable of doing its work on all continuous and discrete problem types that CPLEX supports, except for those with explicit second order cone constraints" (from the section "How a conflict differs from an IIS"). So the conflict refiner probably cannot be used. Similarly, from the section "What is FeasOpt?", "FeasOpt supports all types of infeasible models, except those models with second order cone constraints". So that also is not an option.
If the variables passed from the master to the subproblem are all binary variables, you can always generate a feasibility cut saying that at least one of them must change value. That's not at all a deep cut, but it's all I can think of ... unless dropping all the quadratic constraints still leaves the subproblem infeasible, in which case you can get a Farkas certificate for the remaining constraints. I don't know what you can do if the subproblem is quadratically constrained and the variables passed from the master problem are continuous or general integer.
Paul, I am dealing with a subproblem that has a quadratic objective function only, which makes my life a bit easier. As you suggested, I changed my objective function to get a Farkas certificate.
Meanwhile, out of curiosity, I did the following to see whether "getDuals" is meaningful for QPs (though it is supposed to be, according to the manual): I constructed a dual problem myself and solved it to optimality (this part is 100% correct, I swear). Then I compared its dual values with those I got from "getDuals", and they are completely different. Could you give me some comments on what might be going wrong in using your Benders method?
I'd have to see your QP and dual problem to have a clue. Maybe you could save both as SAV files and send them to me (rubin msu edu)?
Dear Mr Paul,
It would be a great help if you could suggest a solution to the following problem:
I am using the MATLAB GUROBI interface to solve my nonlinear nonconvex optimization problem using piecewise linear approximation and separable programming concepts. My program is given below, in which my objective is to minimize A over the given search region (5 to 100), subject to one nonlinear constraint and four SOS2 constraints. Here u is a vector of data representing a curve with data points, using the separable programming concept. I want the point obtained after solution of the problem to lie below the u curve, so I formulate the constraint as
20*log10(A)+10*log10((power(10,2)./(B.^2))+1)-10*log10((power(10,2)./(C.^2))+1)-20 <= u'. In GUROBI it is
20*log10(A)+10*log10((power(10,2)./(B.^2))+1)-10*log10((power(10,2)./(C.^2))+1)-u' <= 20
Program:
A=linspace(5,100,19);
B=linspace(3,50,19);
C=linspace(1,100,19);
a=ones(19);
a1=a(1,:);
b=zeros(19);
b1=b(1,:);
u = [7.6510 7.2751 6.1274 4.1760 1.4977 -1.5030 -4.1812 -6.1327 -7.2803 -7.6563 -7.2803 -6.1327 -4.1812 -1.5030 1.4977 4.1760 6.1274 7.2751 7.6510];
(here u represents some nonlinear nonconvex curve as a piecewise linear approximation, using separable programming concepts with the lambda method, where the horizontal axis runs from 0 to -250 with 19 breakpoints)
model.obj =[A b1 b1 b1];
model.ub = [a1 a1 a1 a1];
model.lb = [b1 b1 b1 b1];
model.modelsense = 'Min';
model.A = sparse([20*log10(A) 10*log10((power(10,2)./(B.^2))+1) -10*log10((power(10,2)./(C.^2))+1) -u'
a1 b1 b1 b1
b1 a1 b1 b1
b1 b1 a1 b1
b1 b1 b1 a1
]);
model.rhs =[20 1 1 1 1];
model.sense=['<' '=' '=' '=' '='];
% % Add SOS2 constraint 1:
model.sos(1).type = 2;
model.sos(1).index = [linspace(1,19,19)];
model.sos(1).weights = [linspace(1,19,19)];
% Add SOS2 constraint 2:
model.sos(2).type = 2;
model.sos(2).index = [linspace(20,38,19)];
model.sos(2).weights = [linspace(20,38,19)];
% Add SOS2 constraint 3:
model.sos(3).type = 2;
model.sos(3).index = [linspace(39,57,19)];
model.sos(3).weights = [linspace(39,57,19)];
% Add SOS2 constraint 4:
model.sos(4).type = 2;
model.sos(4).index = [linspace(58,76,19)];
model.sos(4).weights =[linspace(58,76,19)];
result = gurobi(model);
for i=1:76
fprintf('x%d %e\n', i, result.x(i))
end
fprintf('Obj: %e\n', result.objval);
A_optimal=A*result.x(1:19);%(answer is 5)
B_optimal=B*result.x(20:38);%(answer is 3)
C_optimal=C*result.x(39:57);%(answer is 50)
The problem is, after getting optimal values of A, B and C, when I put them into the constraint
20*log10(A_optimal)+10*log10((power(10,2)./(B_optimal.^2))+1)-10*log10((power(10,2)./(C_optimal.^2))+1)-20 (which is a single number), it satisfies the constraint.
But when I plot it, the point obtained from 20*log10(A_optimal)+10*log10((power(10,2)./(B_optimal.^2))+1)-10*log10((power(10,2)./(C_optimal.^2))+1)-20
does not lie below the graph or curve u. It means that, rather than taking u as a piecewise linear function of the curve, GUROBI takes it as any feasible single value from u.
Would anyone please answer the following:
1) Is my constraint formulation right or wrong?
2) How can I represent the curve with data points only (u) in the GUROBI MATLAB interface so that my resultant point lies below the curve?
3) How can I incorporate the X coordinate of u (which is 0 to -250) in the constraint?
Thanks.
I'm afraid that I use neither MATLAB nor GUROBI, so I will not be of help. Someone else is free to answer, but I think this question would be more appropriately targeted at a GUROBI support forum.
This comment has been removed by the author.
Hi Paul, I am trying to modify your code to solve a MIP, not using regular Benders but Codato and Fischetti combinatorial Benders cuts. Also, rather than adding objective values and constraints in code, I already have them in a text file. Can I read the text file when I want to add a constraint to the master problem, or how can that be done?
Sorry, I don't know what the question is (or even if there's a question). If you are using the version of combinatorial Benders cuts where the continuous variables do not contribute to the objective (and all cuts are feasibility cuts), I recommend setting up the LP subproblem once and then using IloCplex.refineConflict() to find an IIS each time you get a potential incumbent to the master problem. You pass refineConflict() a list of all constraints that are related to binary variables, and a vector of preferences for which of those constraints to relax. Use a negative preference value to signal that a particular constraint should be ignored (equivalent of temporarily removing it based on the corresponding binary variable) and positive preference to signal that it should be considered a possible member of an IIS. The constraints that are not tied to binary variables are included in the LP but not passed to refineConflict; that causes refineConflict to include them in the IIS. In the absence of any preference for relaxing one constraint versus another, you can just use preferences of 1 for constraints that are activated by the corresponding binary variables.
DeleteIf refineConflict finds a conflict (return value TRUE), call IloCplex.getConflict on the subproblem to get the status of each of the constraints. The IIS is the set of constraints whose status is IloCplex.ConflictStatus.Member.
When I am done with the coding, in case I have errors, can I post it on the blog in order for you to detect any problems?
Sorry, but reading other people's code is difficult (reading my own code isn't that easy, for that matter), and I do not have the time available to do it. Good luck with the code. In my experience, it is not particularly difficult to code the equivalent of Codato & Fischetti's CBC approach using CPLEX.
Thank you, your words help. I am making progress.
DeleteDear Prof Rubin,
As to the constraints not tied to binary variables, if we include them in the IIS, how could we generate a CBC cut, which is related to the binary variables? For me, the infeasibility of a constraint is caused by its associated binary variable. That's the reason why we want to generate a cut by changing the value of the binary variable, to avoid this kind of infeasibility. If we include a constraint which has nothing to do with a binary variable, how could we generate a CBC cut for this constraint?
The right-hand sides of those constraints are multiplied by the dual values, summed up, and contribute to the constant term in the CBC cut.
The form of a CBC cut is: sum(x) + sum(1-x) >= 1. I don't see the constant term.
In fact, in the first paper proposing CBC, the authors didn't mention this aspect (maybe I missed something...). In order to find details about the implementation, I read a lot of papers employing this technique, but I still couldn't find it. Do you know any paper or technical documents publishing details of this algorithm? Or have you had any experience in implementing this algorithm? If so, could you please share your code or the procedure of the algorithm with me?
My mail: eddie.westlife@gmail.com
I really appreciate your help, it means a lot for our beginners in this field.
Sorry, you're right that a CBC cut as done by Codato and Fischetti is a type of "no good" cut (having the form you indicated). I was thinking of a different way to generate a feasibility cut using their approach of excluding constraints where the binary variable has value 0. You're a bit off target in thinking that "the infeasibility of constraint is caused with its associated binary variable". It's not one constraint that is infeasible; it's the collection.
I'll see if I have relevant code I can share, and if so will contact you via email. If I do, it will be Java code (using CPLEX).
Just looked, and I can't find any code that uses CBC directly. I have some code where combinatorial cuts are being used, but they're being generated so indirectly (and there's so much other stuff going on, obscuring the CBC aspect) that at this point it would take a couple of hours for me to explain it to anyone (if I could).
That's exactly the cut I want. What's more, I'd like to let you know that I put one integer variable in the master problem, and another integer variable plus one real variable in the subproblem. In this case, we can't get the dual values when the subproblem is infeasible, because there is an integer variable in my subproblem. I have been stuck at this point for a long time... Any ideas, Paul?
Assuming the master problem integer variables are binary, if your MIP subproblem is infeasible, you can still use the no-good cut (at least one binary variable from the master must flip value). If the subproblem is feasible but the objective value is worse than what the master solution predicted, you're somewhat screwed. There's a version of the no-good cut for optimality cuts, but it's pretty weak. Basically, for a min problem it says the master surrogate variable is at least the subproblem optimal value if no master variable changes, and at least $-\infty$ if one or more master variables do change.
Hi Paul, the objective function of my master problem only includes the binary variables in the master problem. In this case, the subproblem will only generate feasibility cuts. According to what you said, I can have a cut like sum(x) + sum(1-x) <= |S|-1, where |S| is the cardinality of the set of all the variables x. And in this case, S is not an IIS, but the set of x. Am I right?
You are correct, but with combinatorial Benders cuts (where, say, x = 1 turns on a constraint in the subproblem and x = 0 turns it off), you typically do not want to include all the binary variables. The CBC cut would be $\sum_{i \in S} x_i \le |S| - 1$ where $\hat{x}$ is the candidate master solution and $S = \{i : \hat{x}_i = 1\}$.
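For what it's worth, assembling that cut from a candidate master solution requires nothing CPLEX-specific. A small plain-Java sketch (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// Build the "no good" / combinatorial Benders cut
//   sum_{i in S} x_i <= |S| - 1,  with  S = { i : xHat_i = 1 },
// from a candidate master solution xHat. Names are illustrative.
public class NoGoodCut {

    // indices of the binary variables at 1 in the candidate solution
    public static List<Integer> support(double[] xHat) {
        List<Integer> s = new ArrayList<>();
        for (int i = 0; i < xHat.length; i++) {
            if (xHat[i] > 0.5) { // tolerance-safe test for "equals 1"
                s.add(i);
            }
        }
        return s;
    }

    // right-hand side of the cut
    public static int rhs(double[] xHat) {
        return support(xHat).size() - 1;
    }
}
```

Note the `> 0.5` test rather than `== 1.0`: CPLEX returns binary variable values with rounding error, so the same tolerance caution from the post applies here.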
Paul, thanks for posting the Benders code. It would not be easy to start without it. I am writing my own code for a two-stage stochastic model. In the first stage, facilities are located (binary), and in the second stage, depending on the scenario realised, facility and route assignments are made. I do not need to check for feasibility, because in the first stage (master problem) I ensure that the subproblem is feasible by using induced constraints, so I just create optimality cuts. When I run the code, it starts with the master problem and stops at the root node with an error message that reads:
Nodes Cuts/
Node Left Objective IInf Best Integer Best Bound ItCnt Gap Variable B NodeID Parent Depth
0 0 0.0000 0 0.0000 1
Root node processing (before b&c):
Real time = 0.00 sec. (0.08 ticks)
Sequential b&c:
Real time = 0.00 sec. (0.00 ticks)
------------
Total (root+branch&cut) = 0.00 sec. (0.08 ticks)
Exception in thread "main" java.lang.NullPointerException
at ilog.cplex.IloCplex$MIPInfoCallback.getIndex(IloCplex.java:13779)
at ilog.cplex.IloCplex$ControlCallback.getValues(IloCplex.java:15511)
at ilog.cplex.IloCplex$ControlCallback.getValues(IloCplex.java:15487)
at bendersdecomposition.Benders$BendersCallback.main(Benders.java:548)
at ilog.cplex.CpxCallback.callmain(CpxCallback.java:151)
at ilog.cplex.CpxLazyConstraintCallbackFunction.callIt(CpxLazyConstraintCallbackFunction.java:44)
at ilog.cplex.Cplex.CPXmipopt(Native Method)
at ilog.cplex.CplexI$SolveHandle.start(CplexI.java:2527)
at ilog.cplex.CplexI.solve(CplexI.java:2650)
at ilog.cplex.IloCplex.solve(IloCplex.java:10948)
at bendersdecomposition.Benders.solve(Benders.java:652)
at bendersdecomposition.BendersEvacuation.main(BendersEvacuation.java:568)
The error message is related to the getValues(y) where y is the first stage binary facility location decision variable. Could you please comment what the problem might be? Thanks in advance.
public static class BendersCallback extends IloCplex.LazyConstraintCallback {
@Override
protected void main() throws IloException {
// TODO Auto-generated method stub
double zMaster = getValue(surVar);
double[] yNew = getValues(y);
Best regards,
Vedat Bayram
PhD. Candidate
Department of Industrial Engineering
Bilkent University
Hard to say without seeing the entire code (and, trust me, I'm not asking to :-) ), but my guess is that you are passing getValues() either a null argument or an array that contains a null entry.
I suggest you replace the call to getValues() with a call to getValue() inside a loop (i.e., fetch the variable values one at a time). You'll still encounter the exception, but if you print each variable (or each index) before you try to fetch it, you'll know where the null entry is.
This comment has been removed by the author.
Thanks Paul, I appreciate the help. The tip helped me through it. The thing is, now I have encountered a NullPointerException inside the BendersCallback class, where we update the upper bounds related to the constraints containing first-stage (master problem) variables. The NullPointerException is at the line where we call the IloRange constraint: assignToOpenConst[w][count].setUB(yNew[i]); Interestingly, this works fine where I create and record this constraint, but it does not work inside the BendersCallback class and gives a NullPointerException error.
DeleteDoes not work here:
class BendersCallback extends IloCplex.LazyConstraintCallback {
@Override
protected void main() throws IloException {
// TODO Auto-generated method stub
double zMaster = getValue(surVar);
double[] yNew = new double[n+1];
for(int i:setOfFacilityNodes){
yNew[i] = getValue(y[i]);
}
for(int w=0; w<numOfScenarios; w++){
int count = 0;
for(int i:setOfFacilityNodes){
if(yNew[i]>0.5){
System.out.println("UB: "+assignToOpenConst[w][count].getUB());
assignToOpenConst[w][count].setUB(yNew[i]);
nearestAllocConst[w][count].setUB(1.0-yNew[i]);
}
else{
assignToOpenConst[w][count].setUB(0.0);
nearestAllocConst[w][count].setUB(0.0);
}
count++;
}
}
Works here:
//Assign only to open shelter sites constraint
assignToOpenConst = new IloRange[numOfScenarios][setOfDemandNodes.size()*mapOfScenFacilityNodes.get(w).size()];
int count = 0;
for(int r:setOfDemandNodes){
for(int s:mapOfScenFacilityNodes.get(w)){
IloLinearNumExpr const2Expr = sub[w].linearNumExpr();
if(mapOfAltPaths.get(w).getODPaths(r, s).get(r+"."+s)!=null){
for(int l=1;l<mapOfAltPaths.get(w).getODPathsSize(r, s)+1;l++){
const2Expr.addTerm(1.0, z[demandNodeToOrder.get(r)][facilityNodeToOrder.get(s)][l][w]);
}
}
assignToOpenConst[w][count] = sub[w].addLe(const2Expr, 0.0, "AssignToOpenConst_"+count);
System.out.println("UB: "+assignToOpenConst[w][count].getUB());
rhs.put(assignToOpenConst[w][count], master.prod(1.0, y[s]));
count++;
}
}
Best regards,
Vedat Bayram
Hi, I am relatively new to CPLEX. I am implementing Lagrangian relaxation using CPLEX and trying to figure out how to save, and then re-add, cuts that were generated in previous iterations. It seems that addUserCuts should help; I am just wondering how to operationalize it in CPLEX. Thank you.
This is a bit off topic for the post, so I'll refer you to either the CPLEX user forum or OR-Exchange. I will say here that addUserCuts is an appropriate method to add the cuts if they will not cut off any integer-feasible solutions (i.e., they are just bound-strengthening). Otherwise you should use addLazyConstraints(). Also, if the cuts you want to retain were generated by CPLEX, rather than by you, you will need to use the C interface. None of the higher level APIs give access to cuts generated by CPLEX.
Dear Mr. Paul Rubin:
I am building a MIP model, using Benders decomposition, for my thesis project in my master's degree in industrial engineering. However, this model is very big and has a lot of parameters and variables. Also, my master problem is integer, which is why it runs very slowly. Therefore, I want to know if, at each iteration of the Benders decomposition, I can set an initial solution for the branch-and-bound process. The idea is to seed each iteration with the solution of the previous iteration, so that the time to find the new solution will be smaller. I am working with C++ and CPLEX.
How can I do that? I would appreciate any help you can give.
If you have not done so already, please take a look at this older post (http://orinanobworld.blogspot.com/2011/10/benders-decomposition-then-and-now.html) for some context.
If you are using the modern (callback) approach, I think your question is not applicable; so I assume you are using the older approach (solve the master to termination, solve the subproblem, add cuts to the master, start the master solution again). In that case, you should look at the IloCplex.addMIPStart method in the Java API (IloCplex::addMIPStart in C++, CPXaddmipstarts in C), which provides a way to add a starting incumbent before solving the master.
Hi
I am using a lazy constraint callback in my BD algorithm. I know that BD is generating many violated cuts (I print them out!), but they never affect the lower bound. I am using standard code like that in the IloCplex example (add(cut<=Z).end();). Is there any way I can force CPLEX to take them into account, or is there anything I should be aware of?
Thanks a lot
Ragheb
Try without the ".end()" appended. If that does not fix things, it may just be that your model has a large number of nodes with identical lower bounds.
I removed the ".end()" but nothing changed. I doubt it is because of many nodes with identical lower bounds, because the algorithm finds violated cuts (with large violations) at the top of the tree that never affect the bound. The lower bound seems to change only because of branching. I am not sure how to verify this, but if I manually add such generated cuts before starting to solve the model, they do improve the lower bound.
You can evaluate the constraint expression using the proposed incumbent (in the lazy constraint callback) to verify that the constraint really cuts off the incumbent. The only other diagnostic measure that comes to mind would be to store all the Benders cuts in a list and add a user cut callback that adds no cuts but just gets the current node solution (using getValue or getValues) and then tests each Benders cut in the list to confirm that the node solution satisfies each. If any node solution violates any Benders cut (by more than just rounding tolerance), your suspicion is confirmed. If not, either there is a bug in your cut generator or the bound is just naturally slow-moving.
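Stripped of the CPLEX API, that diagnostic amounts to evaluating each stored cut at the current node solution. A minimal sketch in Python (the cut storage format and tolerance here are my assumptions, not anything CPLEX supplies):

```python
# Each stored Benders cut is a pair (coef, rhs) representing coef . x >= rhs.
# A node solution x violates the cut if coef . x falls below rhs by more
# than a rounding tolerance.

def violated_cuts(cuts, x, tol=1e-6):
    """Return indices of stored cuts that the node solution x violates."""
    bad = []
    for idx, (coef, rhs) in enumerate(cuts):
        lhs = sum(c * xi for c, xi in zip(coef, x))
        if lhs < rhs - tol:  # violation beyond rounding tolerance
            bad.append(idx)
    return bad

# Example: the cut x1 + x2 >= 3 is violated by (1, 1) but not by (2, 2).
cuts = [([1.0, 1.0], 3.0)]
print(violated_cuts(cuts, [1.0, 1.0]))  # -> [0]
print(violated_cuts(cuts, [2.0, 2.0]))  # -> []
```

In a real user cut callback, x would come from getValues() and any nonempty result would confirm that previously added cuts are being ignored.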
Thanks a lot Paul for your help. I had a slight problem with the incumbent, which is OK now, and it seems that we have many open nodes with almost identical (and low) objective values; as you pointed out, that seems to be the problem.
Best regards
Ragheb
Dear Prof. Paul Rubin,
I have a problem using the getDuals() function in Java. I modified your code to incorporate my model. Right now the returned dual values do not match the actual duals obtained by CPLEX.
Let me explain a little more. In the callback I export the sub model and solve it with the CPLEX solver to get the solution, and in the code I get the duals using getDuals(). When I cross-check the numbers, I get different values for a couple of constraints. I have no idea why this happens.
Can you please give me an idea of what might have gone wrong?
Thank you
Is the objective value of the sub model the same either way? If the sub model has a degenerate optimum, the dual problem can have multiple solutions (same objective value but different duals for some or all constraints). In that case, solving it two different ways can yield two different (equally valid) dual solutions.
Yeah, I've already looked into that, but the answer is no. If I use the dual values from Java to calculate the objective function, the result does not match the objective function value.
Did you export the sub model as a .lp file? If so, does the discrepancy also occur when you export it as a .sav file?
Hi,
Is there any way to know which variables are fixed at the current node of the tree by using the callbacks?
Thank you very much in advance!
I'm not positive, but I think you can do it with getLBs() and getUBs(). I believe that, when a variable is fixed, CPLEX updates both its lower bound and its upper bound to the fixed value.
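If that is right, the check reduces to comparing the two bound arrays entry by entry. A CPLEX-free sketch in Python (getLBs()/getUBs() replaced by plain lists; the tolerance is my assumption):

```python
def fixed_variables(lbs, ubs, tol=1e-9):
    """Return indices of variables whose node bounds coincide, i.e., that
    appear to be fixed at the current node."""
    return [i for i, (lb, ub) in enumerate(zip(lbs, ubs)) if ub - lb <= tol]

# Example: only the middle variable has lb == ub at this node.
print(fixed_variables([0.0, 1.0, 0.0], [1.0, 1.0, 1.0]))  # -> [1]
```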
Dear Paul,
I was looking at your blog while trying to implement Benders using the Python API that CPLEX exposes. Simultaneously, I am looking at the sample code that CPLEX supplies. In that file (named bendersatsp.py), only ray cuts are used and point cuts are not. Any idea why this was done? Also, the objective function of the dual subproblem is only partially implemented: the 'u' dual variable doesn't appear in the objective function. Any help on this would be appreciated.
Thank you.
The description of the master problem in the comments at the top of the file shows a master objective function that depends only on the integer variables (arc selections). The flow variables have no impact on the objective. Since the flow variables have no costs, there is no surrogate variable for their cost in the master problem, which in turn means there is no need for optimality (point) cuts. When the master solver feeds a solution (choice of arcs) to the LP subproblem, the only question is whether there is a feasible flow solution using those arcs, so only feasibility (ray) cuts are needed.
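In generic notation (mine, not the variable names used in bendersatsp.py): given a master solution $\hat{y}$, the subproblem just asks whether $\{x\ge0: Ax\ge b-B\hat{y}\}$ is nonempty. By Farkas' lemma, \begin{gather*} \{x\ge0: Ax\ge b-B\hat{y}\}=\emptyset \iff \exists\, u\ge0,\ u'A\le0,\ u'(b-B\hat{y})>0,\end{gather*} and each such ray $u$ yields the feasibility cut $u'(b-By)\le0$. With no flow costs, there is no surrogate objective variable to bound, so these are the only cuts required.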
I see the point. Thank you. :)
Hi Dear Prof. Paul Rubin,
I am a second year PhD student and I am using Benders decomposition to solve large MILP instances.
In my implementation, I am having problems with abnormal memory consumption by Cplex.
To check whether this situation is due to my code or to CPLEX, I ran the asymmetric traveling salesman problem example coded with the Benders algorithm supplied with the CPLEX installation package (/opt/ibm/ILOG/CPLEX_Studio126/cplex/examples/src/cpp/ilobendersatsp.cpp).
I changed nothing in the code, and I increased the size of the instance to 120 cities (see instance attached). I chose the lazy constraint callback (argument 0).
With this instance, Cplex memory consumption increases very fast and it ends up exceeding my computer RAM. These are my computer characteristics:
Memory: 7.5 GiB
Processor: Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz x 4
Graphics: Intel(R) Ivy Bridge Mobile
OS type: 64-bit
Disk: 243.8 GB
I'm using Cplex 12.6
Have you ever noticed this constant and very significant increase in memory consumption? Do you have a suggestion on how I might solve it?
Currently, my Benders decomposition algorithm does not manage to solve even instances that plain CPLEX solves in a few seconds.
The instances I'm using are confidential, otherwise I would have sent you one of them.
Thank you in advance for your help.
Best regards
Kaba Keita
Regarding the ATSP example, this model was chosen because it is a clear, easy to understand example; I'm not sure that it is an efficient model for solving TSPs.
As to memory consumption, if instances that CPLEX solves easily as a single MILP run out of memory when using Benders, it may just be a sign that Benders is inappropriate for that problem (or at least that your particular model is inappropriate). Benders cuts are in one respect like any other type of cut: they can nibble at the edges of the (relaxed) feasible region, or they can take a chomp out of it. If they nibble, solving the model can be a slow (and memory intensive) process.
There are things you can do to mitigate memory consumption: compress the node file in memory and/or write it to disk (controlled by parameters); backtrack more often, or at the extreme switch to depth-first search (again, controlled by parameters); make Benders cuts purgeable (optional parameter in the add method). They all risk slowing the algorithm.
I certainly have seen problems that could be solved by Benders but shouldn't be.
Thanks a lot Paul for your help.
I applied your recommendations, but memory consumption did not decrease much.
As to the Benders algorithm, I do not understand how the iterations are done with the callback approach. In the main function, before solving the master, I attach the callback to the master model as usual. But how are the different iterations done? I mean, after the master is solved the first time, how is it run again in the next iteration? There is no loop, and the callback is attached only once in the code.
As to my model, I have a mixed-integer linear model. The model's objective function contains no binary variables, so the master objective function contains only z (the value of the subproblem objective). I wonder whether the Benders algorithm is appropriate for this model?
Thank you in advance for your help.
Best regards
An objective function containing only the continuous variables does not make a model unsuited to Benders decomposition.
As to how the callback operates, I assume you are using a lazy constraint callback. CPLEX tries to solve the master problem via branch-and-cut. At any node where it finds what it thinks is a new incumbent, it calls the lazy constraint callback. The callback either accepts the solution, decides it is infeasible and returns a feasibility cut, or decides it is feasible but underestimates the objective value (assuming minimization) and returns an optimality cut. If the solution is accepted, branch-and-cut proceeds as normal. If a cut is returned, CPLEX adds the cut to the model and either prunes the node or solves the modified node LP. Either way, it continues doing branch-and-cut until either the problem is solved or another incumbent is found (triggering the callback again).
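Stripped of the CPLEX API, the decision the callback makes for each candidate incumbent is a three-way branch. A minimal sketch in Python (the solve_subproblem interface is invented for illustration; it is not part of CPLEX):

```python
def benders_callback(y_hat, z_hat, solve_subproblem, tol=1e-6):
    """Decide what to do with a candidate master solution (y_hat, z_hat).

    solve_subproblem(y_hat) is assumed (hypothetically) to return either
    ('infeasible', ray_cut) or ('optimal', true_cost, point_cut).
    Minimization is assumed throughout.
    """
    result = solve_subproblem(y_hat)
    if result[0] == 'infeasible':
        # Candidate admits no feasible subproblem solution: feasibility cut.
        return ('add_feasibility_cut', result[1])
    _, true_cost, point_cut = result
    if z_hat < true_cost - tol:
        # Master underestimates the real cost: optimality cut.
        return ('add_optimality_cut', point_cut)
    # Candidate is accepted and becomes the new incumbent.
    return ('accept', None)

# Example: a toy subproblem whose true cost is 10; a candidate with z = 5
# underestimates it and triggers an optimality cut.
def toy_subproblem(y_hat):
    return ('optimal', 10.0, 'point-cut-data')

print(benders_callback([1], 5.0, toy_subproblem))   # -> ('add_optimality_cut', 'point-cut-data')
print(benders_callback([1], 10.0, toy_subproblem))  # -> ('accept', None)
```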
If you have further questions, I suggest trying either the CPLEX help forum (if they are specific to CPLEX) or OR-Exchange (if they are more general). Links to both are in the right margin of this page. A blog does not make an effective help forum.
Thank you, dear Paul,
I have one last question about Benders, on two points.
My model is a routing and scheduling model. It contains three sets of big-M constraints: two disjunctive constraints with big M and another set of big-M constraints. Could the big-M formulation make my model inappropriate for Benders?
I solved a small instance with Benders and got the same result as plain CPLEX. But I do not understand the values of z returned when solving the restricted master problem (whose objective function contains only the dummy variable z).
Over the iterations, I get these values for z (z=0, z=0, z=0, z=0, z=508, z=0, z=0, z=53). 53 is the original problem's optimal value. In iteration 4, z=508, which corresponds to the dual objective value (dual objVal = 1462, 1123, 508, 672, 508, 217, 53, 53); then in the next iteration z=0. Shouldn't the value of z increase from iteration to iteration? When I solved the restricted master problem and added the cuts manually, I got this sequence (z=0, z=0, z=0, z=0, z=0, z=0, z=53). I hoped to see the same with Benders, but that is not the case.
Thank you in advance for your help.
Best regards
Kaba Keita
Assuming you are minimizing, I think this could happen. Suppose that the first four "incumbents" found by the master problem have z = 0 but are ruled infeasible by the subproblem. The fifth "incumbent" has z=508 and perhaps is feasible ... and so it is the first actual incumbent accepted by the solver. The next two "incumbents" (z = 0) are either infeasible or generate optimality cuts forcing z to increase above 508 (the current incumbent), so they are ignored. The last "incumbent" (z = 53) is accepted, and then eventually the lower bound reaches 53 (to within tolerance) and the solver declares victory and stops.
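To see why the reported z values need not be monotone, it may help to simulate which candidates survive: only those that pass the subproblem check become incumbents, regardless of their z. A toy trace in Python (the acceptance rule is deliberately simplified and the numbers are illustrative only):

```python
def accepted_incumbents(candidates):
    """candidates: list of (z, feasible, true_cost) tuples, in the order the
    master proposes them; minimization assumed. A candidate is accepted only
    if it is subproblem-feasible, its z does not underestimate the true cost,
    and it improves on the current incumbent (a simplified rule)."""
    incumbents = []
    best = float('inf')
    for z, feasible, true_cost in candidates:
        if feasible and z >= true_cost and true_cost < best:
            best = true_cost
            incumbents.append(true_cost)
    return incumbents

# The master proposes z values that jump around; only two candidates ever
# survive the subproblem check and become incumbents.
trace = [(0, False, None), (0, False, None), (508, True, 508),
         (0, False, None), (53, True, 53)]
print(accepted_incumbents(trace))  # -> [508, 53]
```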
Hi,
I have been using callbacks for my Benders decomposition, and apparently, with different numbers of threads, some of the BD solutions (approximately 10 out of 400 problems) are not exactly the same as the MIP solutions. I tried different problems on my PC and on an HPC with different thread counts, and there is absolutely no pattern to which one works accurately. Sometimes everything is the same, but my PC and HPC solutions differ.
Also, when I fed the MIP solution to the BD, it actually accepted it! But without feeding it the MIP's best feasible solution, it reports a best solution worse than that.
I was wondering if you know why this has been happening to my BD. Is there some kind of bug in some versions of CPLEX for the lazy constraint callback?
Thanks
A wide variety of factors can alter the solver's search path, so getting different solutions on different systems (or on the same system, when using different numbers of threads) is normal. When the objective values differ, not just the variable values, the key question is whether the inferior objective values meet the termination criteria. (I'm assuming none of the solution runs terminated due to time, memory or node limits). If the inferior solutions are within the accepted gap limits, this is again normal. The only way to avoid it would be to set the allowed gap to zero. If the inferior solution is outside the allowed gap range, then something is wrong. I'm not aware of any bugs that would explain the difference. My suggestion in that case would be to go to the CPLEX forum for help. (There's a link to the IBM optimization forums in the right margin above.)
Hi Professor,
CPLEX 12.7 has added its own BD feature.
Have you compared your BD algorithm's efficiency with CPLEX 12.7's?
THANKS
Keji
Hi Keji,
I just downloaded 12.7 in the last couple of days and have not had any time to play with it. When I do get some free time (things are rather busy at the moment), I'll run a comparison and either post the results or add a comment to this post.
Thanks.
I look forward to your results. BTW, I am using BD now; feel free to let me know if I can help.
I just posted a test of the new BD options in CPLEX 12.7: https://orinanobworld.blogspot.com/2016/12/support-for-benders-decomposition-in.html. It's very easy to use and seems to work fine (based on a single test problem).
It is easy to use. I agree.
But I found it is still much slower than the standard way in most formulations. This transportation problem seems appropriate for BD, but the computational results contradicted that. Do you know why that happens?
Keji: You're saying that BD as implemented by IBM in 12.7 is slower than BD done by creating separate master problems and subproblems? Assuming that, in the separate problem version, you are just adding the usual dual-based optimality and feasibility cuts (and just one cut per integer solution), I can't see any reason why IBM's implementation would be significantly slower.
Paul,
"Standard" is as defined in your post (https://orinanobworld.blogspot.com/2016/12/support-for-benders-decomposition-in.html): just a single MIP model, using no decomposition.
My question is: why is BD slower than the standard approach in most cases, even in cases that seem appropriate for BD?
When Jack Benders first published it, computer memory was _extremely_ tight, and I'm not sure that sparse matrix methods were in vogue yet. So partitioning (splitting a problem that wouldn't fit into memory into two that would) might be a necessity. Today, with efficient sparse matrix methods and lots of memory, it's no longer clear that a problem that _can_ be partitioned _should_ be partitioned. To a large extent it comes down to how tight the Benders cuts are, how quickly they are generated, and how efficiently the solver manages new cuts in the master problem (some of which will become irrelevant as other cuts are added). Also, you need to keep in mind that there are time costs associated with passing information between problems, building a new subproblem or modifying an existing one, repeating the presolve of the subproblem, etc.
I don't know about Benders being slower in "most" cases, but I think it's safe to say that there is no reason to assume it will automatically be faster just because a model has a structure that supports decomposition.
Thanks Paul.
That leads to a question. If BD cannot be guaranteed to be automatically faster, and the BD code in CPLEX 12.7 doesn't support tightening the Benders cuts ourselves (short of coding BD completely by ourselves), then the BD feature in CPLEX 12.7 is of limited use to some extent.
I disagree. Assume you have a problem with a structure that allows BD. BD might or might not help with solution time; the only way to know is to try. The Benders support in 12.7 makes it much easier to try Benders and see if you get any benefit from it.
What I mean is that the key part of BD now is how to generate the cuts added to the master problem, and CPLEX 12.7 doesn't offer any convenience there.
For automatic BD, as you said, with efficient sparse matrix methods and lots of memory there is not much improvement.
If you have a special insight into the problem that allows you to create stronger cuts than what CPLEX automatic BD generates, then I agree you should do it manually. I disagree with your second statement. What I said before was that "standard" BD (what CPLEX does automatically) doesn't always improve over no BD; but sometimes it does, and with automatic BD it's now very easy to find out whether your model is one that benefits from BD.
Got it Paul. Thanks for your reply.
Hi Paul,
Thanks for sharing your thoughts on Benders decomposition.
I am trying to use Benders decomposition for my problem, in which the subproblem can be decomposed into N smaller problems solvable in parallel. In fact, the subproblem has N^3 variables and N^3 constraints, and its constraint matrix consists only of independent blocks (not L-shaped). The subproblem can be decomposed into N problems, each with N^2 variables and N^2 constraints. I want to use IloLazyConstraintsCallback to add these N different cuts. Part of my code in IloLazyConstraintsCallback is:
for (i = 0; i < N; i++) {
    GenCut(X, Xsol, i, cplx_subp, U, sub_Obj, cutLhs, cutRhs);
    add(cutLhs + eta[i] >= cutRhs);
}
However, CPLEX throws an error in the second iteration of the loop when it calls GenCut. I am not sure how IloLazyConstraintsCallback works -- whether it is smart enough to decompose the subproblem into N problems or not. Do you have any suggestions on how to get this done? I appreciate any thoughts.
Regards,
Diako
Diako,
Your approach seems correct: loop through the subproblems and add a cut for each one *if* a cut can be generated. (I don't see any test in your loop that GenCut actually produces a cut on subproblem i.)
If the problem is not a null argument due to GenCut not finding a cut, then I suggest taking your question to the CPLEX user forum. There's a link to the IBM forums in the right margin, near the top; from there, you can drill down to the CPLEX forum. The comments section of a blog is ill-suited for debugging.
Thanks for your comment. Since every subproblem_i is feasible, I expect that to always be true, so I thought a test was not required. Just to mention, this is my GenCut:
void GenCut(const IloVar2Dim X, const IloNum2Dim Xsol, IloInt i_, IloCplex Cplx_subp, const IloNumVarArray U, const IloNumVarArray W, IloObjective SubObj, IloExpr cutLhs, IloNum& cutRhs)
{
IloInt k ,j, nN2 = N*N;
IloModel SubpMod = Cplx_subp.getModel();
SubpMod.remove(SubObj);
IloExpr objExpr= SubObj.getExpr();
objExpr.clear();
for(j=0;j < N; j++) {
objExpr += W[ i_ * N + j];
for (k = 0; k < N ; k++ ){
objExpr -= Xsol[i_][k] * U[i_ * nN2 + j*N + k];
}
}
SubObj.setExpr(objExpr);
SubpMod.add(SubObj);
objExpr.end();
Cplx_subp.solve();
cutRhs = 0;
for(j=0; j< N ; j++)
cutRhs +=Cplx_subp.getValue(W[i_ * N + j]);
IloNum valCutLhs=0;
cutLhs.clear();
for (j = 0; j < N; j++ )
for (k = 0; k < N; k++ )
cutLhs += X[ i_ ][k] * Cplx_subp.getValue(U[i_ * nN2 + j*N + k]) ;
Cplx_subp.clear();
} // END GenCut
I will post it on CPLEX Forum.
Thanks.
You appear to clear your subproblem model right before exiting the routine. Are you sure that doesn't delete stuff you need for the next subproblem?
Oops. Yes, that's right. Thanks for your comment. I fixed that, and now I don't have the error anymore.
Obviously, the optimal solutions of the duals of the subproblems must be positive. However, CPLEX thinks zero is optimal! Now I need to spend time figuring out the issue.
Dr. Rubin, many thanks for your useful blog. Is there any specific command in CPLEX to report the run time spent solving the MP and SP at each iteration? (Let's say we don't use any callbacks, so the MP is solved at every iteration.)
Sorry, but I don't believe there is any way to get run time breakdowns for MP and SP, either on a per-call basis or even aggregate (total time in MP/SP over the duration of the run).