tag:blogger.com,1999:blog-8781383461061929571.comments2019-05-20T09:00:36.890-04:00OR in an OB WorldPaul A. Rubinhttp://www.blogger.com/profile/05801891157261357482noreply@blogger.comBlogger1657125tag:blogger.com,1999:blog-8781383461061929571.post-40881117240895194542019-05-20T09:00:36.890-04:002019-05-20T09:00:36.890-04:00Well, that's a bit unsettling at minimum. Give...Well, that's a bit unsettling at minimum. Given my experience with other M$ software in the past, though, I can't say I'm shocked. I guess it could be worse. A long, long time ago I was using a pseudorandom number generator ... can't recall in what language ... and someone published a report that it cycled. I tested, and sure enough it cycled (after 100 or so observations, I think).<br />Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-60582339347812432722019-05-20T04:39:06.451-04:002019-05-20T04:39:06.451-04:00Fun fact: the random number generator in .NET prod...Fun fact: the random number generator in .NET produces more odd numbers than even. So just relying on that might skew your simulation results:<br />https://github.com/dotnet/corefx/issues/23298Richard Oberdieckhttps://www.blogger.com/profile/03386871665462162276noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-23756300048420314562019-05-14T09:39:15.147-04:002019-05-14T09:39:15.147-04:00Thanks for the link. CPLEX also has a "determ...Thanks for the link. CPLEX also has a "deterministic parallel search mode" that can be set via a parameter (along with parameters for making various time limits deterministic). I don't think I've ever tried the "deterministic parallel search mode", mainly because my desire for consistency is usually dwarfed by my desire for speed. 
It would be interesting to know whether results from either CPLEX or Xpress were 100% reproducible with the right settings (assuming same problem inputs, same hardware etc.).Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-53520848621645343612019-05-14T09:27:03.267-04:002019-05-14T09:27:03.267-04:00FICO Xpress MIP solver has some guarantees on the ...FICO Xpress MIP solver has some guarantees on the reproducibility of a solution path:<br /><br />https://community.fico.com/s/blog-post/a5Q8000000082YvEAI/fico1154Anonymousnoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-85429461614089448832019-04-30T13:34:05.138-04:002019-04-30T13:34:05.138-04:00Thanks for the reference! It's interesting, an...Thanks for the reference! It's interesting, and might prove helpful to an acquaintance who is working on a real(ish)-world unconstrained MOCO. Now I just have to find time to read it. :-)Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-70213396829109124032019-04-30T13:28:01.286-04:002019-04-30T13:28:01.286-04:00Thanks, Gert-Jaap. That discussion is interesting,...Thanks, Gert-Jaap. That discussion is interesting, and it suggests another possibility to me. Again, we start with a global variable storing the objective value of the best integer-feasible solution so far. I'll also assume that the objective function contains a single variable, say minimize z. (This is to reduce dual degeneracy caused by the trick I'm going to suggest.) Each time the global objective value changes, set a boolean flag for each thread. Each time a lazy constraint callback fires, the callback checks the flag corresponding to its thread. If it sees the flag is set, it clears the flag and adds a lazy constraint of the form z <= latest incumbent value. 
That will not only prevent repetitive generation of cuts at a node that should be pruned, it may also allow faster pruning of other nodes.<br /><br />I thought I would also mention that the business about cloning in that discussion only applies to legacy callbacks. Generic callbacks are not cloned by CPLEX, which is one reason that they are more temperamental about thread safety.Paul A. Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-86700444001044405372019-04-30T11:06:25.226-04:002019-04-30T11:06:25.226-04:00Dear Professor,
Thanks a lot for your helpful blog...Dear Professor,<br />Thanks a lot for your helpful blog post. It is a very clear introduction to the new setup of CPLEX callbacks. <br />I also liked the last comment you made, that different threads can find suboptimal feasible solutions, because of a lack of communication between the threads (which in my view is fine because this would also slow down the solving process). However, in some of the projects that I've done, this issue was present as well and consumed a lot of time. I've asked IBM and there is an easy way around this, if you don't care about deterministic results, which might be interesting: https://www.ibm.com/developerworks/community/forums/html/topic?id=e72f622a-b899-4e96-a789-254587a39c7d&ps=25<br /><br />Best,<br />Gert-JaapGert-Jaaphttps://www.blogger.com/profile/08452235302203809872noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-66593006679433078762019-04-30T05:35:43.804-04:002019-04-30T05:35:43.804-04:00Nice post. You might be interested in the work of ...Nice post. You might be interested in the work of Schulze, Klamroth, and Stiglmayr on unconstrained MOCO problems. If you want a representation of the Pareto front and can live with a set composed of only the extreme-supported solutions, you can have it in polynomial time. https://link.springer.com/article/10.1007/s10898-019-00745-6 Unknownhttps://www.blogger.com/profile/02239605959360493928noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-56659440997235475742019-03-12T23:20:59.458-04:002019-03-12T23:20:59.458-04:00You're welcome, Hugh. Please do let me know an...You're welcome, Hugh. Please do let me know any suggestions for changes.Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-22674996710949867932019-03-12T20:44:38.908-04:002019-03-12T20:44:38.908-04:00Thanks Paul!
Thanks Paul!<br />Hugh Medalhttps://www.blogger.com/profile/03468597595207233576noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-54945029675191252842019-03-04T14:59:51.397-05:002019-03-04T14:59:51.397-05:00I'm guessing that the $R_{ik}$ are parameters,...I'm guessing that the $R_{ik}$ are parameters, not variables, and that what you mean by overall reliability (to be maximized) is the product of those $R_{ik}$ for which $x_{ik} = 1$. In other words, $Rel=\prod_{i,k:x_{ik}=1}R_{ik}$. You can write this as $Rel=\prod_{i,k}R_{ik}^{x_{ik}}$, which looks even less linear but is a step in the right direction. Now take logs: $\log Rel=\sum_{i,k}\left(\log R_{ik}\right)x_{ik}$. Maximizing $\log Rel$ will maximize $Rel$, and you have a linear objective function.<br />Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-45692424291511462482019-03-04T14:48:49.281-05:002019-03-04T14:48:49.281-05:00Dear Paul Sir,
Can you please help me?
I am new to...Dear Paul Sir,<br />Can you please help me?<br />I am new to CPLEX. I want to assign N jobs to M machines so as to maximize overall reliability. I am having difficulty implementing the multiplication equation.<br />Overall reliability is calculated by the following equation:<br />Rel = ∏ R(i,k) <br />where i=(1 to N) and k=(1 to M)<br />and R(i,k) is the reliability of task i on processor k<br /><br />I am using Rel as a float objective function which I have to maximize, and a binary decision variable x(i,k) which is 1 if job i is assigned to machine k and 0 otherwise.<br />I have some other constraints which ensure that no two jobs will overlap on the same machine. I am comfortable using them. But I am unable to do the multiplication in a constraint such as<br />R(i,k) * x(i,k) >= Rel; //where i=(1 to N) and k=(1 to M)<br /><br />Also, another concern is that if x(i,k) = 0, it should not make the overall reliability zero.<br /><br />Rel and R(i,k) are floats in the range 0 to 1. <br /><br />Please help me. Thank you; I appreciate your work and dedication to this blog. <br />Unknownhttps://www.blogger.com/profile/02640734944568632776noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-55915223590454082002019-03-03T10:56:49.677-05:002019-03-03T10:56:49.677-05:00If $x$ is binary, $x^k = x$. If $x$ is continuous ...If $x$ is binary, $x^k = x$. If $x$ is continuous (and, presumably, $y$ is binary), then you are dealing with a nonlinear inequality constraint, which will make your problem harder to solve. You can still use what I wrote above, but my $y$ is your $x^k$, my $x$ is your $y$, and the bounds $L$ and $U$ apply to $x^k$.Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-42582243709369059462019-03-03T03:08:07.239-05:002019-03-03T03:08:07.239-05:00Hello Paul
you are explaining the problem very goo...Hello Paul<br />you are explaining the problem very well. This helps me a lot, so thank you.<br /><br />But how can I handle the same problem with a little change:<br />Z = x^k * y, where k is a real numberAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-24374839404420965442019-02-23T15:52:33.487-05:002019-02-23T15:52:33.487-05:00I don't know about "work great", but...I don't know about "work great", but I'll show results in the next post that suggest (but do not definitively prove) that it might "work well enough".<br /><br />I'm not sure about the best way to present a Pareto frontier myself. I did some pairwise plots of criteria, but plotting all $\binom{M}{2}$ pairwise plots might cause catatonia in the decision makers. I haven't tried this myself, but one possibility is to present selected pairwise plots ("selected" meaning comparisons in which the DM expresses an interest), and link them so that mousing over a point in one plot highlights the same solution in the other plots. It might or might not be effective as a decision aid, and in any case it's pretty decent eye candy. :-)Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-27118157642173809252019-02-23T15:41:58.355-05:002019-02-23T15:41:58.355-05:00I've often wondered about this algorithm to ge...I've often wondered about this algorithm to generate the Pareto front. Seems too trivial to work great, but it is easy enough to try.<br /><br />On the other hand, does it really make sense to present a Pareto front if M is larger than maybe 5?pihttps://www.blogger.com/profile/00954503509976411806noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-6395735025228436772019-02-22T19:54:45.324-05:002019-02-22T19:54:45.324-05:00Nejat,
You did not indicate whether you are using...Nejat,<br /><br />You did not indicate whether you are using "solve" or "populate" to generate solutions. (The solution pool would be used with either one.) I believe that CPLEX works differently depending on which you use. To get all the solutions, I would use "populate". Even then, several parameters set limits that can cause CPLEX to stop before all optimal solutions are found. To be sure to find them all, you can keep calling populate until it comes back with an empty node log (nothing more to do). This assumes that you set the pool size limit high enough to hold all the optimal solutions.<br /><br />I did not understand what you meant about "triangle constraints", and I really do not know whether using a user cut callback at the beginning but then turning it off after X minutes would be helpful. (By the way, the second phase would still be branch-and-cut, since CPLEX will generate cuts even if you do not.) Setting the objective value as a constraint is unlikely to help, and done the wrong way can slow the solver. One possibility to help with memory is to set parameters that let CPLEX swap parts of the node tree out to disk. Another, if you have many constraints, is to make some of the constraints lazy constraints. (This may slow progress on the bound, but may help conserve RAM.) Finally, note that using parallel threads will massively increase RAM consumption, so you can always try cutting back on the number of threads used (again, at the cost of speed).Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-57584018546850407962019-02-22T12:32:15.580-05:002019-02-22T12:32:15.580-05:00Hi Dr. Rubin,
Thank you very much for this post an...Hi Dr. Rubin,<br />Thank you very much for this post and your blog. First of all, I am quite a newbie in OR. I have a partitioning problem where I need to find all optimal solutions. By following your post and the IBM manual (I set 'SolnPoolAGap' to 0.5), I was able to find many optimal solutions. Based on this, I would like to ask 2 questions:<br />1) Can we say for sure that we get all optimal solutions?<br />2) I ran into some memory problems for some instances; please note that the memory (RAM) I use is quite large: 256 GB. So, I am thinking of an alternative way which consumes less memory when enumerating all optimal solutions. I have two ideas (but I do not know if they are feasible):<br /> *) Using a cutting plane approach. Since we would not add all triangle constraints, it might help us save memory. However, I am not sure how to implement this. If I first generate user cuts within X minutes (Branch&Cut), and then solve the same instance with the generated tight user cuts (Branch&Bound), would this be a good idea?<br /> *) I can (efficiently) solve the problem for one optimal solution and get the objective function value. Then, use this objective function value as a constraint, and solve the same instance again for all optimal solutions.<br /><br />I am open to all your suggestions. Thanks in advance.<br /><br />NejatAnonymousnoreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-78254288350532028702019-02-10T21:50:06.235-05:002019-02-10T21:50:06.235-05:00Thanks. This problem is belong to Bilinear problem...Thanks. This problem belongs to the class of bilinear problems. Maybe I should refer to more papers about that. 
ORAndYouhttps://www.blogger.com/profile/04268693961187233051noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-13636713485669585502019-02-09T12:04:56.871-05:002019-02-09T12:04:56.871-05:00Standard Benders definitely does not apply to your...Standard Benders definitely does not apply to your problem, nor can I see a way to apply any obvious generalization of Benders to it. The first difficulty is that, in the LP subproblem, the inherited value of $y$ appears in the constraint coefficients, which means that it affects the subproblem solution nonlinearly. For instance, in your example, the solution to the subproblem is $x=\frac{5}{2+\hat{y}}$ (with objective value $\frac{25\hat{y}}{2+\hat{y}}$), where $\hat{y}$ is the value of $y$ inherited from the master problem. The second difficulty is that you need to translate the subproblem solution into linear constraints on $y$, not the specific value $\hat{y}$. The genius of what Benders did was the recognition that LP duality would let him generalize a subproblem solution from one specific value of the discrete variable to a valid inequality on all (feasible) values. I do not see any way to do that when $y$ shows up in the subproblem constraint coefficients.<br /><br />As far as any better methods go, I am not up to date on mixed-integer nonlinear programming, and I am afraid I have nothing to suggest.Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-15479293239756784172019-02-09T06:35:16.579-05:002019-02-09T06:35:16.579-05:00Hi Paul,
 There is a question confusing me. Can ...Hi Paul,<br /> There is a question confusing me. Can standard Benders decomposition (not generalized Benders decomposition) be applied to some special non-linear problems? For example, min 5*x*y, s.t. 2*x+x*y>=5, x∈R+, y∈Z+. The original problem is non-linear, but once we fix y's value, we can obtain a linear subproblem, and we keep the complicated variables in the master problem. Benders cuts are generated as usual in standard Benders decomposition. It seems to be correct. I have tried to find some examples which apply this method, but I can't find any. So, is Benders decomposition applicable to this kind of non-linear problem (with a term like x*y, where x is a continuous variable and y is an integer or binary variable)? If not, how about generalized Benders decomposition, or are there any other better methods?ORAndYouhttps://www.blogger.com/profile/04268693961187233051noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-77508144264181619282019-01-28T10:38:49.252-05:002019-01-28T10:38:49.252-05:00I've tended to maintain blissful ignorance of ...I've tended to maintain blissful ignorance of the algorithms used since they got creative and started factoring matrices rather than doing Gaussian pivoting. (This means I pretty much tuned out before you were born.) That said, there's always the option to keep one tree and just use the extra threads for various vector operations, such as refactoring the basis matrix or multiplying the RHS of some node problem by a known dual solution to see what you get. In fact, I don't know whether other solvers make copies of some or all of the problem matrix for each thread.<br /><br />Apparently the code wonks at IBM decided that this was the route that gave the best "average" performance, and I bow to their superior knowledge about this stuff. 
I do my work on a single PC, so I have no experience with clusters, but my guess is that when the various "cores" you are using are actually on different machines or blades, with their own memory, the matrix copy thing may be a bit of a non-issue. On a single desktop PC, it clearly can bite you.<br /><br />Lastly, regarding speed, as you divvy up the work into more and more threads, you encounter more and more effort to coordinate them, to do the division of load (and reintegration of the results), and to handle requisite communication. So there is what I think is an unavoidable law of diminishing returns, and at some point the amount of back and forth exceeds the amount of actual progress. (In other words, you've reinvented Congress.) Futzing with the code base might change the number of threads where gain turns into loss, but there will always be a limit.Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-86949180318100509212019-01-28T03:06:39.826-05:002019-01-28T03:06:39.826-05:00Thanks for sharing this, this is really interestin...Thanks for sharing this, this is really interesting. And computational tests seem to confirm that with the current software setup there is definitely an upper bound to the number of threads that give a speedup (seems to be 10-20 at the very most).<br /><br />My question is though whether you think that we could fundamentally change this by changing the architecture of the algorithms? I.e. do all the nodes need to have a local copy of the model? 
Or is this just the way it is going to be?Richard Oberdieckhttps://www.blogger.com/profile/03386871665462162276noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-24728148975036571562018-12-16T14:36:40.293-05:002018-12-16T14:36:40.293-05:00That seems like a good idea on the surface, but I ...That seems like a good idea on the surface, but I just tried it, and for some reason it slowed down progress on the bound. Partly this was fewer nodes being investigated per minute (perhaps because the model gets bigger). Even at the same node count, though, the bound was weaker than without this. Beats me why.Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.comtag:blogger.com,1999:blog-8781383461061929571.post-53492652761646438912018-12-16T14:14:44.231-05:002018-12-16T14:14:44.231-05:00You would maximize $b'x + \sum_{i=0}^N e^{W_i}...You would maximize $b'x + \sum_{i=0}^N e^{W_i} y_i$, which is linear in $x$ and $y$. Any integer linear programming solver that recognizes SOS2 constraints could be used.Paul Rubinhttps://www.blogger.com/profile/05801891157261357482noreply@blogger.com
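The log transformation in Paul's reply to the reliability question (maximize $Rel=\prod_{i,k}R_{ik}^{x_{ik}}$ by maximizing the linear objective $\sum_{i,k}(\log R_{ik})x_{ik}$) can be sanity-checked with a small brute-force sketch. The reliability numbers below are made up, the no-overlap constraints from the question are omitted, and a real model would hand the linear objective to a MIP solver such as CPLEX rather than enumerate assignments:

```python
import math
from itertools import product

# Made-up reliabilities R[i][k] for N = 3 jobs on M = 2 machines.
R = [[0.90, 0.95],
     [0.80, 0.99],
     [0.85, 0.70]]

def best_assignment(score):
    """Enumerate all job-to-machine assignments, return the one maximizing score."""
    N, M = len(R), len(R[0])
    return max(product(range(M), repeat=N), key=score)

# Nonlinear objective: product of the chosen reliabilities.
def prod_obj(a):
    return math.prod(R[i][k] for i, k in enumerate(a))

# Linearized objective: sum of log-reliabilities (what the MIP would see).
def log_obj(a):
    return sum(math.log(R[i][k]) for i, k in enumerate(a))

# Because log is monotone increasing, both objectives pick the same assignment.
assert best_assignment(prod_obj) == best_assignment(log_obj)
print(best_assignment(log_obj))  # -> (1, 1, 0): machine index chosen per job
```

In the actual MIP, each term $(\log R_{ik})x_{ik}$ is linear in the binary variable $x_{ik}$, so no enumeration is needed; the brute force here only confirms the two objectives agree.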
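For the $Z=x^k y$ follow-up, the linearization Paul points back to (a binary variable times a bounded continuous term) is the standard four-inequality construction. The inequalities below are the textbook version rather than a quote from the post, and the bounds are made up; a quick numeric check that they pin the product down:

```python
# Standard linearization of z = v * y with y binary and L <= v <= U:
#   z <= U*y,  z >= L*y,  z <= v - L*(1 - y),  z >= v - U*(1 - y)
# For y = 0 these force z = 0; for y = 1 they force z = v.
L_BOUND, U_BOUND = -3.0, 7.0  # made-up bounds on the continuous term v

def z_interval(v, y):
    """Interval of z values satisfying all four inequalities."""
    lo = max(L_BOUND * y, v - U_BOUND * (1 - y))
    hi = min(U_BOUND * y, v - L_BOUND * (1 - y))
    return lo, hi

# Brute-force check: the interval collapses to the single point v * y.
for v in [-3.0, -1.5, 0.0, 2.5, 7.0]:
    for y in (0, 1):
        lo, hi = z_interval(v, y)
        assert abs(lo - hi) < 1e-9 and abs(lo - v * y) < 1e-9
```

In a MIP model the four inequalities would simply be added as constraints on a new variable z; the loop here only verifies that they leave z no freedom beyond v * y.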