A change I made to an answer I posted on OR-Exchange, based on a comment from a well-informed user of OR-X, might be worth repeating here on the blog. It has to do with issues that can occur when using "big M" type integer programming models, a topic I've covered here before.
As I mentioned in that previous post, part of the problem with "big M" formulations stems from the inevitable rounding error in any floating-point (non-integer) computations done on a computer. One particular manifestation of rounding error (whether or not there are "big M" coefficients in the model) is that the double-precision values assigned to integer variables in a solution will not necessarily be integers. With surprising frequency, I see users of MIP software demanding to know why the answer they got for an integer variable was 0.9999999999975 or 1.000000000032 rather than exactly 1. The answer has two parts: (a) rounding error is pretty much inevitable; and (b) the software designers accepted that reality and decreed that anything "close enough" to the nearest integer counts as an integer for purposes of deciding whether a solution is feasible. (Why the software prints all those decimal places rather than rounding for you is a separate question that I will not address.)
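To make part (b) concrete, here is a minimal sketch (plain Python, nothing solver-specific, and the function name is my own invention) of the "close enough" test a solver effectively applies:

```python
# A toy version of the "close enough" test: a value is treated as integral
# if it lies within the integrality tolerance of the nearest integer.
def is_integer_feasible(value, tol=1.0e-5):    # 1.0e-5 matches CPLEX's default
    return abs(value - round(value)) <= tol

print(is_integer_feasible(0.9999999999975))    # True
print(is_integer_feasible(1.000000000032))     # True
print(is_integer_feasible(0.9999))             # False: off by 1e-4 > 1e-5
```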
So the solver generally has a parameter that gives an "integrality tolerance", just as it has a (typically separate) parameter for how close the expression in a constraint has to be to the allowed value(s) to be considered feasible (again, a nod to rounding error). In CPLEX, the name of the integrality tolerance parameter is some variant of "MIP.Tolerances.Integrality" (in earlier versions, the much more compact "EpInt"), and its default value (as of this writing) is 1.0E-5.
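For the record, here is how you would inspect and change those tolerances using CPLEX's Python API (a sketch on my part; with a different API the parameter path will differ, but the names should be recognizable):

```python
import cplex

cpx = cplex.Cplex()
# Read the current integrality tolerance (1.0e-5 by default as of this writing).
print(cpx.parameters.mip.tolerances.integrality.get())
# Tighten it if the default proves too loose.
cpx.parameters.mip.tolerances.integrality.set(1.0e-8)
# The constraint feasibility tolerance mentioned above is a separate parameter.
print(cpx.parameters.simplex.tolerances.feasibility.get())
```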
So now I'll connect that to "big M". One of the more common uses of "big M" is to capture a logical constraint of the form "if condition is true then expression is limited". For instance, you might want to build into the model that if a warehouse is not open (the condition) then shipments from it must equal (or cannot exceed) zero. Algebraically, this frequently appears as $$f(x) \le My$$ where $f(x)$ is some (hopefully linear) expression involving variable $x$ and $y$ is a binary variable (with 1 meaning the constraint is relaxed and 0 meaning it is enforced). In a world of exact arithmetic, and with $M$ chosen large enough, $y=1$ means the value of $f(x)$ is essentially unbounded above, while $y=0$ means $f(x)\le 0$.
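As a concrete (and hedged) illustration, here is roughly what the warehouse version of that constraint looks like using the docplex Python package; the variable names and the value of $M$ are of course mine:

```python
from docplex.mp.model import Model

M = 1.0e10                                    # a deliberately oversized big M
mdl = Model(name="warehouse")
ship = mdl.continuous_var(name="ship", lb=0)  # f(x): units shipped
is_open = mdl.binary_var(name="is_open")      # y: 1 = warehouse open

# If the warehouse is closed (is_open = 0), shipments are forced to zero;
# if it is open (is_open = 1), the constraint is effectively vacuous.
mdl.add_constraint(ship <= M * is_open)
```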
This is where the issue of integrality tolerance sneaks in. Suppose that we choose some really large value of $M$, say $M=10^{10}$, and that the solver decides to accept a solution in which $y=10^{-6}$ (which is within CPLEX's default integrality tolerance). From the solver's perspective, $y=0$. Logically, that should mean $f(x)\le 0$, but given the rounding error in $y$ and the large value of $M$, what you actually get is $f(x)\le 10,000$. So, borrowing from my earlier example, I've got a closed warehouse shipping 10,000 units of whatever. Oops.
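You don't need a solver to see the damage; the arithmetic alone tells the story (numbers taken straight from the paragraph above):

```python
M = 1.0e10     # the "really large" big M
y = 1.0e-6     # nonzero, but within the default 1e-5 integrality tolerance
print(M * y)   # 10000.0 -- the upper bound a "closed" warehouse actually gets
```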
A common reaction to this (and by "common" I mean I've seen it multiple times on help forums) is to say "I'll set the integrality tolerance to zero". Good luck with that. First, it's not guaranteed the software will let you. (CPLEX will.) Second, even if it does, you might get a suboptimal solution, or be told your perfectly feasible problem is actually infeasible, because the software couldn't get the true optimal solution (or perhaps any solution) to have zero rounding error in all the integer variables.
If you run into incorrect solutions in a "big M" model, some combination of tightening the integrality tolerance (but not all the way to zero) and ratcheting down the size of $M$ may fix things ... but, as with all aspects of MIP models, there are no guarantees.
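In the warehouse sketch above, that combination might look something like the following (again docplex, again my own names and numbers; the point is that $M$ comes from the data rather than from thin air):

```python
from docplex.mp.model import Model

capacity = 750                                # the most this warehouse could ever ship
mdl = Model(name="warehouse_fixed")
ship = mdl.continuous_var(name="ship", lb=0)
is_open = mdl.binary_var(name="is_open")

# Remedy 1: use the smallest valid M -- here the physical capacity -- not 1e10.
mdl.add_constraint(ship <= capacity * is_open)
# Remedy 2: tighten (but do not zero out) the integrality tolerance.
mdl.parameters.mip.tolerances.integrality = 1.0e-8
```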