*know* it applies to CPLEX.

Many factors influence the sequence of pivots a solver does on an LP model and the sequence of nodes it grinds through in a MIP model. If you solve the same model two different ways, using the same or different software, you should end up with the same optimal objective value (to within rounding); that's the definition of "optimal". If you don't, either you've found a bug in the software or, more likely, your problem is mathematically ill-conditioned. Even if the problem is well-conditioned, and you get the same objective value both places, you may not get the same optimal solution. You should if the solution is unique; but if there are multiple optima, you may get one optimal solution one time and a different one the next time.

This is true even if you use the same program (for instance, CPLEX) both times. Small, seemingly innocuous things, such as the order in which variables or constraints are entered, or the compiler used to compile the respective CPLEX versions, can make a difference. (Different C/C++ compilers may encode floating-point operations differently.)
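To see why something as innocuous as entry order can matter, note that floating-point addition is not associative: summing the same coefficients in a different order can change the low-order bits of the result. A minimal illustration in plain Python (not CPLEX-specific; the numbers are just for demonstration):

```python
# Floating-point addition is not associative: the same three terms
# summed in two different orders yield two different doubles.
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # 0.6000000000000001
right_to_left = a + (b + c)   # 0.6

print(left_to_right == right_to_left)  # False
```

A difference in the sixteenth decimal place sounds harmless, but it is exactly the kind of perturbation that can send a pivot selection or a branching decision down a different path.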

The bigger (to me) issue, though, is that if you dump the model to a file and solve the file with the interactive solver, you are *not solving the same model*. You are solving what is hopefully a close (enough) replica of the original model. The binary output format (SAV for CPLEX) will generally be closer to the original model than the text formats (LP or MPS for CPLEX) will be. I'm not an expert on the SAV file format, so I'll allow that the SAV file may be an exact duplicate. That requires not only full fidelity in the representation of coefficients but also that the internal model you get by reading and processing the SAV file has all variables and constraints specified in exactly the same order that they appeared in the internal model from which the SAV file was created. I'm not sure that's true, but I'm not sure it isn't.

What I am sure of is that the text version is *not*, in general, a full-fidelity copy. Not only do we have the same questions about preservation of order, but we also have the rounding errors incurred by converting a double-precision binary number to a text representation and then back to a double-precision binary value. If your model is well-conditioned, *hopefully* the differences are small enough not to affect the optimal solution. They may still affect the solution process, though. Suppose that you have an LP model with some degeneracy (which is not a sign of ill-conditioning). Degeneracy means a tie occurs occasionally when selecting pivot rows. At least it's a tie in theory; in practice, tiny rounding errors may break the tie, and those rounding errors may well differ between the original model and the not-quite-perfect clone you get by saving a text version of the model and then reading it back in. So the solver working on the original model pivots on one row, the solver working on the model from the text file pivots on a different row, and the solution time, and possibly the solution itself, can change.
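The round-trip loss is easy to demonstrate without CPLEX at all. Writing a double with fewer than 17 significant decimal digits (as text formats with limited precision do) and reading it back generally does not recover the original value. A stdlib-only sketch:

```python
x = 1.0 / 3.0  # a coefficient with no exact finite decimal representation

# A text format keeping, say, 10 significant digits loses information:
# the round trip produces a slightly different double.
ten_digits = float(f"{x:.10g}")
print(ten_digits == x)          # False

# 17 significant digits are enough to round-trip any IEEE 754 double exactly.
seventeen_digits = float(f"{x:.17g}")
print(seventeen_digits == x)    # True
```

The perturbation in the first case is tiny, but it is precisely the sort of low-order noise that can break a degenerate tie one way in one run and the other way in the next.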

The first immediate take-away from this is that if you want to compare the solution your code gets to the solution the interactive solver gets, use the binary file format. If you also want to visually check the model, save it as both binary and text files (and feed the binary file to the interactive solver). Don't rely on the text file to replicate solver behavior.
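By contrast, a binary format can carry a coefficient's exact bit pattern, which is one reason to prefer SAV when you want the interactive solver to see the same numbers your code saw. As an analogy (this is stdlib `struct`, not the actual SAV layout):

```python
import struct

x = 1.0 / 3.0

# Writing the raw 8 bytes of the IEEE 754 double and reading them back
# recovers the identical value, bit for bit.
packed = struct.pack("<d", x)
recovered = struct.unpack("<d", packed)[0]
print(recovered == x)  # True
```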

The second take-away is to modulate your expectations. In a just and orderly universe, solving the same model with the same software, on two different platforms and regardless of how you passed it between the software copies, would produce the same solution. A just and orderly universe, however, would not contain politicians. So accept that differences may arise.