Two approximate calculations give the number of atoms in the observable universe to be close to 10^80 (source: Wikipedia). When teaching, I used the hypothesized finiteness of the amount of matter in the universe as a quick rationale for the statement that all practical optimization models should be bounded. (All practical optimization models should be feasible, too, unless the purpose of the model is to show that your bosses are being unreasonable in their demands.) Unbounded models arise from a combination of two factors:
- the analyst does not bother to determine reasonable finite bounds for each decision variable; and
- constraints that should bound the problem are either omitted or formulated incorrectly.
The fancy approach involves getting the solver to cough up a ray along which the feasible region is unbounded and the objective is strictly improving. The easy way is to slap finite but ridiculously loose bounds (both upper and lower) on any variable that is not already bounded, and then solve the model again. Since the feasible region is now bounded, you will get an "optimal" solution. Since the problem was previously unbounded, at least some variables will attain those ridiculously loose bounds.
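The loose-bounds trick can be sketched in a few lines. This is a hypothetical toy model, not anyone's production code: one variable with a per-unit profit and no limiting constraint at all, solved here with SciPy's `linprog` (the numbers, including the 50-trillion bound, are made up for illustration).

```python
# Hedged sketch: a toy "production" LP that is unbounded because no
# constraint limits output, then re-solved with a ridiculously loose
# upper bound so the solver returns an "optimal" solution pinned there.
from scipy.optimize import linprog

c = [-5.0]  # maximize 5 * x  <=>  minimize -5 * x (hypothetical profit/unit)

# With the default lower bound 0 and no upper bound, the LP is unbounded.
res_unbounded = linprog(c, bounds=[(0, None)], method="highs")
# res_unbounded.status == 3 means "unbounded" in SciPy's status codes.

# Slap on a goofy but finite bound: surely not 50 trillion units per day.
LOOSE = 5e13
res_loose = linprog(c, bounds=[(0, LOOSE)], method="highs")
# Now status is 0 (optimal), and x sits exactly at the goofy bound --
# the telltale sign described above.
```

A variable landing on such a bound is the diagnostic signal: it tells you which variable the missing or broken constraint should have been restraining.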
Ask yourself, in the context of the decision problem, why that cannot happen in practice: why you cannot make 50 trillion automobiles per day, or sell 50 trillion iGadgets per day, ... Think not in terms of the model, but in terms of reality. You can't do it because you don't have enough (labor, materials, capacity, capital, demand, ...). Now focus on the constraints that should reflect those limits. If they are missing, add them. If they are present, plug the solution you just got into them and figure out why they failed to prevent it. Remediate and iterate until you can remove the goofy bounds (preferably replacing them with more reasonable bounds) and get an optimal solution.
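Continuing the same hypothetical sketch, suppose the reality check reveals the missing limit is labor: say 2 labor-hours per unit and 200 hours available (numbers invented for illustration). Once that constraint is in the model, the goofy bound can be dropped and the solver returns a sensible finite optimum.

```python
# Hedged sketch: the remediation step. The "missing" constraint here is a
# hypothetical labor capacity (2 hours/unit, 200 hours available), which
# caps output at 100 units.
from scipy.optimize import linprog

c = [-5.0]       # maximize 5 * x (same toy objective as before)
A_ub = [[2.0]]   # 2 labor-hours consumed per unit
b_ub = [200.0]   # 200 labor-hours available

# With the real limit in place, the ridiculous bound is gone (upper
# bound back to None) and the model is bounded anyway.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)], method="highs")
# status 0 (optimal), x == 100: finite, and far below 50 trillion.
```

Plugging the earlier bogus solution (x at 5e13) into the labor constraint makes the failure obvious: 2 * 5e13 hours vastly exceeds 200, so the constraint was missing, not merely slack.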