A former acquaintance from my student days once articulated his strategy for essay exams to me: if you don't know the answer, argue the premise of the question. In that vein, I'm inclined to ask why so many variables are necessary. One possibility is that the model captures decisions for a sequence of time periods. If decisions in one time period had no impact on subsequent periods, the problem would decompose naturally into a bunch of smaller problems; so if we are talking about a multiperiod model, it's safe to assume that the periods are connected.
That brings me to a strategy I learned back in primitive times, the "rolling horizon" approach. Let me stipulate up front that this is a heuristic method. It does not provide a provably optimal solution for the entire time horizon. Still, given the uptick in humongous models, I'm starting to wonder whether rolling horizons are no longer taught (or are looked down on).
The basic concept is simple. The devil is in the details. Let's say we want to solve a planning model over a horizon of $H$ time periods, and that one omnibus model for the entire horizon is proving intractable. The rolling horizon approach is as follows.
- Pick a shorter horizon $K < H$ that is tractable, and a number of periods $F \le K$ to freeze.
- Set "boundary conditions" (more on this below).
- Solve a model for periods $1,\dots,K$, incorporating the boundary conditions.
- Freeze the decisions for periods $1,\dots,F$.
- Solve a model for periods $F+1,\dots, \min(F+K, H)$.
- Repeat ad nauseam, until decisions for all $H$ periods have been frozen.
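To make the bookkeeping concrete, here is a minimal Python sketch of the loop just described. The `solve_subproblem` and `extract_state` callables are hypothetical placeholders for whatever model and solver you actually use; only the rolling-and-freezing logic is shown.

```python
# A minimal sketch of the rolling-horizon loop above. The two callables
# are hypothetical stand-ins for your actual model and solver.

def rolling_horizon(H, K, F, initial_state, solve_subproblem, extract_state):
    """Cover periods 1..H by solving K-period windows and freezing F periods at a time."""
    assert 1 <= F <= K <= H
    state = initial_state      # system state after the last frozen period
    frozen = {}                # period -> committed decision
    start = 1                  # first period of the current window

    while start <= H:
        end = min(start + K - 1, H)
        window = range(start, end + 1)
        # Solve the subproblem for this window, with boundary conditions
        # imposed (inside solve_subproblem) on the state at period `end`.
        decisions = solve_subproblem(window, state)
        # Freeze the first F periods of the window (or whatever remains).
        last_frozen = min(start + F - 1, H)
        for t in range(start, last_frozen + 1):
            frozen[t] = decisions[t]
        # The state after the last frozen period seeds the next window.
        state = extract_state(decisions, last_frozen)
        start = last_frozen + 1

    return frozen
```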
The initial conditions (starting inventory, locations of vehicles, pending orders, ...) for each model are dictated by the state of the system after the last frozen period. The boundary conditions are limits on how much of a mess you can leave at the end of the reduced planning horizon (period $K$ in the first model, $K+F$ in the second model, etc.). More precisely, they limit the terminal state of things that will be initial conditions in the next model.
As a concrete example, consider a manufacturing scheduling model. You start with inventories of components and finished products, available capacity of various kinds, and unfilled orders, and you end with the same kinds of things. Without boundary conditions, your solution for the first model might end period $K$ with no finished goods inventory. Why make stuff if it doesn't count toward your demand within the time frame considered by the model? That might make the problem for periods $K+1$ onward infeasible, though: you have orders that must be filled, they exceed your immediate production capacity, and the cupboard was left bare by the previous solution.
So you want to add constraints of the form "don't leave me with less than this much inventory on hand" or "don't leave me with more than this many unfilled orders". Picking values for those limits is a bit of an art form. Make the limits too draconian and you will get a very suboptimal solution (say, piling up way more inventory than you'll really need) or possibly even make the early subproblems infeasible. Make the limits too laissez faire, and you may force thoroughly suboptimal solutions to later subproblems (piling on lots of expensive overtime, blowing off soon-to-be-former customers) or even make the later problems infeasible. Still, it's usually possible to come up with pretty reasonable boundary conditions, perhaps with some trial and error, and it's a way to get a solution that, if not globally optimal, is at least good (and preferable to staring bleary-eyed at your screen in the 30th hour of a run, wondering if the beast will ever converge).
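As a sketch of what such a boundary condition might look like in practice, here is a toy lot-sizing window built with PuLP. The demand, capacity and inventory numbers are invented purely for illustration; the line to notice is the terminal-inventory constraint.

```python
# A hedged sketch of a "don't leave me with less than this much inventory"
# boundary condition in a single rolling-horizon window, using PuLP.
import pulp

periods = range(1, 5)                   # the K periods in this window
demand = {1: 40, 2: 55, 3: 60, 4: 50}   # hypothetical demand per period
capacity = 70                           # hypothetical per-period capacity
start_inventory = 20                    # initial condition from the last frozen period
min_end_inventory = 30                  # the boundary condition

model = pulp.LpProblem("window", pulp.LpMinimize)
make = {t: pulp.LpVariable(f"make_{t}", lowBound=0, upBound=capacity) for t in periods}
inv = {t: pulp.LpVariable(f"inv_{t}", lowBound=0) for t in periods}

# Objective: a token holding cost, just to give the solver something to minimize.
model += pulp.lpSum(0.5 * inv[t] for t in periods)

# Inventory balance: starting stock plus production, minus demand.
prev = start_inventory
for t in periods:
    model += inv[t] == prev + make[t] - demand[t]
    prev = inv[t]

# Boundary condition: don't leave the next window's model with a bare cupboard.
model += inv[max(periods)] >= min_end_inventory

model.solve()
print({t: make[t].value() for t in periods}, inv[max(periods)].value())
```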
The term "rolling horizon" was coined in reference to problems that are decomposed based on time, but the same concept may extent to problems that can be decomposed based on position. I used it not long ago to get a heuristic for a problem placing nodes in a physical network. In essence, we took the original geographic region and chopped it into pieces, picked a piece to start from, and then worked our way outward from that piece until all the pieces had been solved, each based on the (frozen) solution to the more inward pieces.
Rolling horizon frameworks are very widely used in model predictive control, where the underlying optimization problem is solved for a given horizon. Then the first policy is applied, and the problem is solved again once measurements of the state variables become available. There is quite a large body of literature in that community dedicated to this type of problem, including proofs of stability, optimality, etc., some of which are very intriguing!
Thanks for the comment, Richard. I'm particularly curious about proofs of optimality (or even near optimality), since intuitively I would think the results would depend on how "clever" one was in setting boundary conditions. For some classes of problems, I think you can replace boundary conditions by setting the temporary horizon reasonably far out and applying a discount factor to results (so that the solution is progressively less sensitive to what's going on as you approach the horizon). My gut feeling is that this might be more amenable to proofs of optimality (or at least near optimality). Have you seen optimality proofs that don't require discounting?
Hi Rubin, the proofs of optimality basically prove that from a certain point onwards, the optimal policy will not change anymore. Two great papers on this topic are:
- Mayne, Rawlings, Rao, Scokaert (2000) Constrained model predictive control: Stability and optimality (http://www.sciencedirect.com/science/article/pii/S0005109899002149)
- Scokaert and Rawlings (1998) Constrained linear quadratic regulation (http://ieeexplore.ieee.org/abstract/document/704994/).
The general assumptions there are linear dynamics and constraints, and equal weighting on all horizon steps. In practice, however (at least in MPC), one is more interested in stability, i.e., that any control action taken will keep the state variables in the same feasible space, and thus ensure stability of the controller.
Thanks for the explanation and the references. Shooting from the hip, I suspect that this would be hard to extend to discrete problems ... but I could be wrong. (It would not be the first time.)