Comments on "OR in an OB World: Rolling Horizons" (Paul Rubin)

Paul Rubin (2017-08-23):
Thanks for the explanation and the references. Shooting from the hip, I suspect that this would be hard to extend to discrete problems ... but I could be wrong. (It would not be the first time.)

Richard Oberdieck (2017-08-22):
Hi Paul, the proofs of optimality basically show that from a certain point onward, the optimal policy no longer changes. Two great papers on this topic are:

- Mayne, Rawlings, Rao, and Scokaert (2000), "Constrained model predictive control: Stability and optimality" (http://www.sciencedirect.com/science/article/pii/S0005109899002149)
- Scokaert and Rawlings (1998), "Constrained linear quadratic regulation" (http://ieeexplore.ieee.org/abstract/document/704994/)

The general assumptions there are linear dynamics and constraints, with equal weighting on all horizon steps. In practice, however (at least in MPC), one is more interested in stability, i.e., that any control action taken keeps the state within its feasible region, thereby making the controller stable.

Paul Rubin (2017-08-21):
Thanks for the comment, Richard. I'm particularly curious about proofs of optimality (or even near-optimality), since intuitively I would expect the results to depend on how "clever" one was in setting boundary conditions. For some classes of problems, I think you can replace boundary conditions by setting the temporary horizon reasonably far out and applying a discount factor to the results (so that the solution is progressively less sensitive to what goes on as you approach the horizon). My gut feeling is that this might be more amenable to proofs of optimality (or at least near-optimality). Have you seen optimality proofs that don't require discounting?

Richard Oberdieck (2017-08-21):
Rolling-horizon frameworks are very widely used in model predictive control, where the underlying optimization problem is solved for a given horizon. The first policy is then applied, and the problem is solved again once measurements of the state variables become available. There is quite a large body of literature in that community dedicated to this type of problem, including proofs of stability, optimality, etc., some of which are very intriguing!
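The rolling-horizon loop Richard describes (solve over a horizon, apply only the first control, then re-solve once the new state is observed) can be sketched for a toy unconstrained scalar linear-quadratic problem. Everything below (the system, the weights, the horizon length) is an illustrative assumption of mine, not something taken from the comments or the papers cited above.

```python
# Rolling-horizon sketch for a scalar linear system x[k+1] = a*x[k] + b*u[k]
# with quadratic stage cost q*x**2 + r*u**2. All parameter values are
# illustrative assumptions.

def first_stage_gain(a, b, q, r, horizon):
    """Solve the finite-horizon LQ problem by backward Riccati recursion
    and return the feedback gain of the *first* stage only -- the one
    piece of the horizon solution a rolling-horizon controller uses."""
    p = q          # terminal cost weight (here simply the stage weight)
    gain = 0.0
    for _ in range(horizon):
        gain = (a * b * p) / (r + b * b * p)
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return gain

def rolling_horizon(x0, steps=30, horizon=5, a=1.1, b=1.0, q=1.0, r=0.1):
    """At every step: re-solve the horizon problem, apply only the first
    control, then 'measure' the resulting state and repeat."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        # For this time-invariant, unconstrained problem the re-solved
        # policy is the same at every step (cf. the optimality results
        # discussed above); the re-solve is still shown here because it
        # is the essence of the rolling-horizon scheme.
        u = -first_stage_gain(a, b, q, r, horizon) * x
        x = a * x + b * u
        trajectory.append(x)
    return trajectory

traj = rolling_horizon(5.0)
print(traj[0], traj[-1])  # a > 1 makes the open loop unstable, yet the
                          # closed-loop state is driven toward zero
```

With constraints added, each re-solve becomes a small optimization problem rather than a Riccati recursion, but the solve / apply-first-move / re-measure pattern is unchanged.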