I read an article today (in an alumni magazine) about successes in the physical sciences and engineering that occurred, either by serendipity or hard work, after one or more significant failures. The underlying thesis was that funding agencies have become overly risk averse, and their unwillingness to fund projects that do not look like fairly sure things may be inhibiting new discoveries.
That got me thinking about failure rates in O.R. (and, to some extent, what it means to fail in O.R.). I'd like to avoid the recent debates about what we mean by "O.R." and take an inclusive view here, one that spans "pure" research (which in my experience usually means developing an understanding of the mathematical/statistical properties underlying common classes of models and algorithms), "applied" research (which for me is mainly algorithm construction, and perhaps model tuning) and application (solving problems).
I'm not sure to what extent pressure to extract grant money from risk-averse sources is an issue, but I think that in the academic world the "publish or perish" mentality, and the related formula pay raise = pittance + lambda*(recent pubs), pushes professors into doing very incremental, sure-fire work rather than taking shots at problems that could require years, if not decades, of effort and might never bear fruit. In the business world, O.R. applications are typically (exclusively?) decision support endeavors, and informed decisions need to be made now, not a decade down the road when some algorithmic breakthrough occurs. So I think we are collectively quite risk averse, but the consequences are impossible to measure.
The other question that struck me is how we define failure in O.R. We build models that approximate reality, and if close approximations prove intractable, we can usually make looser approximations (subject to an occasional bit of derision from actual decision makers, or from academics who profess a closer tie to reality, which is to say, not economists). On the algorithmic side, if we cannot find the optimal solution, the precise steady-state distribution or average queue length, or what have you, we can usually come up with an approximation, bound, or heuristic solution and declare victory (and, in the case of consultants, pick up the last check). The closest thing to failure that I can point to in my own experience is having a manuscript rejected by a journal, and even then it's usually a partial failure: I shop the manuscript to another journal lower in the pecking order and iterate until acceptance. For practitioners, I suspect the most common manifestation of failure is producing a solution that is never utilized (something I've also experienced).
So, circling back to the original question, are there things we should be doing but are not, things that might directly or indirectly pay some significant social dividend?