My adopted state of Michigan is currently afflicted with the Republican presidential primary. (Symptoms include repetitious attack ads on television, robocalls to one's house, and the general malaise associated with staring at any crop of candidates for political office.) Primaries tend to draw out "base" voters (those committed to one party or the other); we swing voters just stay at home, hiding under the covers until it is over.
Last night the local TV news included a sound bite from a generic Republican voter, an apparently intelligent and articulate woman (to the extent one can judge these attributes from a two-sentence interview) who said she was still undecided because she wanted to vote for the "most conservative" candidate. The logic, or lack of logic, behind that statement caused me to take notice of the similarities between how some "base" voters think and common errors in operations research.
A single criterion is easy, but multiple criteria may be correct. There are quite a few pressing issues these days, ranging from foreign policy to budget deficits to global warming to unemployment to ... (I'll stop there; I'm starting to depress myself). Our base voter, henceforth Mme. X, has apparently condensed these criteria down to a single value, on a scale from hard-core liberal (arbitrarily 0) to hard-core conservative (arbitrarily 1). What is not apparent is how the multiple dimensions were collapsed to a single one. OR people know that multiple-criteria optimization is hard, more from a conceptual standpoint than from a computational one. Using a single composite criterion (weighted sum of criteria, distance from a Utopia point in some arbitrary metric, ...) makes the computational part easier, but there are consequences (frequently hidden) to the choice of the single criterion. Goal programming has its own somewhat arbitrary choices (aspiration levels, priorities) which again can have surprising consequences. Picking the "most conservative" candidate simplifies the cognitive process but may lead to buyer's remorse. Similarly, arbitrarily collapsing multiple objectives into a single objective may simplify modeling, but may produce solutions that do not leave the client happy.
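To make the hidden choice concrete, here is a minimal sketch, with invented candidate ratings and invented weights, of a weighted-sum scalarization. The point is that two equally defensible weightings of the same ratings can crown different "most conservative" candidates:

```python
# Hypothetical illustration: collapsing multiple criteria into a single
# weighted-sum score. Both the issue ratings and the weights below are
# invented for this example; the weights are the hidden modeling choice.

candidates = {
    "Candidate 1": {"foreign policy": 0.9, "deficit": 0.4, "unemployment": 0.6},
    "Candidate 2": {"foreign policy": 0.5, "deficit": 0.8, "unemployment": 0.7},
}

def composite_score(ratings, weights):
    """Weighted-sum scalarization: one number per candidate."""
    return sum(weights[issue] * r for issue, r in ratings.items())

def winner(weights):
    """Candidate with the highest composite (most 'conservative') score."""
    return max(candidates, key=lambda c: composite_score(candidates[c], weights))

# Two plausible weightings, two different winners.
print(winner({"foreign policy": 0.6, "deficit": 0.2, "unemployment": 0.2}))  # Candidate 1
print(winner({"foreign policy": 0.2, "deficit": 0.5, "unemployment": 0.3}))  # Candidate 2
```

Neither weighting is "wrong"; the answer simply depends on a choice that the single-number summary conceals.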
Averages can be deceptive. Point estimates also make modeling and decision making easier, but they can mask important things. (A colleague has a favorite, if politically incorrect, quotation: "Statistics are like bikinis. What they reveal is interesting, but what they conceal is critical.")
Suppose that Mme. X has narrowed her choices down to two candidates, and that they have both weighed in on five important issues (A through E). If candidate 1 is consistently to the right of candidate 2 on all issues, we have a dominated solution: Mme. X can eliminate candidate 2 and vote for candidate 1. On the other hand, consider the following scenario, where each candidate's position is rated on a scale from 0 (liberal) to 1 (conservative).
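The original ratings table is not reproduced here, but a hypothetical set of numbers (invented for illustration) captures the point: neither candidate dominates the other, and the two candidates' average ratings are identical even though their issue-by-issue profiles differ sharply.

```python
# Hypothetical ratings on five issues (A through E), each on a scale
# from 0 (liberal) to 1 (conservative). The numbers are invented to
# illustrate dominance checks and how averages can deceive.

ratings = {
    "Candidate 1": [0.9, 0.9, 0.3, 0.3, 0.6],  # hard right on A-B, left on C-D
    "Candidate 2": [0.6, 0.6, 0.6, 0.6, 0.6],  # uniformly moderate-conservative
}

def dominates(a, b):
    """True if profile a is at least as conservative as b on every issue
    and strictly more conservative on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

c1, c2 = ratings["Candidate 1"], ratings["Candidate 2"]
print(dominates(c1, c2), dominates(c2, c1))          # False False: neither dominates
print(round(sum(c1) / 5, 3), round(sum(c2) / 5, 3))  # both average 0.6
```

The identical 0.6 averages would suggest the candidates are interchangeable, while the underlying profiles tell Mme. X two very different stories, which is exactly what a point estimate conceals.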
A solution that goes unimplemented is not a solution. Missing in Mme. X's search for the most conservative candidate is the quality referred to by pundits as "electability". Neither major political party claims a majority of registered voters in the U.S., so to win a general election, a candidate must capture a significant number of moderates and independents. The most ideologically pure candidate (for either party) may not be able to do so. This is a bit of a paradox in recent elections, where candidates find that they must appeal to "base" voters at one end of the political spectrum to get the nomination, then appeal to voters in the middle of the spectrum to win the election. Ideological "base" voters may not grasp this particular reality; they expect the "correctness" of their candidate's views (which are also their views) to triumph. [This may be at least partly explained by the false consensus effect.]
OR modelers sometimes have a similar blind spot. We can pursue perfection at the expense of good answers. We can opt for the approach that uses the most sophisticated or "elegant" mathematics or the most high-powered solution technique available. We may try for more scope or more scale in a project than what we can accomplish in a reasonable time frame (or what users can realistically cope with, in terms of data requirements and solution complexity). Professional journals often encourage this trend by requiring "novel" solution methods in order to publish a paper. The end result can be a really impressive solution that sits on a shelf because the client is unwilling or unable to implement it, or because it is too complex for the client to understand and trust.
Garbage in, garbage out. OR models rely on data, as inputs to the decision process or to calibrate parameters of the model. Feed bad data to an otherwise correct model and no good will come of it. I have seen estimates that as much as 60% of the time in an OR project can be spent cleaning the data.
Meanwhile, Mme. X has to rely on a variety of unreliable sources to gauge how conservative each candidate may be. Candidates famously say things they may not entirely believe, or express intentions they may not carry out, either in an overt effort to curry favor with voters or because their views change between campaigning and governing. Historical data may be faked or misreported, and sometimes facts may not be what they seem. For instance, a generally pro-military candidate might vote against a military appropriation bill because there is a rider on it that would fund an inordinately wasteful project, or something unpalatable to the candidate and/or the candidate's constituents. Opponents will characterize this as an anti-military stance. Budget projections, and indeed any sort of projections, are subject to forecast errors, so a candidate's magical plan to fix deficits/unemployment/Mme. X's dripping kitchen faucet may turn out not to be so magical after all. Unfortunately for Mme. X, she probably has less ability to filter and correct bad data than an OR analyst typically does.
So, in conclusion, voters and OR analysts face similar challenges ... but OR analysts do not have to cope with a glut of robocalls.