
Monday, September 13, 2021

Helping Babies

Before I get to what's on my mind, let me start by giving a shout-out to the Resoundingly Human podcast from INFORMS. In each episode, host Ashley Kilgore interviews one or more OR academics or professionals about interesting recent work. Frequently, though not always, the subject matter pertains to a recent or forthcoming publication, but the discussions never get very technical, so they are suitable for a general audience. Episodes typically run in the vicinity of 15 to 20 minutes, which I find enough for a full exposition but not long enough to be tedious. They are generally quite well done.

What motivated me to post was the most recent episode, "Helping valuable donor milk reach infants in need", which I listened to while walking on a nature trail, feeding mosquitoes. The guest was Professor Lisa Maillart of the University of Pittsburgh, discussing a model she and colleagues developed to help donor milk banks convert donations into products for end use. If you are not familiar with milk banks (I actually was, not sure why), think "blood bank" but with donated breast milk replacing donated blood and babies in need replacing surgical patients in need. For greater detail, I recommend listening to the episode (linked above, approximately 20 minutes duration).

The application is the sort of thing that tends to give one warm fuzzy feelings. (Who doesn't like babies?) Moreover, during the podcast Prof. Maillart made a point that I think bears some reflection. I have not seen their model (the paper is not out yet), but from the sounds of it I think we can assume it is a production planning/blending model that likely resembles other models with which many of us are familiar. Early results from implementing the model seem to have produced substantial gains for the milk bank with which the authors collaborated. Prof. Maillart noted that, given the variety of constraints involved and the number of decisions to be made, scheduling production manually (based on experience, intuition or maybe just guesswork or subconscious heuristics) is challenging. In other words, for the non-OR person doing the planning, it is a "hard" problem, in part due to the number of things to be juggled. To an OR person, it may not seem that hard at all.

For me, at least, "hard" means difficult to formulate, because the problem involves complicated constructs or probabilistic/squishy elements, or because something is nonlinear (and not easily approximated); or difficult to solve, because feasible solutions are in some way tough to find, or because the minimum possible scale (after decomposing or clustering or whatever it takes to get the dimensions down) still overwhelms the hardware and software. (At this point I'll stop and confess that my perspective inevitably is that of an optimizer, as opposed to a simulator or a stochastic process <insert your own noun ending in "er" here -- I'm at a loss>.) For a typical person, going from five constraints of a particular sort (for instance, capacity limits on individual workers) to ten can be "hard". For an OR person, it just means the index range of a single constraint changes.
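To make that last point concrete, here is a minimal sketch in Python with PuLP. Everything in it (workers, tasks, hours, rewards) is invented for illustration, and it is emphatically not the milk bank model; it just shows that a capacity constraint is written once, so going from five workers to ten means changing an index range:

```python
# A toy sketch, NOT the milk bank model: all data below is invented.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

workers = [f"w{i}" for i in range(1, 6)]  # five workers; make it range(1, 11) and
tasks = [f"t{j}" for j in range(1, 9)]    # ... nothing else in the model changes
hours = {(w, t): 3 for w in workers for t in tasks}  # time worker w needs on task t
capacity = {w: 8 for w in workers}                   # hours available per worker
reward = {t: 10 for t in tasks}                      # value of completing task t

# x[w, t] = fraction of task t assigned to worker w
x = {(w, t): LpVariable(f"x_{w}_{t}", lowBound=0, upBound=1)
     for w in workers for t in tasks}

model = LpProblem("toy_planning", LpMaximize)
model += lpSum(reward[t] * x[w, t] for w in workers for t in tasks)

# One constraint *template*, written once; the index range supplies the rest.
for w in workers:
    model += lpSum(hours[w, t] * x[w, t] for t in tasks) <= capacity[w], f"cap_{w}"

# Each task assigned at most once (splitting a task among workers is allowed here).
for t in tasks:
    model += lpSum(x[w, t] for w in workers) <= 1, f"cover_{t}"

model.solve()
```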

After listening to the episode, I am left wondering (not for the first time) how often people in the "real world" stare at "hard" problems that would seem relatively straightforward to an OR person ... if only we knew about them, which usually requires that the problem owner know about us. INFORMS is working to publicize both itself and the work of the OR community, but I think we still fly below almost everybody's radar.

Saturday, May 23, 2020

Of ICUs and Simulations

I'm a fan of the INFORMS "Resoundingly Human" podcasts, particularly since they changed the format to shorter (~15 minute) installments. I just listened to a longer entry (40+ minutes) about the use of OR (and specifically simulation models) to help with hospital planning during the pandemic. (Grrrr. I'd hoped to keep the word "pandemic" out of my blog. Oh well.) The title is "The dangers of overcrowding: Helping ICUs preserve essential bed space", and the guest is Frances Sneddon, CTO of Simul8 Corporation. I thought the content was interesting, and Frances was very enthusiastic presenting it, so if you have any interest in simulation and/or how OR can help during the (here it comes again) pandemic, I do recommend giving it a listen.

One thing that definitely got my attention was Frances's emphasis on building simulation models in a rapid / interactive / iterative / agile way. ("Rapid" was very much her word, and she used "agile" toward the end of the podcast. "Interactive" and "iterative" are my words for the process she described.) Basically (again with my paraphrasing), she said that the best outcomes occur when simulations are born from discussions among users and modelers where the users ask questions, followed by fairly rapid building and running of a new/revised model, followed by discussions with the users and more of the same. Frances at one point drew an analogy to detective work, where running one simulation lets you ferret out clues that lead to questions that lead to the next model.

To some extent, I think the same likely holds true of other applications of OR in the real world, including optimization. Having one conversation with the end users, wandering off into a cave to build a model, and then presenting it as a fait accompli is probably not a good way to build trust in the model results, and it may well leave the user with a model that fundamentally does not get the job done. As a small example, I once worked on a model for assigning school-age children to recreational league athletic teams. The first version of the model satisfied all stated constraints, but the user told me it would not work. Some parents have multiple children enrolled in the league, and it is unworkable to expect them to ferry their kids to different teams playing or practicing in different places. So siblings must go on the same team. (There were other constraints that emerged after the initial specification, but I won't bore you with the details.)
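For what it's worth, once the model is in algebraic form, the sibling requirement is nearly a one-liner. A sketch, with hypothetical children and teams and only the feasibility constraints (the real model had an objective and other constraints I'm glossing over):

```python
# Hypothetical names and teams; only the feasibility constraints are shown.
from pulp import LpBinary, LpProblem, LpVariable, lpSum

children = ["Ann", "Bob", "Cal", "Dee"]
teams = ["Red", "Blue"]
siblings = [("Ann", "Bob")]  # pairs that must end up on the same team

# x[c, t] = 1 if child c is assigned to team t
x = {(c, t): LpVariable(f"x_{c}_{t}", cat=LpBinary)
     for c in children for t in teams}

model = LpProblem("team_assignment")  # pure feasibility; no objective needed

for c in children:  # every child joins exactly one team
    model += lpSum(x[c, t] for t in teams) == 1

for a, b in siblings:  # siblings must match, team by team
    for t in teams:
        model += x[a, t] == x[b, t]

model.solve()
```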

So on the one hand, I'm predisposed to agree at least somewhat with what Frances said. Here comes the cognitive dissonance (as my erstwhile management colleagues would say). Once upon a time I actually taught simulation modeling. (I won't say exactly when, but in the podcast Frances mentions having been in the OR field for 20 years, and how saying that makes her feel old. The last time I taught simulation was before she entered the field.) Two significant issues, at least back then, were verifying and validating simulation models. I suspect that verification (essentially checking that the code correctly implements the model) is a lot easier now, particularly if you are using GUI-based model design tools, where the coding looks a lot like drawing a flow chart from a palette of blocks. The model likely was also presented as a flow chart, so comparing code to model should be straightforward (put the two flow charts side by side). Validation, the process of confirming that the model adequately represents the real system, may or may not be easier than in the past. To some extent you can achieve "face validity" by talking through the assumptions of the model with the users during those interactive sessions, helped by a flow chart.

Back in my day, we also talked about historical validation (running the model with historical inputs and seeing if the results reasonably tracked with historical outputs). When you are trying to answer "what if" questions (what if we reconfigure the ICU this way, or change admissions this way, or ...?), you likely don't have historical data for the alternate configurations, but you can at least validate that the model adequately captures the "base case", whatever that is. Also, "what if" questions are likely to lead you down paths for which you lack hard data for parameter estimates. What if we build a tent hospital in Central Park (which has never been done before)? What do we use for the rate at which patients experience allergy attacks (from plant life in the park that simply does not exist inside the hospital)? My guess is that your only recourse is to run the simulation for multiple values of the mystery parameter, which leads us to a geometric explosion of scenarios as we pile on uncertain parameters. So my question is this: in an interactive loop (meet with users - hack model - collect runs / synthesize output - repeat), can we take reasonable care to preserve validity without exhausting the parties involved, or overloading them with possibilities to the point that there is no actual take-away?
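To put a rough number on that explosion: with k uncertain parameters tried at m values each, a full sweep takes m^k base runs before you even add replications. A trivial sketch, with made-up parameter names and values:

```python
# Hypothetical parameters and values for a tent-hospital simulation.
from itertools import product

allergy_rate = [0.01, 0.05, 0.10]   # attacks per patient-day (pure guesses)
arrival_surge = [1.0, 1.5, 2.0]     # multiplier on historical arrivals
stay_days = [4, 7, 10]              # mean length of stay

scenarios = list(product(allergy_rate, arrival_surge, stay_days))
print(len(scenarios))  # 3**3 = 27 runs; a fourth parameter at 3 values makes it 81

for rate, surge, stay in scenarios:
    pass  # run_simulation(rate, surge, stay) would go here, with replications
```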

Informed opinions are welcome in the comments section. (It's an election year here, so I'm already maxed out on uninformed opinions.)

Tuesday, January 10, 2017

Pro Bono Analytics Is Growing Social

Pro Bono Analytics is a program by INFORMS (the Institute for Operations Research and the Management Sciences, for the acronym-averse), "the largest society in the world for professionals in the field of operations research (O.R.), management science, and analytics". PBA "connects our members and other analytics professionals with nonprofit organizations working in underserved and developing communities". In other words, we hook up charitable organizations needing analytics or OR help with volunteers willing to provide it without compensation. Our volunteers are a mix of industry practitioners, academics, students and the occasional geezer retiree. I think the majority of the volunteer pool comes from the U. S., and a majority are INFORMS members, but we have volunteers from as far away as Australia, and a significant portion are non-members.

I'd encourage anyone with OR/MS/analytics skills to consider volunteering, and anyone who knows a charitable organization (particularly those of limited financial means) to let them know we're out there willing to extend a helping hand. Both potential clients and potential volunteers can find out more, and signify their interest, at the PBA home page (repeating the link above).

Our new (and apparently energetic) staff liaison, Tia Carrai, has expanded PBA's social media footprint, so you can now find us on a number of social media platforms.
I'd also like to give a shout-out to Pro Bono O.R., a like-minded initiative by our sister institution in the U. K., the Operational Research Society.

Monday, September 23, 2013

INFORMS 2013: Just Around the Corner

We're within about two weeks of the 2013 INFORMS Annual Meeting in beautiful Minneapolis, Minnesota. (Thank heavens for Google Maps. The atlas I had as a kid growing up in New York just said "Here Be Dragons" somewhere in the blank space west of Detroit.) I'll be a guest blogger during the meeting (October 6 through October 9), so I may be even less productive than usual on this blog. If you are curious, my first (pre-)conference post is already up (unless someone gets embarrassed and takes it down).

Friday, November 11, 2011

Blogging at INFORMS 2011

I volunteered to be a guest blogger at the 2011 INFORMS national meeting in Charlotte, NC. I haven't even hopped a plane yet, and I've already spammed the blog (er, posted) once. You can find the post, if you're curious, at http://meetings2.informs.org/charlotte2011/blog/?p=145.  There is quite a cast of bloggers for the meeting, so if you are attending (or just interested), I recommend subscribing to the RSS feed.

Saturday, July 16, 2011

Facts, Beliefs ... and Budgets

Melissa Moore (@mooremm), Executive Director of INFORMS, recently tweeted the following question: "What #OR tools would you use if you were asked to help solve the US Federal #debt impasse?" My initial reaction (after verifying that a flammenwerfer is not considered an OR tool) was that OR and analytics would be of no use in the budget debate (debacle?). OR and analytics rely on facts and logic; it is unclear that either side of the debate is interested in facts or willing to be constrained by logic.

The question did set me to thinking about the difference between facts and beliefs.  I have a hard time sorting out when demagogues, whether politicians or media bloviators, are espousing positions they actually believe and when they are simply pandering for ratings/votes.  (My cynicism is hard won: I grew up in New York, went to school in New Jersey, and cast my first vote to reelect Richard M. Nixon.  It's been downhill from there.)  For the sake of argument, let's stipulate that both sides are acting on beliefs they truly hold.  When I was younger it seemed to me that, however venal either side's motives might be, both the left and the right were capable of negotiating based on some common understanding of governance and the political, social and economic realities of the country they governed.  It's hard to trade horses, though, when one side can't tell a horse from a zebra and the other can't tell a horse from a camel. Today, one party thinks that the answer to any question that does not contain the phrase "gay marriage" is "cut taxes".  The other side thinks that the answer to any question that does not contain the phrase "gay marriage" is "tax the rich".  That the proposed solution might not work is simply inconceivable (as is the possibility that the other side's solution might work).

The somewhat unnerving truth, however, is that everything we think we know as a fact (raw data aside) is ultimately a belief. My training is in mathematics. Casual users of mathematics, and even forgetful mathematicians, tend to think that what has been "proved" (i.e., a theorem) is definitively true. In reality, theorems are merely statements that must follow logically from a set of axioms (beliefs). The system of logic we accept is itself a matter of belief, but in the interest of avoiding a painful flashback to an undergraduate formal logic course I'll drop that line of thought right now. As in mathematics, so too in the physical sciences: theory arises from a mix of assumptions and empirical evidence; when new evidence clashes with the theory, modifications are made; and when the modifications become untenable, some assumption is altered or deleted and the theory is rebuilt. (Remember when the speed of light was a constant?)

So if mathematics and the physical sciences are built on leaps of faith, we really cannot fault elected representatives (and economists) for doing the same. What we perhaps can demand, though, is that these beliefs at least be acknowledged as beliefs (not "proven facts"), and that decision makers attempt to examine the likely impact of any of those beliefs turning out false. As a parallel (pun deliberate), consider Euclid's Elements, written ca. 300 BC, in which Euclid developed many theorems of what we now refer to as "Euclidean" geometry based on five postulates. The postulates appear self-evident, and mathematicians over the centuries tried unsuccessfully to derive one from the others (which would have turned the derived one into a theorem). In the 19th century, Nikolai Lobachevsky famously replaced Euclid's fifth postulate with a negation of it, perhaps hoping to prove the fifth postulate from the others by contradiction. Rather than finding a contradiction, he invented hyperbolic geometry, which is not only consistent as a mathematical system but has actually found use (those bleeping physicists again).

So, back to the original question: can OR bring any useful tools to bear on the budget debate? With enough time and effort, and exploiting the systems perspective that underlies OR, perhaps we could diagram out the interplay of all the assumptions being made (consciously or unconsciously) by each side; and perhaps, using simulation models based on those assumptions and calibrated to historical data, we could explore the consequences of each side's preferred solution (or, for that matter, any compromise solution) should any specific assumption not hold up. It would be a massive undertaking, and I am not confident it would be productive in the end. Zealously held beliefs will not yield easily to "what if" analyses.

Sunday, November 14, 2010

INFORMS Debrief

The national INFORMS meeting in Austin is in the rear-view mirror now (and I'm staring at the DSI meeting in San Diego next week -- no rest for the wicked!), so like most of the other bloggers there I feel obligated to make some (in my case random) observations about it.  In no specific order ...
  • The session chaired by Laura McLay on social networking was pretty interesting, particularly as we got an extra guest panelist (Mike Trick joined Anna Nagurney, Aurelie Thiele and Wayne Winston, along with Laura) at no extra charge. Arguably the most interesting thing from my perspective was the unstated assumption that those of us with blogs would actually have something insightful to say. I generally don't, which is one reason I hesitated for a long time about starting a blog. 
  • On the subject of blogs, shout-out to Bjarni Kristjansson of Maximal Software, who more or less badgered me into writing one at last year's meeting. Bjarni, be careful what you wish for! :-)  It took me some digging to find Bjarni's blog (or at least one of them). Now if only I could read Icelandic ...
  • I got to meet a few familiar names (Tallys Yunes, Samik Raychaudhuri) from OR-Exchange and the blogosphere.
  • Thanks to those (including some above) who've added my blog to their blogrolls.  The number of OR-related blogs is growing pretty quickly, but with growth come scaling issues. In particular, I wonder if we should be looking for a way to provide guidance to potential readers who might be overwhelmed with the number of blogs to consider. I randomly sample some as I come across them, but if I don't see anything that piques my interest in a couple or so posts I'm liable to forget them and move on, and perhaps I'm missing something valuable. This blog, for instance, is mostly quantitative (math programming) stuff, but with an occasional rant or fluff piece (such as this entry).  I wonder if we could somehow publish a master blog roll with an associated tag cloud for each blog, to help clarify which blogs contain which sorts of content.
Getting off the social network tram for a bit ...
  • A consistent bug up my butt about INFORMS meetings is their use of time. Specifically, the A sessions run 0800 to 0930, followed by coffee from 0930 to 1000. There is then an hour that seems to be underutilized (although I think some plenaries and other special sessions occur then -- I'm not sure, as I'm not big on attending plenary talks). The B sessions run 1100 to 1230 and the C sessions run 1330 to 1500. So we have an hour of what my OM colleagues call "inserted slack" after the first coffee break, but only one hour to get out of the convention center, find some lunch and get back. It would be nice if the inserted slack and the B session could be flipped, allowing more time for lunch. My guess is that the current schedule exists either to funnel people into plenaries (who might otherwise opt for an extended lunch break) or to funnel them into the exhibits (or both).
  • We're a large meeting, so we usually need to be in a conference center (the D.C. meeting being an exception). That means we're usually in a business district, where restaurants that rely on office workers for patronage often close on Sundays (D.C. and San Diego again being exceptions). Those restaurants that are open on Sunday seem to be caught by surprise when thousands of starving geeks descend en masse, pretty much all with a 1230 - 1330 lunch hour. For a society that embraces predictive analytics, we're not doing a very good job of communicating those predictions to the local restaurants. (Could INFORMS maybe hire a wiener wagon for Sundays?)
  • The layout of the Austin convention center struck me as a bit screwy even without the construction. For those who weren't there: to get from the third floor to the fourth floor, you had to take an escalator to the first floor (sidebar: the escalator did not stop at the second floor), walk most of the length of the facility, go outside and walk the rest of the length, go back inside and take an escalator to the fourth floor (sidebar: this escalator did not stop at either of the two intervening floors). Someone missed an opportunity to apply the Floyd-Warshall algorithm and embed shortest routes between all nodes in the map we got (a minimal sketch follows this list).
  • The sessions I attended ranged from fairly interesting to very interesting, so I have absolutely no complaints about that. I also had good luck networking with people I wanted to see while I was there. Since sessions and networking are my two main reasons for attending a conference (my own presentation ranks a distant third), it was well worth the trip.
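Since I brought up Floyd-Warshall, here is a minimal sketch of it in Python, on a made-up four-landmark floor graph rather than the actual Austin layout, computing all-pairs shortest paths in O(n^3) time:

```python
# Made-up four-landmark floor graph; INF marks "no direct route".
INF = float("inf")
dist = [
    [0,   5,   INF, 20],   # direct walking times between landmarks
    [5,   0,   3,   INF],
    [INF, 3,   0,   4],
    [20,  INF, 4,   0],
]

n = len(dist)
for k in range(n):          # allow landmark k as an intermediate stop
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[0][3])  # 12: via landmarks 1 and 2, versus 20 walking "directly"
```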
The conference, like all its predecessors, allowed me to collect some new observations that add to the empirical evidence supporting my theories of Geek Physics (which depart from classical Newtonian physics in a few regards). In particular:
  • A geek in motion will come to rest at the location that maximizes interdiction of other traffic (particularly purposeful traffic).
  • A geek at rest will remain at rest until someone bearing a cup of coffee enters their detection range, at which point they will sharply accelerate on an optimized collision course.
  • A geek entering or exiting a session in progress will allow the door to slam shut behind them with probability approaching 1.0. (Since I have no empirical evidence that geeks are hard of hearing, I attribute this to a very shallow learning curve for things outside the geek's immediate discipline, but I'm having trouble collecting sufficiently specific data to test that hypothesis.)
If anyone has observed other laws of Geek Physics that I'm missing, I'd be interested in hearing them.

Enough about the conference, at least for now. It was interesting, I enjoyed interacting with some people I otherwise only see online, and I brought home a bunch of notes I'll need to sort through. I'm looking forward to next year's meeting.