I'm a fan of the INFORMS "Resoundingly Human" podcasts, particularly since they changed the format to shorter (~15 minute) installments. I just listened to a longer entry (40+ minutes) about the use of OR (and specifically simulation models) to help with hospital planning during the pandemic. (Grrrr. I'd hoped to keep the word "pandemic" out of my blog. Oh well.) The title is "The dangers of overcrowding: Helping ICUs preserve essential bed space", and the guest is Frances Sneddon, CTO of Simul8 Corporation. I thought the content was interesting, and Frances was very enthusiastic presenting it, so if you have any interest in simulation and/or how OR can help during the (here it comes again) pandemic, I do recommend giving it a listen.
One thing that definitely got my attention was Frances's emphasis on building simulation models in a rapid / interactive / iterative / agile way. ("Rapid" was very much her word, and she used "agile" toward the end of the podcast. "Interactive" and "iterative" are my words for the process she described.) Basically (again with my paraphrasing), she said that the best outcomes occur when simulations are born from discussions among users and modelers where the users ask questions, followed by fairly rapid building and running of a new/revised model, followed by discussions with the users and more of the same. Frances at one point drew an analogy to detective work, where running one simulation lets you ferret out clues that lead to questions that lead to the next model.
To some extent, I think the same likely holds true of other applications of OR in the real world, including optimization. Having one conversation with the end users, wandering off into a cave to build a model, and then presenting it as a fait accompli is probably not a good way to build trust in the model results, and it may well leave the user with a model that fundamentally does not get the job done. As a small example, I once worked on a model for assigning school-age children to recreational league athletic teams. The first version of the model satisfied all the stated constraints, but the user told me it would not work. Some parents have multiple children enrolled in the league, and it is unworkable to expect them to ferry their kids to different teams playing or practicing in different places. So siblings must go on the same team. (There were other constraints that emerged after the initial specification, but I won't bore you with the details.)
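Just to make that concrete, here is a minimal sketch (in Python with PuLP, purely for illustration; the children, teams, roster limit and sibling pair are all invented, and this is not the actual model from that project) of how the "siblings on the same team" requirement might be bolted onto a simple assignment model.

```python
import pulp

# Hypothetical data: six children, two teams, one sibling pair.
children = ["Ann", "Bob", "Cal", "Dee", "Eli", "Fay"]
teams = ["Red", "Blue"]
siblings = [("Ann", "Bob")]   # pairs that must end up on the same team
max_roster = 3

prob = pulp.LpProblem("team_assignment", pulp.LpMinimize)

# x[c][t] = 1 if child c is assigned to team t.
x = pulp.LpVariable.dicts("x", (children, teams), cat="Binary")

# Placeholder objective (a real model would balance skill, travel time, etc.).
prob += pulp.lpSum(x[c][teams[0]] for c in children)

# Each child is assigned to exactly one team.
for c in children:
    prob += pulp.lpSum(x[c][t] for t in teams) == 1

# Roster size limit for each team.
for t in teams:
    prob += pulp.lpSum(x[c][t] for c in children) <= max_roster

# The constraint that surfaced only after talking with the user:
# siblings go on the same team.
for a, b in siblings:
    for t in teams:
        prob += x[a][t] == x[b][t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in children:
    for t in teams:
        if pulp.value(x[c][t]) > 0.5:
            print(c, "->", t)
```

The point is not the particular formulation; it's that one equality constraint per sibling pair per team was all it took, and I would never have known to add it without going back to the user.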
So on the one hand, I'm predisposed to agree at least somewhat with what Frances said. Here comes the cognitive dissonance (as my erstwhile management colleagues would say). Once upon a time I actually taught simulation modeling. (I won't say exactly when, but in the podcast Frances mentions having been in the OR field for 20 years, and how saying that makes her feel old. The last time I taught simulation was before she entered the field.) Two significant issues, at least back then, were verifying and validating simulation models. I suspect that verification (essentially checking the correctness of the code, given the model) is a lot easier now, particularly if you are using GUI-based model design tools, where the coding looks a lot like drawing a flow chart from a palette of blocks. The model itself was likely also presented as a flow chart, so comparing code to model should be straightforward (put the two flow charts side by side). Validation, the process of confirming that the model adequately represents the real system being studied, may or may not be easier than in the past. To some extent you can achieve "face validity" by talking through the assumptions of the model with the users during those interactive sessions, helped by a flow chart.
Back in my day, we also talked about historical validation (running the model with historical inputs and seeing if the results reasonably tracked with historical outputs). When you are trying to answer "what if" questions (what if we reconfigure the ICU this way, or change admissions this way, or ...?), you likely don't have historical data for the alternate configurations, but you can at least validate that the model adequately captures the "base case", whatever that is. Also, "what if" questions are likely to lead you down paths for which you lack hard data for parameter estimates. What if we build a tent hospital in Central Park (which has never been done before)? What do we use for the rate at which patients experience allergy attacks (from plant life in the park that simply does not exist inside the hospital)? My guess is that your only recourse is to run the simulation for multiple values of the mystery parameter, which leads us to a geometric explosion of scenarios as we pile on uncertain parameters. So my question is this: in an interactive loop (meet with users - hack model - collect runs / synthesize output - repeat), can we take reasonable care to preserve validity without exhausting the parties involved, or overloading them with possibilities to the point that there is no actual take-away?
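To put a rough number on how quickly that blows up, here is a toy sketch (Python; every parameter name and value is invented for illustration) of gridding just four uncertain parameters at three plausible values apiece.

```python
import itertools

# Hypothetical uncertain parameters for a "tent hospital" scenario, each
# tried at a few plausible values because we have no hard data for them.
param_grid = {
    "arrival_rate_per_hour": [2.0, 3.5, 5.0],
    "allergy_attack_prob": [0.01, 0.05, 0.10],
    "mean_length_of_stay_days": [3, 5, 7],
    "staff_per_shift": [8, 10, 12],
}

scenarios = list(itertools.product(*param_grid.values()))
print(len(scenarios), "scenarios")          # 3^4 = 81

# Each scenario still needs multiple replications to tame sampling error.
replications = 30
print(len(scenarios) * replications, "simulation runs")  # 2,430 runs
```

Eighty-one scenarios (times however many replications) is already more output than anyone wants to digest in a single review meeting, and that's with only four fuzzy parameters.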
Informed opinions are welcome in the comments section. (It's an election year here, so I'm already maxed out on uninformed opinions.)