Tuesday, February 3, 2026

Finding a Row that Moved

This post deals with programmatically scrolling a table in a Shiny app (using the DT library) to a specific row. 

I've been working on a Shiny app involving a number of tables, which are displayed using the R DT package. The tables include controls to let the user sort them by any column, which plays a role in today's post. Controls external to the table let the user insert, delete and edit records. The reason I use external controls and not the editing capabilities built into DT is that there are a number of limitations on edits that the application code enforces.

Before getting into the weeds, I need to establish a bit of terminology. The underlying data in any table has associated row numbers, just as in a spreadsheet. Those do not change when the user sorts the displayed table. I'll refer to that as the "row index", and to the position (1 if first, 2 if second, etc.) of a row in the displayed table as the "display index". If the seventeenth row of a table winds up at the top after sorting, it has row index 17 and display index 1. (Sorry if I'm belaboring the obvious.)

Here is the problem I ran into. Suppose that a user selects a row and edits it. When the user commits the change, DT updates the display, and the edited row may wander quite far in either direction and in particular leave the portion of the table being displayed. So I want the application to automatically scroll the table to the new location of the edited record. I initially assumed this would be easy. It was not. In fact, 20+ attempts with the Shiny Assistant coding AI and 10+ attempts with Claude AI failed to solve the problem (although they did produce some insights, and a bit of code described below).

We can split the problem into two parts: finding the new location of the record, and then scrolling to it. It turns out that the latter is the easier part. At least it's the easier part if you know JavaScript, which I do not. One of the bots (I forget which) provided me with the following code to embed in an R function. The arguments to the R function are outputId (the output ID of the table in the UI) and row (the row number, in display order, to which to scroll).

js_code <- sprintf(
  "setTimeout(function() {
     var table = $('#%s table').DataTable();
     // Scroll to the row (after a further delay for rendering).
     setTimeout(function() {
       var rowNode = table.row(%d).node();
       if (rowNode) {
         $(rowNode).addClass('highlight-row');
         rowNode.scrollIntoView({behavior: 'smooth', block: 'center'});
       }
     }, 300);
   }, 200);",
  outputId,
  row - 1  # DataTables row indices are 0-based, hence the offset
)
runjs(js_code)

The runjs() function comes from the shinyjs R package, so you need to load that library (and call useShinyjs() in the UI) for this to work. (My app was already using it.) The values 200 and 300 in the code are delays (in milliseconds) intended to allow time for DT to finish rendering the table before the scrolling occurs.

On to what I originally thought would be the easier part: finding the target record. What the JS code is looking for in the second argument (row number) is actually what I'm calling the display index. My code knows the row index of the target record. Assume the output ID of the table is "A". After some digging, I discovered that input$A_rows_all contains the vector of row indices in display order (meaning the first entry is the row index of the record with display index 1 etc.). So if variable r contains the row index of the record we want, which(input$A_rows_all == r) will give us the display index of that row. Simple enough ... except that it did not work.

Poking around my code, I discovered that after the user committed a change the table display would update "immediately" but input$A_rows_all retained its pre-change value. This turns out to be a synchronization issue: DT updates the table and then updates the rows_all vector at its own pace, while my function moved immediately from the line committing the change to the line looking for the new display index.

The answer was to wrap the code doing the scrolling in an event observer: observeEvent(input$A_rows_all, { ... }) where "..." is the code that finds the target row and scrolls to it. This effectively pauses my code until DT has gotten around to updating the rows_all vector. 
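Putting the pieces together, the pattern looks roughly like the sketch below. This is my own reconstruction, not the app's actual code: the names target_row and scroll_to_display_row are hypothetical (the latter standing in for an R function wrapping the JavaScript shown earlier), and I assume the edited record's row index has been stashed in a reactiveVal when the change was committed.

```r
# Sketch: react only after DT has refreshed input$A_rows_all following an edit.
# target_row is a reactiveVal holding the row index of the edited record;
# scroll_to_display_row() wraps the JavaScript shown above (hypothetical names).
observeEvent(input$A_rows_all, {
  r <- target_row()
  if (!is.null(r)) {
    # Translate row index to display index now that rows_all is current.
    display_index <- which(input$A_rows_all == r)
    if (length(display_index) == 1) {
      scroll_to_display_row("A", display_index)
    }
    target_row(NULL)  # avoid re-scrolling on unrelated table updates
  }
})
```

Clearing target_row afterward matters because the observer fires on every change to rows_all (including sorts), not just after edits.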

Monday, January 19, 2026

Queueing Cuts in XpressMP

In my previous post, I described doing Benders decomposition with the XpressMP optimizer (Java API), including some sample code. As with previous code solving the same sample problem using CPLEX, a callback was required. The callback takes candidate solutions from the master problem, solves the corresponding LP subproblem, and either accepts the candidate or rejects it by adding a Benders feasibility cut or a Benders optimality cut. Well, that's how it works with CPLEX. The Xpress callback allowed me to add Benders cuts when the candidate solution was the optimal solution to a node LP in the master search tree, but not when the candidate was found via heuristics. (With CPLEX, the source of the solution did not matter.)

Being obstinate, I tried to work around that limitation. I updated my Java source code (still in the same repository) with a second Benders approach using a cut queue. In this approach, the PreIntsol callback solves the LP subproblem and, if warranted, cranks out a Benders cut. If the source was a node optimum, the PreIntsol callback adds the cut as before. If the source was a heuristic, the PreIntsol callback adds the cut to a queue within the Java code. A second callback, which implements the XpressProblem.CallbackAPI.CutRoundCallback interface, is called when Xpress is generating cuts. If any cuts are in the queue, this callback removes them from the queue and adds them via calls to XpressProblem.addManagedCut(). Thus, in theory, the opportunity to generate Benders cuts based on heuristic solutions does not go to waste.
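The queue mechanics themselves can be sketched independently of the Xpress API. Below is a minimal, self-contained Java illustration of the pattern; the class, record, and method names are mine, not part of the Xpress API, and the Cut record is a stand-in for a real Benders cut. ConcurrentLinkedQueue is a reasonable choice because the two callbacks may fire on different threads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the cut-queue pattern: cuts generated where they cannot be
// added (heuristic solutions) are parked and flushed in a later callback.
public class CutQueueSketch {
    // Stand-in for a Benders cut; real code would hold coefficients,
    // sense, and right-hand side.
    public record Cut(String description) {}

    final ConcurrentLinkedQueue<Cut> pending = new ConcurrentLinkedQueue<>();
    final List<Cut> added = new ArrayList<>();

    // Mimics the PreIntsol callback: soltype 0 = node optimum, 1 = heuristic.
    public void onCandidate(int soltype, Cut cut) {
        if (soltype == 0) {
            added.add(cut);       // node solution: add the cut directly
        } else {
            pending.add(cut);     // heuristic solution: park it for later
        }
    }

    // Mimics the cut-round callback: drain the queue into the solver.
    public int flushQueue() {
        int n = 0;
        Cut c;
        while ((c = pending.poll()) != null) {
            added.add(c);         // real code: problem.addManagedCut(...)
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        CutQueueSketch q = new CutQueueSketch();
        q.onCandidate(0, new Cut("optimality cut from node LP"));
        q.onCandidate(1, new Cut("feasibility cut from heuristic"));
        System.out.println("flushed " + q.flushQueue() + " queued cut(s)");
        System.out.println("total cuts added: " + q.added.size());
    }
}
```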

So did this help? Somewhat, though not as much as I had expected. The first thing I discovered is that running either of the Benders models with a single thread was faster than running them with five threads. I suspect this is partly due to the test problem being very easy and partly due to multiple threads doing redundant work (for instance, churning out the same heuristic solution multiple times). 

With a single thread, Benders with the queue was a wee bit faster than Benders without it. In one run, the original Benders model needed 597 ms. versus 578 ms. for the run with the cut queue. (For comparison, the basic MIP model with no decomposition needed 162 ms.) This might slightly understate the advantage of the cut-queue model, since it does a bit more printing, and when differences are in the millisecond range the time spent printing may matter. The original Benders model had its callback invoked 28 times with integer-feasible node solutions, resulting in 25 optimality cuts and one feasibility cut, and 74 times with heuristic solutions. The version with the cut queue was invoked only six times with integer-feasible solutions but 74 times with heuristic solutions, and generated 56 optimality cuts and 21 feasibility cuts. The higher cut counts make sense, since it can generate cuts from heuristic solutions. What is a bit perplexing to me is that it encountered the same number of heuristic solutions, suggesting that the extra cuts blocked some node optima from occurring but had no effect on Xpress's heuristics.

With five threads, the story changes. The run times were 168 ms. for the MIP model (no change after accounting for randomness), 817 ms. using the queue, and 2557 ms. without it. So the queue cut the Benders run time by about two thirds. The original Benders callback was called 84 times with integer-feasible node solutions and 142 times with heuristic solutions, versus 17 times with integer-feasible node solutions and 87 times with heuristic solutions for the version with the queue. Note that, in this case, the number of heuristic solutions also went down.

So is the cut queue worth the effort of programming it (and the extra time spent invoking the cut callback repeatedly)? Maybe. It appears to depend at least in part on the number of threads being used, although one or two runs on one test case are far from conclusive.

Thursday, January 15, 2026

Benders Decomposition in XpressMP

Thirteen and a half years ago (yikes!) I posted about doing Benders decomposition in CPLEX, including source code (Java) for solving a small fixed-cost transportation problem using Benders. The source code changed a couple of years later and I moved it to a new repository. I probably should state up front that when I allude to Benders decomposition I mean what is known in some quarters as "one tree Benders" -- solving the problem in a single pass, using callbacks, rather than Jacques Benders's original approach, which solved the master problem to "optimality", added cuts and then started the master problem solution again from scratch.

I decided to try applying Benders to the same problem using the Java API to FICO's XpressMP solver. It turns out that implementing callbacks in XpressMP is a bit of an adventure, in part due to its design and in part due to rather sparse documentation for the Java API. (Someone at FICO advised me to consult the documentation for the C API whenever I could not find some detail in the Java documentation.) Eventually, I managed to get a working implementation of Benders for the same example used in those old posts. I will not repeat the formulation here. You can find it in the first of the posts above. It involves deciding which of a given set of warehouses to open and how much to ship from each opened warehouse to each client. Clients have demands that must be met, warehouses have capacities, there are fixed charges to open warehouses and per-unit flow costs between warehouses and customers. The warehouse-customer links do not have any capacity limits. In the Benders decomposition, the master problem (MIP) contains binary variables indicating which warehouses are opened plus a surrogate variable for the total flow cost. It initially has no constraints. The subproblem (LP) contains continuous flow variables for each combination of warehouse and customer, demand constraints for the customers and capacity constraints for the warehouses.

The Java code for my implementation can be found in this repository. The "Code" drop-down menu provides options for downloading the code. Running it requires that you have a recent version of XpressMP installed. Outside that (and having Java, of course) there are no software requirements. The code first runs a single MIP model as a benchmark, then runs the Benders model.

Callbacks

The Benders implementation requires only one callback in the master problem. It implements the XpressProblem.CallbackAPI.PreIntsolCallback interface. According to the C documentation, Xpress calls it "when an integer solution is found by heuristics or during the branch and bound search, but before it is accepted by the Optimizer". One of the arguments (soltype) tells you whether the solution is an integer-feasible node optimum (soltype = 0) or a heuristic solution (soltype = 1). The distinction is important because Xpress will allow you to add cuts when it has a node solution but will not allow you to add cuts when it has a heuristic solution.

The callback includes an argument (pReject in my code) that you set to 0 to accept the solution or 1 to reject it. Somewhat curiously, if a node solution is infeasible or underestimates flow costs and you are able to add a Benders cut, you are supposed to "accept" the solution (set pReject to 0) in the callback (else the cuts are lost, if I understand this correctly). If a heuristic solution is infeasible or underestimates the flow cost, you reject it (set pReject to 1). The argument has class IntHolder, so you do not directly set it to 0 or 1; you instead set its value field to the desired value (for instance, pReject.value = 0 to accept the solution).
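The accept/reject logic can be sketched as follows. To keep the snippet self-contained, IntHolder here is a minimal stand-in I define myself (in the real code it comes from the Xpress Java API), and decide() is a hypothetical helper isolating just the decision described above.

```java
// Sketch of the PreIntsol accept/reject decision. IntHolder is a minimal
// stand-in for the Xpress holder class; candidateIsBad would come from
// solving the Benders subproblem (infeasible or underestimated flow cost).
public class PreIntsolSketch {
    public static class IntHolder {
        public int value;
    }

    // soltype 0 = integer-feasible node optimum, 1 = heuristic solution.
    public static void decide(int soltype, boolean candidateIsBad,
                              IntHolder pReject) {
        if (!candidateIsBad) {
            pReject.value = 0;   // candidate is fine: accept it
        } else if (soltype == 0) {
            pReject.value = 0;   // node solution: "accept" so the added cut survives
        } else {
            pReject.value = 1;   // heuristic solution: all we can do is reject
        }
    }

    public static void main(String[] args) {
        IntHolder h = new IntHolder();
        decide(1, true, h);      // bad heuristic solution
        System.out.println("pReject = " + h.value);
    }
}
```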

According to a source at FICO, there is currently no callback that allows you to add Benders cuts when dealing with a heuristic solution. All you can do is reject it if it is infeasible or underestimates the flow costs. This results in the master problem solver repeatedly generating (via heuristic) the same or similar infeasible solutions, which could be avoided if Benders cuts could be added.

While researching this, I came across a Benders example (in Python) that also used a callback which in Java would implement the OptNodeCallback interface. This is apparently no longer necessary, due to changes in the solver.

Variable names

As with all other solvers, Xpress lets you assign names to variables. In my code, I print out the Benders cuts being added, mainly for debugging purposes. (I certainly would not do it in a production problem with hundreds or thousands of binary variables.) It turns out that printing cuts in terms of the original variable names is not as easy as one might think. For rather arcane reasons that I will not get into (since I would get the explanation wrong), inside the callback the variables are named X[0], X[1], X[2] etc., regardless of whatever names you originally assigned them. This is a bit confusing, and it makes debugging tricky, since it is unclear which of the original variables is now known as X[12]. Daniel Junglas at FICO provided me with Java code for a workaround, which is included in my code. You will find it in the BendersModel.printCut method.

Thread safety

When using multiple threads, there is good news and bad news. The good news is that Xpress goes to great lengths to be thread-safe. Part of that is the first argument to the callback function, which is a (presolved, I believe) copy of the original (master) problem. This means that if the callback is running simultaneously in multiple threads, each thread works with a different copy of the problem, avoiding collisions.

The bad news is that this introduces some rather curious problems. In my case, problems arose when printing the cuts being added. The cuts were expressed in terms of the original variables (defined in the master problem and saved in an array for later use generating cuts). For whatever reason, the cuts themselves came out fine, and adding them worked, but when I went to print them (which implicitly meant invoking toString() on the cut expressions) Xpress threw exceptions and eventually crashed. This only happened when using multiple threads. If I throttled Xpress to a single thread, printing the cuts caused no problems. Fortunately, the workaround of the variable naming issue (implemented in the printCut method) also eliminated the exceptions occurring when I printed the cuts.

So, bottom line, I now have working code that solves a problem via Benders decomposition with the Java API to Xpress. You are free to download it if you wish, provided you promise not to laugh at it. Okay, not to laugh at it too much.