Monday, November 18, 2024

Android Silliness

Once upon a time I bought an Insignia smart speaker (with Google Assistant baked in) for my bedroom. I could set an alarm that would stream my choice of radio station using verbal commands ("Hey Google, set a radio alarm for ..."). It worked so well that Google decided to fix it.

At some point I saw a brief item in a tech feed about Google eliminating radio alarms in favor of some sort of automation thing, but my speaker kept on working and I ignored the story ... at my peril. A few weeks ago, I canceled the existing alarm. When I went to set a new alarm, the speaker's response was "I'm sorry, I don't know how to do that."

After a bit of research, I discovered that the Google Home app now has something called "automations". It turned out a bunch of predefined automations were set on my phone, none of which ever did anything (because I never uttered the necessary incantation?). None were anything I wanted, so I turned them all off and created a new one. Automations involve one or more "starters" (in my case, every day at 6:57 am), one or more "actions" (in my case, play the radio station I want) and a "configuration" (in my case, play it on the bedroom speaker). You can test the automation by tapping a button (and I did). It worked when triggered manually.

So began the adventure. On day 1, the new automation failed to do anything. I woke up on my own (belatedly), changed the time to five minutes or so after I awoke (call it 7:20 am), and it worked. So I reset it to 6:57 am, and the next day it again failed. Eventually I found a notification hidden away somewhere that the automation had failed due to scheduled downtime being set. I went to the settings menu in the Home app, found "Digital wellbeing", and sure enough there was a downtime setting. It was set to expire at 8:00 am, leaving me confused as to why the 7:20 am test had worked, but whatever. I deleted it.

Next morning, no luck. I went to the settings for the speaker and found it had its own digital wellbeing setting, which I also deleted. Still no luck, and no notifications I could find as to why it did not work.

I'll skip over the details of a very unsatisfying online chat with a level 1 Google support person (who did not seem to grasp that the 7:20 test working implied that the speaker really was connected to the home WiFi) and an email exchange with a level 2 support person who wanted the speaker's serial number among other things. (That was two weeks ago. Nothing back so far.) With some experimentation, I discovered the following. If I set the alarm time prior to 7:00 am, it did not work at all. If I set it to exactly 7:00 am, it triggered at 7:00 am the first day and 7:02 am every day thereafter. If I set it to 7:01 am, it triggered at 7:01 am every day. (Note that the 7:01 setting triggered a minute before the 7:00 am setting did, which offended me as a mathematician due to the lack of monotonicity.)

Ultimately I got another notification about something to do with downtime, which made no sense to me since I had deleted the downtime settings for both phone and speaker. So I went into the phone settings (not the Home app, but the phone itself). After considerable vertical scrolling, I found "Digital Wellbeing & parental controls". Wallowing around in that, I found a "Bedtime mode" (which was turned off) and a "Do Not Disturb" menu. "Do Not Disturb" was turned off on the phone, but fortunately I got curious and burrowed into the menu.

In the "General" section of that menu was a heading named "Schedules" saying I had three schedules set. Two of those were "Gaming" and "Game Dashboard" (no idea what they do or why they were turned on). I forget the title of the third one (I have since renamed it "sleeping"), but that turned out to be what was blocking the alarm. I left the start time at 11:00 pm and changed the end time to "6:45 am next day". It had been set to 7:00 am, which apparently was what caused issues. You can customize what things get blocked (via "Do Not Disturb behavior"), but I didn't bother. Curiously, there is a switch labeled "Alarm can override end time", which was (and remains) turned on. Since that did not allow the automation to trigger before 7:00 am, I assume it only applies to alarms and not automations.

With that tweak, the radio alarm via the speaker started working at 6:57 am, and so I am tentatively going to declare victory. Why we need digital wellbeing settings scattered all over the place under various names is a mystery to me, one of many.

Sunday, November 10, 2024

Solver Parameters Matter

Modern integer programming solvers come with a gaggle of parameters the user can adjust. There are so many possible parameter combinations that vendors are taking a variety of approaches to taming the beast. The first, of course, is to set default values that work pretty well most of the time. This is particularly important since many users probably stick to default settings unless they absolutely have to start messing with the parameters. (By "many" I mean "myself and likely others".) The second is to provide a "tuner" that the user can invoke. The tuner experiments with a subset of possible parameter settings to try to find a good combination. Third, I've seen some discussion, and I think some papers or conference presentations, on using machine learning to predict useful settings based on characteristics of the problem. I am not sure how far along that research is or whether vendors have implemented any of it yet.
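
As a concrete illustration of the "tuner" route, here is a minimal sketch using the CPLEX Java API. The model file name is made up, and the parameter for capping the tuning run is the one I recall from recent CPLEX versions, so treat this as a sketch rather than gospel:

    import ilog.concert.IloException;
    import ilog.cplex.IloCplex;

    public class TunerSketch {
        public static void main(String[] args) throws IloException {
            IloCplex cplex = new IloCplex();
            cplex.importModel("sort_model.lp");                  // hypothetical model file
            cplex.setParam(IloCplex.Param.Tune.TimeLimit, 300);  // cap the tuning run at 300 seconds
            cplex.tuneParam();                                   // run the built-in tuner
            cplex.writeParam("tuned_settings.prm");              // save the recommended settings
            cplex.end();
        }
    }

Xpress ships a tuner as well; its interface is different, so I won't guess at it here.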

In the past few days I got a rather vivid reminder of how much a single parameter tweak can affect things. Erwin Kalvelagen did a blog post on sorting a numeric vector using a MIP model (with an up-front disclaimer that it "is not, per se, very useful"). He tested a couple of variants of the model on vectors of dimension 50, 100 and 200. I implemented the version of his model with the redundant constraint (which he found to speed things up) in Java, using the current versions of both CPLEX and Xpress as solvers. The vector to sort was generated randomly with components distributed uniformly over the interval (0, 1). I tried a few random number seeds, and while times varied a bit, the results were quite consistent. Not wanting to devote too much time to this, I set a time limit of two minutes per solver run.
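
For readers who do not want to chase the link, the standard way to pose sorting as a MIP is an assignment model along the following lines. This is my reconstruction rather than Erwin's exact formulation, and the redundant constraint shown is just one plausible choice:

    \begin{align*}
    \min\ & 0 && \text{(constant objective: pure feasibility)}\\
    \text{s.t. } & \sum_{j=1}^n x_{ij} = 1 \ \forall i, \quad \sum_{i=1}^n x_{ij} = 1 \ \forall j && \text{(each element gets one position and vice versa)}\\
    & y_j = \sum_{i=1}^n v_i x_{ij} \ \forall j && \text{(value landing in position } j\text{)}\\
    & y_j \le y_{j+1} \ \forall j < n && \text{(sorted order)}\\
    & \sum_{j=1}^n y_j = \sum_{i=1}^n v_i && \text{(redundant)}\\
    & x_{ij} \in \{0,1\},
    \end{align*}

where $v$ is the vector to sort, $x_{ij} = 1$ means element $i$ lands in position $j$, and $y$ is the sorted result.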

Using default parameters, both solvers handled the dimension 50 case, but CPLEX was about eight times faster. For dimension 100, CPLEX pulled off the sort in two seconds but Xpress still did not have a solution at the two minute mark. For dimension 200, CPLEX needed around 80 seconds and Xpress, unsurprisingly, struck out.

So CPLEX is faster than Xpress, right? Well, hang on a bit. On the advice of FICO's Daniel Junglas, I adjusted one of their parameters ("PreProbing") to a non-default value. This is one of a number of parameters that make the solver spend more time up front (in this case probing, i.e., tentatively fixing binary variables and working out the logical implications) in the hope of finding a feasible or improved solution sooner. Using my grandmother's adage "what's sauce for the goose is sauce for the gander," I tweaked an analogous parameter in CPLEX ("MIP.Strategy.Probe"). Sure enough, both solvers got faster on the problem (and Xpress was able to solve all three sizes), but the changes were more profound than that. On dimension 50, Xpress was between three and four times faster than CPLEX. On dimension 100, Xpress was again around four times faster. On dimension 200, Xpress won by a factor of slightly less than three.
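
For what it's worth, the CPLEX side of the tweak is a one-liner in the Java API. The probing level shown is just an example of a non-default value; the Xpress control name in the comment comes from the C API, and I won't guess at its exact Java spelling:

    import ilog.concert.IloException;
    import ilog.cplex.IloCplex;

    public class ProbingTweak {
        public static void main(String[] args) throws IloException {
            IloCplex cplex = new IloCplex();
            // Probing level: 0 is the automatic default, -1 turns probing off, 3 is the most aggressive.
            cplex.setParam(IloCplex.Param.MIP.Strategy.Probe, 3);
            // ... build the sorting model and call cplex.solve() here ...
            cplex.end();
            // Xpress exposes the analogous control as PREPROBING (XPRS_PREPROBING in the C API);
            // see the Xpress Java reference for how to set it from Java.
        }
    }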

So is Xpress actually faster than CPLEX on this problem? Maybe, maybe not. I only tweaked one parameter among several that could be pertinent. To me, at least, there is nothing about the problem that screams "you need more of this" or "you need less of that", other than the fact that the objective function is a constant (we are just looking for a feasible solution), which suggests that any changes designed to tighten the bound faster are likely to be unhelpful. I confess that I also lack a deep understanding of what most parameters do internally, although I have a pretty good grasp on the role of the time limit parameter.

So the reminder for me is that, before concluding that one solver is better than another on a problem, or that a problem is too difficult for a particular solver, I need to put a bit of effort into investigating whether any parameter tweaks have a substantial impact on performance.

Update: A post on the Solver Max blog answers a question I had (but did not investigate). With feasibility problems, any feasible solution is "optimal", so it is common to leave the objective as optimizing a constant (usually zero). Erwin and I both did that. The question that occurred to me (fleetingly) was whether an objective function could be crafted that would help the solver find a feasible solution faster. In this case, per the Solver Max post, the answer appears to be "yes".

Sunday, November 3, 2024

Xpress and RStudio

The following is probably specific to Linux systems. I recently installed the FICO Xpress optimizer, which comes with an R library to provide an API for R code. FICO requires a license file (or a license server -- I went with a static file since I'm a single user) and adds an assortment of environment variables to the bash shell, including one pointing to the license file. So far, so good.

Xpress comes with example files, including example R scripts. So I cranked up RStudio, opened the simplest example ("first_lp_problem.R", which is just what it sounds like) and executed it line by line. The problem setup lines worked fine, but the first Xpress API call died with an error message saying it couldn't find the license file in directory "." (i.e., the current working directory). The same thing happened when I tried to source the file in the RStudio console.

To make a long story somewhat shorter, after assorted failed attempts to sort things out it occurred to me to run R in a terminal and source the example file there. That ran smoothly. So the problem was with RStudio, not with R. Specifically, it turns out that RStudio runs without loading any bash environment variables.

After assorted failed attempts at a fix (and pretty much wearing out Google), I found the following solution. In my home directory ("/home/paul", a.k.a. "~") I created a text file named ".Renviron". In it, I put the line "XPAUTH_PATH=/home/paul/.../xpauth.xpr", where "..." is a bunch of path info you don't need to know and "xpauth.xpr" is the name of the license file. If you already have a ".Renviron" file, you can just add this line to it. The example script now runs fine in RStudio. Note that there are a gaggle of other bash environment variables created by Xpress, none of which presumably are known to RStudio, but apparently the license file path is the only one needed by the API (at least so far). If I trip over any other omissions later on, presumably I can add them to ".Renviron".
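
In other words, the file ends up looking like this (the elided chunk of the path is whatever your installation uses):

    # ~/.Renviron -- read by R (and therefore by RStudio) at startup
    XPAUTH_PATH=/home/paul/.../xpauth.xpr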