Tuesday, July 31, 2018

NP Confusion

I just finished reading a somewhat provocative article on the CIO website, titled "10 reasons to ignore computer science degrees" (when hiring programmers). While I'm not in the business of hiring coders (although I was recently hired as a "student programmer" on a grant -- the Universe has a sense of humor), I find myself suspecting that the author is right on a few points, overstates a few others, and makes some that are valid for certain university CS programs but not for all (or possibly even most). At any rate, that's not why I mention it here. What particularly caught my eye was the following paragraph:
It’s rare for a CS major to graduate without getting a healthy dose of NP-completeness and Turing machines, two beautiful areas of theory that would be enjoyable if they didn’t end up creating bad instincts. One biologist asked me to solve a problem in DNA sequence matching and I came back to him with the claim that it was NP-complete, a class of problems that can take a very long time to solve. He didn’t care. He needed to solve it anyway. And it turns out that most NP-complete problems are pretty easy to solve most of the time. There are just a few pathological instances that gum up our algorithms. But theoreticians are obsessed with the thin set that confound the simple algorithms, despite being rarely observed in everyday life.
Any of my three regular readers will know that I periodically obsess/carp about NP-fixation, so I'm sympathetic to the tenor of this. At the same time, I have a somewhat mixed reaction to it.
  • "... NP-complete, a class of problems that can take a very long time to solve." This is certainly factually correct, and the author thankfully said "can" rather than "will". One thing that concerns me in general, though, is that not everyone grasps that problems in class P, for which polynomial time algorithms are known, can also take a very long time to solve. One reason, of course, is that "polynomial time" means run time is a polynomial function of problem size, and big instances will take longer. Another is that $p(n)=n^{1000}$ is a polynomial ... just not one you want to see as a (possibly attainable) bound for solution time. There's a third factor, though, that I think many people miss: the size of the coefficients (including a constant term, if any) in the polynomial bound for run time. I was recently reading a description of the default sorting algorithm in a common programming language. It might have been the one used in the Java collections package, but don't quote me on that. At any rate, they actually use two different sorting algorithms, one for small collections (I think the size cutoff might have been around 47) and the other for larger collections. The second algorithm has better computational complexity, but each step requires a bit more work and/or the setup is slower, so for small collections the nominally more complex algorithm is actually faster.
  • "He didn’t care. He needed to solve it anyway." I love this. It's certainly true that users can ask coders (and modelers) for the impossible, and then get snippy when they can't have it, but I do think that mathematicians (and, apparently, computer scientists) can get a bit too locked into theory. <major digression> As a grad student in math, I took a course or two in ordinary differential equations (ODEs), where I got a taste for the differences between mathematicians and engineers. Hand a mathematician an ODE, and he first tries to prove that it has a solution, then tries to characterize conditions under which the solution is unique, then worries about stability of the solution under changes in initial conditions or small perturbations in the coefficients, etc., ad nauseum. An engineer, faced with the same equation, tries to solve it. If she finds the solution, then obviously one exists. Depending on the nature of the underlying problem, she may or may not care about the existence of multiple solutions, and probably is not too concerned about stability given changes in the parameters (and maybe not concerned about changes in the initial conditions, if she is facing one specific set of initial conditions). If she can't solve the ODE, it won't do her much good to know whether a solution exists or not.</major digression> At any rate, when it comes to optimization problems, I'm a big believer in trying a few things before surrendering (and trying a few optimization approaches before saying "oh, it's NP-hard, we'll have to use my favorite metaheuristic").
  • "And it turns out that most NP-complete problems are pretty easy to solve most of the time. There are just a few pathological instances that gum up our algorithms." I find this part a bit misleading. Yes, some NP-complete problems can seem easier to solve than others, but the fundamental issue with NP-completeness or NP-hardness is problem dimension. Small instances of problem X are typically easier to solve than larger instances of problem X (although occasionally the Universe will mess with you on this, just to keep you on your toes). Small instances of problem X are likely easier to solve than large instances of problem Y, even if Y seems the "easier" problem class. Secondarily, the state of algorithm development plays a role. Some NP-complete problem classes have received more study than others, and so we have better tools for them. Bill Cook has a TSP application for the iPhone that can solve what I (a child of the first mainframe era) would consider to be insanely big instances of the traveling salesman problem in minutes. So, bottom line, I don't think a "few pathological instances" are responsible for "gum[ming] up our algorithms". Some people have problem instances of a dimension that is easily, or at least fairly easily, handled. Others may have instances (with genuine real-world application) that are too big for our current hardware and software to handle. That's also true of problems in class P. It's just that nobody ever throws their hands up in the air and quits without trying because a problem belongs to class P.
In the end, though, the article got me wondering about two things: how often are problems left unsolved (or solved heuristically, with acceptance of a suboptimal final solution) due to fear of NP-completeness; and, assuming that is an actual concern, would we be better off if we never taught students (other than those in doctoral programs destined to become professors) about P vs. NP, so that applications programmers and OR analysts would tackle the problems unafraid?

Tuesday, July 17, 2018

Selecting Box Sizes

Someone posted an interesting question about box sizes on Mathematics Stack Exchange. He (well, his girlfriend to be precise) has a set of historical documents that need to be preserved in boxes (apparently using a separate box for each document). He wants to find a solution that minimizes the total surface area of the boxes used, so as to minimize waste. The documents are square (I'll take his word for that) with dimensions given in millimeters.

To start, we can make a few simplifying assumptions.
  • The height of a box is not given, so we'll assume it is zero, and only consider the top and bottom surfaces of the box. For a box (really, envelope) with side $s$, that makes the total area $2s^2$. If the boxes have uniform height $h$, the area changes to $2s^2 + 4hs$, but the model and algorithm I'll pose are unaffected.
  • We'll charitably assume that a document with side $s$ fits in a box with side $s$. In practice, of course, you'd like the box to be at least slightly bigger, so that the document goes in and out with reasonable effort. Again, I'll let the user tweak the size formula while asserting that the model and algorithm work well regardless.
The problem also has three obvious properties.
  • Only document sizes need be considered as box sizes, i.e. for every selected size at least one document should fit "snugly".
  • The number of boxes you need at each selected size equals the number of documents too large to fit in a box of the next smaller selected size but capable of fitting in a box of this size.
  • You have to select the largest possible box size (since that is required to store the largest of the documents).
What interests me about this problem is that it can be a useful example of Maslow's Hammer: if all you have is a hammer, every problem looks like a nail. As an operations researcher (and, more specifically, practitioner of discrete optimization) it is natural to hear the problem and think in terms of general integer variables (number of boxes of each size), binary variables (is each possible box size used or not), assignment variables (mapping document sizes to box sizes) and so on. OR consultant and fellow OR blogger Erwin Kalvelagen did a blog post on this problem, laying out several LP and IP formulations, including a network model. I do recommend your reading it and contrasting it to what follows.

The first thought that crossed my mind was the possibility of solving the problem by brute force. The author of the original question supplied a data file with document dimensions. There are 1166 documents, with 384 distinct sizes. So the brute force approach would be to look at all $\binom{383}{2} = 73,153$ or $\binom{383}{3} = 9,290,431$ combinations of box sizes (in addition to the largest size), calculate the number of boxes of each size and their combined areas, and then choose the combination with the lowest total. On a decent PC, I'm pretty sure cranking through even 9 million plus combinations will only need a tolerable amount of time.
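To make the brute-force idea concrete, here is a rough Java sketch for the three-box case (the largest size is forced; choose the other two). It assumes the distinct document sizes are sorted in increasing order, with a count of documents at each size; the method and variable names are mine, purely for illustration.

```java
/**
 * Brute-force sketch for the three-box case. Assumes sizes[] holds the
 * distinct document sizes in increasing order and count[k] is the number of
 * documents with side sizes[k]. A box with side s has area 2*s*s (top and
 * bottom only, per the zero-height assumption).
 */
public class BruteForceBoxes {

    public static long bestThreeSizes(long[] sizes, long[] count) {
        int m = sizes.length;
        long[] cum = new long[m + 1]; // cum[k] = number of documents with size <= sizes[k-1]
        for (int k = 0; k < m; k++) {
            cum[k + 1] = cum[k] + count[k];
        }
        long best = Long.MAX_VALUE;
        // The largest size (index m-1) is forced; pick the two smaller cutoffs i < j.
        for (int i = 0; i < m - 1; i++) {
            for (int j = i + 1; j < m - 1; j++) {
                long total = cum[i + 1] * area(sizes[i])                    // docs no bigger than sizes[i]
                           + (cum[j + 1] - cum[i + 1]) * area(sizes[j])     // docs in (sizes[i], sizes[j]]
                           + (cum[m] - cum[j + 1]) * area(sizes[m - 1]);    // everything else
                best = Math.min(best, total);
            }
        }
        return best;
    }

    private static long area(long side) {
        return 2 * side * side;
    }
}
```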

A slightly more sophisticated approach is to view the problem through the lens of a layered network. There are either three or four layers, representing progressively larger selected box sizes, plus a "layer 0" containing a start node. In the three or four layers other than "layer 0", you put one node for each possible box size, with the following restrictions:
  • the last layer contains only a single node, representing the largest possible box, since you know you are going to have to choose that size;
  • the smallest node in each layer is omitted from the following layer (since layers go in increasing size order); and
  • the largest node in each layer is omitted from the preceding layer (for the same reason).
Other than the last layer (and the zero-th one), the layers here will contain 381 nodes each if you allow four box sizes and 382 if you allow three box sizes. An arc connects the start node to every node in the first layer, and an arc connects every node (except the node in the last layer) to every node in the next higher layer where the head node represents a larger size box than the tail node. The cost of each arc is the surface area for a box whose size is given by the head node, multiplied by the number of documents too large to fit in a box given by the tail node but small enough to fit in a box given by the head node.
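For what it's worth, the arc costs are easy to compute if you keep the distinct sizes sorted along with cumulative document counts. Here is a sketch (a helper of my own invention, not necessarily how my actual code does it):

```java
/** Arc costs for the layered network (sketch). */
public class BoxNetworkCosts {

    /**
     * Cost of an arc from a node for box size sizes[tail] to a node for box
     * size sizes[head], where tail < head, sizes[] is sorted ascending, and
     * cum[k] is the number of documents no larger than sizes[k-1]. A tail
     * index of -1 stands for the start node (no smaller box selected).
     */
    public static long arcCost(long[] sizes, long[] cum, int tail, int head) {
        long docs = cum[head + 1] - (tail < 0 ? 0 : cum[tail + 1]); // docs needing the head-sized box
        long area = 2 * sizes[head] * sizes[head];                  // top and bottom of that box
        return docs * area;
    }
}
```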

I wanted to confirm that the problem is solvable without special purpose software, so I coded it in Java 8. Although there are plenty of high quality open-source graph packages for Java, I wrote my own node, arc and network classes and my own coding of Dijkstra's shortest path algorithm just to prove a point about not needing external software. You are welcome to grab the source code (including the file of document sizes) from my Git repository if you like.

I ran both the three and four size cases and confirmed that my solutions had the same total surface areas that Erwin got, other than a factor of two (I count both top and bottom; he apparently counts just one of them). How long does it take to solve the problem using Dijkstra's algorithm? Including the time reading the data, the four box version takes about half a second on my decent but not workstation caliber PC. The three box version takes about 0.3 seconds, but of course gives a worse solution (since it is more tightly constrained). This is single-threaded, by the way. Both problem set up and Dijkstra's method are amenable to parallel threading, but that would be overkill given the run times.

So is it wrong to take a fancier modeling approach, along the lines of what Erwin did? Not at all. There are just trade-offs. The modeling approach produces more maintainable code (in the form of mathematical models, written in a modeling language such as GAMS or AMPL) that is also more easily modified if the use case changes. The brute force and basic network approaches I tried require no extra software (so no need to pay for it, no need to learn it, ...) and work pretty well for a "one-off" situation where maintainability is not critical.

Mainly, though, I just wanted to make a point that we should not overlook simple (or even brute force) solutions to problems when the problem dimensions are small enough to make them practical ... especially with computers getting more and more powerful each year.

Friday, July 6, 2018

Mint 19 Upgrade: Adventures #1-3

I use my laptop as the "canary in the coal mine" when it comes to doing operating system upgrades, since there's nothing awesomely important on it. So today I tried upgrading from Linux Mint 18.3 to 19.0. Note that I used the upgrade path, rather than downloading the installer, burning it to a bootable disk and installing from there. In hindsight, that might have been the faster approach. The upgrade took over an hour, and that's before any debugging.

The case of the not-so-missing library file


I hit the first of what will no doubt be several adventures when I reinstalled RStudio desktop and discovered it would not run. Despite the installer saying that all dependencies were satisfied, when I tried to run it from a command line I was told that a library file (libGL.so.1) could not be found.

I'll skip over another hour or so of pointless flailing and cut to the chase scene. It turns out that libGL.so.1 actually was installed on my laptop, as part of the libgl1-mesa-glx package. It was hiding in plain sight in /usr/lib/x86_64-linux-gnu/mesa/. Somehow, that folder had not made it onto the system library path. (I have no idea why.) So I ran the command

sudo ldconfig /usr/lib/x86_64-linux-gnu/mesa

and that fixed the problem.

Editor? We don't need no stinkin' editor


Next up, I couldn't find a text editor! Note that LibreOffice was installed, and was the default program to open text (.txt) files. Huh?? Poking around, I found nano, but xed (the default text editor in Mint 18) and gedit (the previous default editor) were not installed (even though xed was present before the upgrade).

Fixing this was at least (to quote a math prof I had in grad school) "tedious but brutally straightforward". In the software manager, I installed xed ... and xreader, also MIA. For whatever reason, the other X-Apps (xviewer, xplayer and pix) were already installed (as they all should have been).

The mystery of the launcher that wouldn't launch


Mint has a utility (mintsources) that lets you manage the sources (repositories, PPAs etc.) that you use. There is an entry for it in the main menu, but clicking that entry failed to launch the source manager. On the other hand, running the command ("pkexec mintsources") from a terminal worked just fine.

I found the original desktop file at /usr/share/applications/mintsources.desktop (owned by root, with read and write permissions but not execute permission). After a bunch of messing around, I edited the menu entry through the menu editor (by right-clicking the menu entry and selecting "Edit properties"), changing "pkexec mintsources" to "gksudo mintsources". That created another version at ~/.local/share/applications/mintsources.desktop. After right-clicking the main menu button and clicking "Reload plugins", the modified entry worked. I have no idea why "gksudo" works from the menu but "pkexec mintsources" does not, even though the latter works fine from a terminal. I tried editing back to "pkexec", just in case the mere act of editing was what did the trick, but no joy there. So I edited back to "gksudo", which seems to be working ... for now ... until the gremlins return from their dinner break.

Update: No sooner did I publish this than I found another instance of the same problem. The driver manager would not launch from the main menu. I edited "pkexec" to "gksudo" for that one, and again it worked. I guess "pkexec" is somehow incompatible with the Mint menu (at least on my laptop).

I'll close for now with a link to "Solutions for 24 bugs in Linux Mint 19".




Thursday, July 5, 2018

Firefox Ate My Bookmarks

This morning, I "upgraded" Firefox 60.0.2 to 61.0.0 on my desktop computer (Linux Mint). When I started the new version, it came to life with the correct default tabs and pages, no menu bar (my reference), and with the bookmark tool bar visible ... but completely empty. Toggling the menu option to display it was unproductive. I restored the most recent backup of the bookmarks, but the tool bar remained empty.

So I killed Firefox, started it in safe mode (no change), then killed it again and restarted it normally. This time the bookmark tool bar was populated with the correct bookmarks and folders. (I don't know if passing through safe mode was necessary. Maybe it just needed another restart after the restoration operation.) Unfortunately, my problems were not yet over. Although I had the correct top-level stuff in the bookmark tool bar, the various folders only had about three items each, regardless of how many were supposed to be in each folder (and, trust me, it was typically more than three).

When you go to restore bookmarks in Firefox, it will show you a list of backup files (I think it keeps the fifteen most recent) and how many items each contains. My recent backups were all listed with 18 to 21 items. Fortunately, I also have Firefox (not yet upgraded) on my laptop (running the same version of Linux Mint), with the same bookmarks. On the laptop, recent backups have 444 items. So either the upgrade messed up the backup files or Firefox 61.0.0 has trouble reading backups from version 60. Heck, maybe Firefox 60 screwed up making the automatic backups on my desktop (but, somehow, not on my laptop).

The laptop proved my savior. I manually backed up the bookmarks on it to a file, parked that file on Dropbox just in case, copied it to the desktop and manually restored it. For the moment, at least, I have all my bookmarks back.

In case you're reading this because you're in the same boat, here are the steps to do manual backups. Of course, this will only help if you have your bookmarks intact somewhere. If you're thinking of upgrading Firefox but haven't pulled the trigger yet, you might want to make a manual backup for insurance.

Start with the "hamburger menu" (the button three horizontal parallel lines). From there, click the "Library" option, then "Bookmarks", then "Show All Bookmarks" (at the very bottom). That opens up a window titled "Library". Click the "Import and Backup" drop-down menu, then either "Backup" or "Restore" depending on your intent. Backup will give you a typical file saving dialog. Restore will give you a list of your recent backups and an option at the bottom to select a file. Use that option to navigate to a manual backup.

Once again, software saves me from having a productive morning. :-(

By the way, this bug has already been reported: https://bugzilla.mozilla.org/show_bug.cgi?id=1472127.

Tuesday, July 3, 2018

Usefulness of Computer Science: An Example

I thought I would follow up on my June 29 post, "Does Computer Science Help with OR?", by giving a quick example of how exposure to fundamentals of computer science recently helped me.

A current research project involves optimization models containing large numbers of what are basically set covering constraints, constraints of the form $$\sum_{i\in S} x_i \ge 1,$$ where the $x_i$ are binary variables and $S$ is some subset of the set of all possible indices. The constraints are generated on the fly (exactly how is irrelevant here).

In some cases, the same constraint may be generated more than once, since portions of the code run in parallel threads. Duplicates need to be weeded out before the constraints are added to the main integer programming model. Also, redundant constraints may be generated. By that, I mean we may have two cover constraints, summing over sets $S_1$ and $S_2$, where $S_1 \subset S_2$. When that happens, the first constraint implies the second one, so the second (weaker) constraint is redundant and should be dropped.

So there comes a "moment of reckoning" where all the constraints generated by all those parallel threads get tossed together, and duplicate or redundant ones need to be weeded out. That turns out to be a rather tedious, time-consuming operation, which brings me to how the constraints are represented. I'm coding in Java, which has various implementations of a Set interface to represent sets. The coding path of least resistance would be to toss the indices for each constraint into some class implementing that interface (I generally gravitate to HashSet). The Set interface defines an equals() method to test for equality and a containsAll() method to test whether another set is a subset of a given set. So this would be pretty straightforward to code.

The catch lies in performance. The documentation for HashSet promises constant time for the basic operations (add() and contains()), but only on average, and only "assuming the hash function disperses the elements properly". So building a set of $n$ indices is $O(n)$ expected time, and testing equality or subset status via containsAll() amounts to a hash lookup for each element of one set in the other. Each lookup pays the overhead of hashing, and in unlucky cases (lots of collisions) a lookup can degrade to a scan of a bucket, which in the worst case pushes the subset test toward $O(n^2)$, where $n$ is the number of indices involved.

A while back, I took the excellent (and free) online course "Algorithms, Part 1", offered by a couple of faculty from my alma mater Princeton University. I believe it was Robert Sedgewick who said at one point (and I'm paraphrasing here) that sorting is cheap, so if you have any inkling it might help, do it. The binary variables in my model represent selection or non-selection of a particular type of object, and I assigned a complete ordering to them in my code. By "complete ordering" I mean that, given two objects $i$ and $j$, I can tell (in constant time) which one is "preferable". Again, the details do not matter, nor does the plausibility (or implausibility) of the order I made up. It just matters that things are ordered.

So rather than just dump subscripts into HashSets, I created a custom class that stores them in a TreeSet, a type of Java set that maintains sort order using the ordering I created. The custom class also provides some useful functions. One of those functions is isSubsetOf(), which does pretty much what it sounds like: A.isSubsetOf(B) returns true if set $A$ is a subset of set $B$ and false if not.

In the isSubsetOf() method, I start with what are called iterators for the two sets $A$ and $B$. Each starts out pointing to the smallest member of its set, "smallest" defined according to the ordering I specified. If the smallest member of $B$ is bigger than the smallest member of $A$, then the first element of $A$ cannot belong to $B$, and we have our answer: $A\not\subseteq B$. If the smallest element of $B$ is smaller than the smallest element of $A$, I iterate through elements of $B$ until either I find a match to the smallest element of $A$ or run out of elements of $B$ (in which case, again, $A\not\subseteq B$). Suppose I do find a match. I bump the iterator for $A$ to find the second smallest element of $A$, then iterate through subsequent members of $B$ (picking up where I left off in $B$, which is important) until, again, I get a match or die trying. I keep doing this until I either get a negative answer or run out of elements of $A$; in the latter case, I know that $A\subseteq B$.
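Here is a minimal sketch of that tandem walk, assuming the indices are plain Integers under their natural ordering (my actual class uses the ordering described above, but the logic is the same); the class and method names are for illustration only.

```java
import java.util.Iterator;
import java.util.TreeSet;

/** Sketch: an index set kept in sorted order, with a linear-time subset test. */
public class IndexSet {

    private final TreeSet<Integer> members = new TreeSet<>(); // ascending order

    public IndexSet(Iterable<Integer> indices) {
        for (Integer i : indices) {
            members.add(i);
        }
    }

    /** Returns true if this set is a subset of other, touching each element of either set at most once. */
    public boolean isSubsetOf(IndexSet other) {
        Iterator<Integer> a = this.members.iterator();  // elements of A, smallest first
        Iterator<Integer> b = other.members.iterator(); // elements of B, smallest first
        Integer bVal = b.hasNext() ? b.next() : null;
        while (a.hasNext()) {
            int aVal = a.next();
            // Advance through B until we reach or pass aVal (never backing up).
            while (bVal != null && bVal < aVal) {
                bVal = b.hasNext() ? b.next() : null;
            }
            if (bVal == null || bVal > aVal) {
                return false; // aVal cannot be in B
            }
            // bVal == aVal: matched; move on to the next element of A.
        }
        return true; // ran out of elements of A without a mismatch
    }
}
```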

What's the payoff for all this extra work? Since I look at each element of $A$ and each element of $B$ at most once, my isSubsetOf() method requires $O(n)$ time, not $O(n^2)$ time. Using a TreeSet means the contents of each set have to be sorted at the time of creation, which is $O(n\log n)$, still better than $O(n^2)$. I actually did code it both ways (HashSet versus my custom class) and timed them on one or two moderately large instances. My way is in fact faster. Without having a bit of exposure to computer science (including the Princeton MOOC), though, it would never have occurred to me that I could speed up what was proving to be a bottleneck in my code.