Tuesday, December 25, 2012

Resetting Speaker Volume in Mint/Ubuntu

Okay, you have to promise not to laugh or make "old guy" cracks here. I keep the master volume on my Linux Mint PC cranked fairly low (between 40% and 45% of maximum) because I have powered speakers (and reasonably acute hearing). When I watch YouTube videos, I usually have to crank the volume up a bit. When the video is on the long side (think slide show or lecture rather than cats mugging for the camera), I often (usually?) forget to crank the volume back down again. Mint remembers the volume, so the next time I log in I get blasted by the log-in chime. If I'm not sitting at the PC when the desktop comes up -- and frequently I'm off getting a cup of coffee or doing some chore while it boots -- I'll forget to reset the volume, and eventually get blasted by some other sound.

So here's a partial fix, a script that resets the volume on log-in.
  1. In a terminal, run pactl list sinks to get a list of output devices controlled by the PulseAudio sound server. Note the device number for the device that handles your speakers. (In my case, the only entry is device #1.)
  2. Using your favorite text editor, create a script file (mine is named resetVolume.sh) in a directory of your choice (~/Scripts for me). Putting the script somewhere in your home directory should keep it safe from being lost during system upgrades. Put the following two lines of code in the script file:
    #!/bin/sh
    pactl set-sink-volume 1 45%
    
    Change 1 to whatever device number you obtained in the first step and 45% to whatever volume setting you want. Note that the pactl command seems to suffer some rounding errors when it does volume changes; this script actually sets my volume to 44%, according to the panel audio control applet. (My guess: PulseAudio stores volume internally as an integer fraction of 65536, so 45% maps to a raw value just shy of 45%, which the applet then truncates to 44%.)
  3. In a terminal, run chmod +x ~/Scripts/resetVolume.sh (changing the path and file name as appropriate) to make the script executable.
  4. Test the script: use the panel audio applet (or whatever mechanism you normally use to control volume) to crank the master volume up or down, then run the script in a terminal and verify the volume resets correctly.
  5. Find Startup Applications in the system menu (the easiest way is to search for it by name) and run it. Click the Add button and create an entry for the script.
  6. Test once more by screwing with the volume setting and then logging out and back in.
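If you prefer the command line to the Startup Applications GUI in step 5, the tool appears to do nothing more than write a .desktop file into ~/.config/autostart, so something like the following sketch (adjust the Exec path to point at your own script) should be equivalent:
cat > ~/.config/autostart/resetVolume.desktop <<EOF
[Desktop Entry]
Type=Application
Name=Reset Volume
Exec=/home/YOURNAME/Scripts/resetVolume.sh
EOF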
The one drawback I've found to this is that the volume reset takes place after the startup chime sound has begun playing. So the initial auditory assault is not entirely avoided, but at least I've averted any future ones for that session.

UPDATE: I apparently declared victory prematurely. The script seems to run fairly reliably when I log out and log back in, but if I shut down and restart, or reboot, it does not work. I switched from the pactl command to the pacmd command, but that did not help. I added a line 'sudo alsactl store' to reset the stored volume (and added the script to the sudoers file), but that did not help. I linked the script from /etc/rc0.d and from /etc/rc6.d, so that it would run when shutting down or rebooting, and confirmed that the script did indeed run; it just did not reset the stored volume. (I named it both K99resetVolume, so that it would run late in the shutdown sequence, and K00resetVolume, so that it would run early, but no joy either way.) My suspicion (and it's just a suspicion) is that there's a timing issue, with the script perhaps failing because ALSA and/or PulseAudio is not running at the time the script executes. In any event, I'm at a loss as to how to get it to run properly.

UPDATE #2: Another day, another failure.  This time I symlinked the script in /etc/X11/Xsession.d, so that the script would run when the X system started after login. I gave it a name starting with "99z", which I think would make it the last script to run during the X start. Once again, the script ran but failed to affect the audio volume.

[SOLVED] UPDATE #3: I fixed this a while back and apparently forgot to update this post. The script that works for me is as follows:

#!/bin/sh
#
# There seems to be some inconsistency about whether the sound card
# is sink 0 or sink 1, so hit them both just to be safe.
#
# The volume argument here is raw rather than a percentage: PulseAudio
# treats 65536 as 100%, so 27500 is roughly 42%.
#
pacmd set-sink-volume 0 27500
pacmd set-sink-volume 1 27500
# Save the mixer state so ALSA restores this volume at the next boot.
sudo alsactl store

It will generate a harmless warning message because one of the two sinks (0, 1) will not exist when the script runs.
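
A slightly more defensive variant, now that the sink numbering has proven fickle, is to loop over every sink PulseAudio reports rather than hard-coding 0 and 1. This is just a sketch, and it assumes (true on the PulseAudio builds I've seen) that the output of pactl list short sinks puts the sink index in the first column:

#!/bin/sh
# Set every PulseAudio sink, whatever its index, to 45% volume.
for sink in $(pactl list short sinks | awk '{print $1}'); do
    pactl set-sink-volume "$sink" 45%
done
# Persist the mixer state, as in the script above.
sudo alsactl store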

Thursday, December 20, 2012

A GEdit Headache

What should have been a simple task just ate an hour of my life (and, trust me, the marginal value of an hour of your life is an increasing function of age). I'm trying to compare two parts of a text file, and I wanted to use a split view in gedit. That's not a feature of gedit, so I sought out and found a plug-in, GeditSplitView, that should do the trick. I downloaded it and installed it to ~/.local/share/gedit/plugins (after creating ~/.local/share/gedit and the child plugins folder). That I had to create the folders was a bit surprising, as I was sure I'd previously created them for a different plug-in (now gone missing). I keep my home folder tree on a separate disk partition from everything else, so upgrades (such as my recent installation of Mint 14 Nadia over Mint 11 Katya) should not disturb anything in the home folder. Well, whatever.
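
For the record, the installation itself amounts to something like this (a sketch; the file names are illustrative, since the actual names depend on what the plug-in ships with):

# The .plugin file is the metadata; the directory holds the plug-in code.
mkdir -p ~/.local/share/gedit/plugins
cp splitview.plugin ~/.local/share/gedit/plugins/
cp -r splitview ~/.local/share/gedit/plugins/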

After installing the plug-in, I restarted gedit -- and discovered it was not seeing the plug-in. Hmm. As an experiment, I installed the gedit-plugins package (a set of optional plug-ins) from the Ubuntu repositories, using Synaptic. Gedit didn't see those, either, which sent me off on a fruitless expedition of web searching.

It turns out the problem is simple, if not intuitive. There was a change in plug-in formatting (and naming conventions) between gedit 2.x and gedit 3.x; if I understand it correctly, even the plug-in metadata file extension changed, from .gedit-plugin to .plugin. The GeditSplitView plugin requires gedit 3.x. That brings me to the daffy (to me) part. Mint, as it installs itself, uses the Ubuntu 12.10 "Quantal Quetzal" repositories. (Mint is based on Ubuntu.) It lists version 3.6.0-0ubuntu1 as the current version of gedit-plugins (which is what I installed unsuccessfully) ... but it comes with gedit 2.30.4-ubuntu1 (and the corresponding version of the gedit-common package) preinstalled, and lists those as the current versions. So it's giving you incompatible versions of gedit and gedit-plugins as defaults (and lists no other options).

Once I sorted that out, a quick search turned up instructions on how to uninstall gedit 2.30 and install gedit 3.6.1 in its place. After that, I was able to install GeditSplitView easily, and gedit had no trouble finding it.

Sunday, December 16, 2012

How Not to Debate Gun Control

In the wake of Friday's shooting rampage at a Connecticut elementary school that left 26 dead (not counting the shooter), 20 of them between the ages of six and seven (casualty list here; keep a full box of tissues handy if you decide to read it), there are once again calls for a debate on greater gun control in the U.S., and protests by those against it. I have my own opinions on the issue, which I will keep to myself because they are just that: opinions. Both sides of the issue are repeating a pattern of "debate" that contains several fundamental flaws.

Emotional Decisions


First, both sides are confronting a very difficult issue while emotions are running high. It is hardly surprising that a considerable body of research has shown that emotions impact decision-making in a variety of ways. While there may be some benefit to a heightened emotional state in this case -- it pushes us to take up a contentious issue when we might otherwise be tempted to "kick the can down the road" -- there is also the danger that we let those emotions trump reason. In particular, listening to one's "gut" is considerably easier than dealing with a complex, multidimensional analysis.

Reliance on Anecdotes


There is a rational analysis of the issue on Randy Cassingham's blog, along with a considerable discussion in the comments section. It illustrates the flaws I'm discussing here, including in particular the reliance on anecdotes as opposed to statistics and decision models. Some parties in favor of tighter control over guns and ammunition will argue that, had those tighter controls been in effect, this particular incident would have/might have been averted, or at least produced a lower body count. Some parties opposed to tighter controls (or opposed to tighter controls merely in reaction to this incident) will argue that other crimes of a similar nature were conducted without the use of firearms, citing in particular the 1927 Bath Township school bombings. (It happens that I live approximately four miles from Bath Township.) Both sides are relying on historical anecdotes.

Mr. Cassingham mentions closures of mental hospitals, and some commenters echo the theme that we need to address mental illness, rather than gun control. It's not clear what prompted those comments, other than what I suspect is a common assumption that you have to be nuts to murder children, but it is possible that some people are recalling previous incidents in which they believe a shooter was mentally deranged (in the clinical sense) and either was denied treatment or should have been (but was not) involuntarily committed to treatment. For what it's worth, the shooter in the Virginia Tech massacre had been diagnosed with an anxiety disorder (which, to the best of my knowledge, is not a known precursor to violence) but had been receiving treatment. Eric Harris, one of the two Columbine shooters, also suffered some emotional issues but was receiving treatment. The gunman in the 2006 Amish school shooting seems to have been (in unscientific terms) a whack-job, but an undiagnosed one.

In any case, making decisions based on anecdotal evidence is unsound. Anyone in operations research (or statistics) knows that a sample of size 1 is not a useful sample. Reliance on anecdotes also makes us more susceptible to confirmation bias, since (a) we better remember the anecdotes that support our beliefs and (b) we may unconsciously warp those memories if they would otherwise not provide confirmation.

There's a parallel here to the climate change debate. Those in favor of climate legislation will argue that a particular weather event (the North American drought in 2012, "Superstorm" Sandy) was the direct result of global warming, even when climatologists are scrupulous in pointing out that no single event can be causally linked to warming. Global warming naysayers will focus on specific events (recent drops in recorded temperatures, record floods from a century or more ago) as evidence that global warming is not occurring, is not a recent phenomenon, or is not exacerbated by man-made emissions.

Optimizing vs. Satisficing


Not that I really believe "satisficing" is a word, but I'll bite the bullet and use it here. Even when a problem has an optimal solution, it is sometimes the case that the time and effort to find it are not adequately rewarded when an alternative solution provides an adequate degree of satisfaction in a more timely or economical manner. Besides the anecdotal aspect, Mr. Cassingham's emphasis on the Bath bombings and the wave of school stabbings and bludgeonings in China (echoed by some of the commenters) implicitly uses the line of argument that if we cannot prevent every mass shooting by enhanced gun control (optimality), it is not worth pursuing. Many (including, I'm willing to bet, Mr. Cassingham) would consider a reduction in mass shootings, or even a reduction in the body counts, as a significant improvement (satisficing). Gun control advocates are not immune from this focus on optimality; they sometimes appear to adopt the position that any level of regulation that would fail to prevent a particular incident is insufficient.

Multi-Criteria Decision Analysis


Okay, you knew this post eventually had to tie back to operations research in some meaningful way (didn't you?). The issue of how to mitigate violence at schools seems to me to be a rather messy example of multi-criteria decision analysis. This is by far not my area of expertise, so I'll keep things rather general here. As I understand MCDA, the discussion we should be having should be framed approximately as follows:
  • What are our criteria for success? This includes a variety of objectives: keeping children safe; preserving individual rights; making schools conducive places for learning; maintaining a populace capable of defending itself in times of war (I personally do not subscribe to the theory that gun owners necessarily make more effective soldiers, but it should be considered); and likely other objectives I'm not seeing at the moment.
  • How do we quantify/measure attainment of those objectives? For instance, it's fine to say we want no more children to die at school (or perhaps no more deaths specifically from gun violence), but does that mean that a 99% reduction in such deaths has no value? What about a 1% reduction?
  • What are our options? We cannot have a meaningful argument about choices without knowing what those choices are.
  • What are the costs and consequences of each alternative? These will need to be considered in a probabilistic manner. To take a specific, if rather vague, example, suppose that one of the options bans the sale of high-capacity ammunition clips. That would not stop a shooter from wreaking havoc with one or more standard clips, nor could we be positive that no shooter would ever manage to gain access to a high-capacity clip. For that matter, we have no idea when another school shooting might take place (although, sadly, the "if" does not seem to be in doubt -- see the Wikipedia compilation of school-related attacks for some severely depressing historical evidence). Someone will need to attempt a quantitative assessment of the frequency and severity of violent incidents under each alternative being considered. "Minority Report" being a work of fiction, we cannot claim that a particular decision will prevent or mitigate a specific future attack; we can only talk about expected benefits (and costs).
  • If we consider multiple alternatives in combination, how do they interact? There is no reason to assume that, in our quest to make life safer for our children, we are limited to a single course of action. It would, however, be a mistake to evaluate each course of action independently and then assume that the results of a combination of them would be additive.
  • How will we reconcile trade-offs? As one can see in the Wikipedia article on MCDA, there are quite a few methods for solving a multi-criteria decision problem. They differ in large part in how they address the issue of trade-offs. For instance, charitably assuming we can even measure these things, how much freedom are you willing to give up, and/or how much more are you willing to pay in taxes, to eliminate one "expected" future fatality or injury?
This is all very complex stuff, and to do it justice requires a great deal of research (by unbiased parties, if we can find them), a careful and open discussion of what we might gain and what we might lose with each possible action, and an attempt to reach consensus on what trade-offs are or are not acceptable. If the process ever reaches a conclusion, it will also take one heck of a sales job to convince the public that the result really is a good (not perfect, but better than status quo) result.

Update


Shivaram Subramanian wrote a really interesting and well-researched "Collection of Notes on Gun Control", which I commend to anyone interested in the debate. (You won't often see Louis L'Amour, Isaac Asimov and King Asoka of India name-checked in the same blog post.) He cites a Washington Post column from July, in the aftermath of the Aurora (Colorado) shooting, titled "Six facts about guns, violence and gun control" which is also very much worth reading (or, sadly, rereading).

Tuesday, December 11, 2012

Minor CPLEX Java API Backward Compatibility Issue

I just moved a Java application (that compiled and ran fine) from CPLEX 12.4 to CPLEX 12.5 and saw it suddenly sprout a couple of syntax errors (more precisely, a couple of instances of the same syntax error). This may be a case of "be careful what you wish for".

In the CPLEX 12.4 Java API, IloCplex.addLazyConstraint required an argument of type IloConstraint. In the 12.5 API, it requires its argument to be IloRange (which is descended from IloConstraint). I was looking forward to this change. Lazy constraints in CPLEX must be linear constraints, but all sorts of things (disjunctions, implications, SOS constraints) qualify as IloConstraint. The 12.4 API would let you add one of these not-so-linear constraints as a lazy constraint at compile time; at run time, CPLEX would pitch a fit. With the 12.5 API, you'll know immediately (at least if using an IDE) should you attempt to add something untoward as a lazy constraint.

That's the good news. The bad news is that adding a constraint of the form linear expression >= variable or linear expression >= linear expression (and similarly for == or <=) got trickier. There are a gaggle of overloads of the IloModeler.ge, IloModeler.le and IloModeler.eq methods for generating constraints. If one argument is an expression and the other is a number, the resulting constraint is an instance of IloRange, but if both arguments are expressions, the constraint is an instance of IloConstraint -- and looks like an illegal argument to addLazyConstraint in the 12.5 API. So the following code snippet works in 12.4 but won't compile in 12.5:
IloCplex cplex = new IloCplex();
IloLinearNumExpr expr = cplex.linearNumExpr();
IloNumVar x = cplex.numVar(0, 10, "x");
// ... do something to build up the expression ...
cplex.addLazyConstraint(cplex.ge(x, expr));  // compile error here: ge() returns IloConstraint, not IloRange
Yes, I tell CPLEX that expr is a linear expression (and actually build it to be linear), but that doesn't mean CPLEX believes me. There is no overload of IloModeler.ge that recognizes linear expressions as such (they are recognized as expressions, IloNumExpr rather than IloLinearNumExpr).

Fortunately, the solution is simple: just explicitly cast the constraint as IloRange. For my little sample, the fix is to change the last line to
cplex.addLazyConstraint((IloRange) cplex.ge(x, expr));
Additional overloads of the constraint-building methods that output IloRange when both sides are linear (constant, IloNumVar or IloLinearNumExpr) would be nice, but I'm pretty sure the developers' to-do list contains some higher priority items.
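
Another workaround, for anyone allergic to casts: move everything to one side of the inequality and compare the result to a constant, so that the (expression, number) overload applies and an IloRange comes back without coercion. A sketch, using the same little model as above:

// Equivalent to x >= expr, but ge(IloNumExpr, double) returns IloRange.
cplex.addLazyConstraint(cplex.ge(cplex.diff(x, expr), 0.0));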

Friday, December 7, 2012

Modeling Challenges in the Real World

One of the ongoing issues with operations research and analytics (OR/A) as a profession is that we really do not have a formal system of apprenticeships, the way some trades do. Throw in a mix of university professors who are in love with the mathematics behind OR/A but often short on experience applying OR/A in practice (I freely confess to having been one such), and we end up with a system that produces graduates long on "book learning" but destined for some shocks when they enter the (non-academic) workforce.

Recently, blogs and other online resources have begun to fill the gap in what I call "tactical knowledge", things that seldom make it into textbooks or college lectures. To that end, I'd like to highlight, and expand a bit upon, a recent blog post by Jean-François Puget titled "Analytic Challenges". He lists a number of challenges and then discusses one in detail, how to make analytics "socially and organizationally acceptable". What follows are a few observations of my own.

Choices


Puget discusses impediments to getting the front line troops to accept and implement analytical solutions. This is a very important consideration. Another, slightly different one, is that managers like choices. They often do not care to be given a single solution, even if it is "optimal". (I put optimal in quotes because optimality is always with respect to a particular model, and no model is a perfect representation of the real world.) Among the various reasons for this, some managers realize that they are being paid the "big bucks" for making decisions, not rubber-stamping something an analyst (let alone a rather opaque computer model) said. If you find this notion quaint, ask yourself how you would feel surrendering control to an automated driving system while your car is zipping down a crowded highway during rush hour.

Long ago, before "open-source software" was a recognized phrase, a student in one of my courses became so enthralled by linear and integer programming that he coded his own solver ... on a Commodore 64. (Clearly he was not lacking in fortitude.) He actually used it, in his day job (for the state Department of Transportation), to help make decisions about project portfolios. In order to get his bosses to use his results, he had to generate multiple solutions for them to browse, even when the model had a unique optimum. So he would hand them one optimal solution and several diverse "not too far from optimal" solutions. I think they often picked the optimal solution, but the key is that they picked.
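
(For what it's worth, this is much easier today: a modern MIP solver will enumerate near-optimal solutions for you. Below is a sketch using the CPLEX solution pool from the Java API; the parameter values are illustrative, and cplex and vars stand for an already-built model and its decision variables.)

// Aggressively populate the pool with solutions within 5% of optimal,
// keeping at most 10 of them.
cplex.setParam(IloCplex.IntParam.SolnPoolIntensity, 4);
cplex.setParam(IloCplex.DoubleParam.SolnPoolGap, 0.05);
cplex.setParam(IloCplex.IntParam.PopulateLim, 10);
if (cplex.populate()) {
  for (int s = 0; s < cplex.getSolnPoolNsolns(); s++) {
    double[] x = cplex.getValues(vars, s);  // values of solution s
    // ... format solution s for the decision makers to browse ...
  }
}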

Data Quality


When I taught statistics, I liked to share with my classes Stamp's Law:
The government are very keen on amassing statistics. They collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But you must never forget that every one of these figures comes in the first instance from the chowky dar (village watchman in India), who just puts down what he damn pleases.
Analytics relies on the existence and accessibility of relevant data, and quite often that data is compiled from unreliable sources, or recorded by workers whose attention to detail is less than stellar. I once had a simple application of Dijkstra's shortest path algorithm go horribly awry because the arc lengths (distances), extracted from a corporate database, contained a large number of zeros. (For instance, the distance from Cincinnati to Salt Lake City was zero.) The zeros were apparently the result of employees leaving the distance field blank when recording shipments (as opposed to, say, spontaneous appearance and disappearance of wormholes). Puget's list mentions "uncertain or incomplete data", to which we can add incorrect data. I've read estimates (for instance, here) that anywhere from 60 to 80 percent of the work in a data-based analytics project can be devoted to data cleaning ... a topic that I think has not yet gained sufficient traction in academe.
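
The lesson I took from that episode: sanity-check the data before the algorithm ever sees it. A trivial illustration (with hypothetical names) of the kind of screen that would have caught those zeros:

// Flag zero or negative off-diagonal entries in a distance matrix --
// almost certainly blank fields in the source database, not real distances.
static void flagMissingDistances(double[][] dist, String[] city) {
    for (int i = 0; i < dist.length; i++)
        for (int j = 0; j < dist[i].length; j++)
            if (i != j && dist[i][j] <= 0)
                System.err.printf("suspicious distance %s -> %s: %f%n",
                                  city[i], city[j], dist[i][j]);
}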

Implicit Constraints/Criteria


By "implicit" I mean unstated (and possibly unrecognized). Puget mentions this as a cause of resistance to implementation of model solutions, in the context of front-line troops finding faults in the solution based on aspects not captured in the model. Managers may also find these types of faults.

An example I used in my modeling classes was a standard production planning model (linear program), in which the analyst selects the amounts of various products to manufacture so as to maximize profit (the single criterion). In the textbook examples we used, it was frequently the case that some products were not produced at all in the optimal solution, because they were insufficiently profitable within the planning horizon. I would then ask the class what happens if you implement the solution and, down the road, those products become profitable (and perhaps quite attractive)? By discontinuing the products, have you lost some of the expertise/institutional knowledge necessary to produce them efficiently and with high quality? Have you sacrificed market share to competitors that may be difficult to recover? Did you just kill a product developed by the boss's nephew? My point was that perhaps there should be constraints requiring some (nonzero) minimum level of output of each product, so as to maintain a presence in the market. Otherwise, you in essence have a tactical or operational model (the boundary is a bit fuzzy to me) making a strategic decision.
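
In Concert terms, that fix is a one-liner per product. A sketch, with hypothetical names (make[p] is the production level variable for product p, minLevel[p] the floor we want to impose):

// Require a minimum market presence for every product.
for (int p = 0; p < nProducts; p++)
    cplex.addGe(make[p], minLevel[p]);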

Another example I used of implicit criteria began with a production planning model (in this case more of a scheduling application). You solve the model and find an optimal schedule that meets demand requirements at minimum cost. It also involves major changes from the current schedule. What if there were a slightly suboptimal schedule that required only minor deviations from the current schedule? To a mathematician, "slightly suboptimal" is still suboptimal. To a production foreman having to make those changes, the trade-off might seem well justified.
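
One way to surface that trade-off inside the model is to penalize deviations from the incumbent schedule in the objective. A sketch, again with hypothetical names (make[p] as before, cur[p] the current schedule), linearizing the absolute deviation with a pair of inequalities:

// The two constraints force dev[p] >= |make[p] - cur[p]|; a penalty term
// in a minimization objective then drives dev[p] down to equality.
IloNumVar[] dev = cplex.numVarArray(nProducts, 0, Double.MAX_VALUE);
for (int p = 0; p < nProducts; p++) {
    cplex.addGe(dev[p], cplex.diff(make[p], cur[p]));
    cplex.addGe(dev[p], cplex.diff(cur[p], make[p]));
}
// ... then add (penalty weight) * (sum of dev) to the cost objective ...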

Mission Creep


Particularly when a model is a first foray into analytics, the users commissioning the model may have a limited scope in mind, either because they are narrowly focused on solving a particular problem or because they think anything larger might be unmanageable. Once the decision makers see the fruits of a working model, they may get the urge to widen the scope and/or scale of the model. Should that happen, modeling choices made early on may need to be revisited, since the original model may not scale well. (Coders know this process as refactoring, not to be confused with refactoring a basis matrix.)

Mission creep can be the undoing of a project. I vaguely remember stories of the Pentagon tweaking the design of a new surface combatant, while the prototype was under construction, to the point where its superstructure raised the center of gravity so high that it was allegedly unstable in a rough sea. I also vaguely remember stories of a new carrier-based patrol aircraft (antisubmarine patrol bomber?) being redesigned on the fly (so to speak) until the prototype rolled off the assembly line too large to land on a carrier. Sadly, "vaguely remember" translates to being unable to recall the specific projects. If anyone recalls the details, please let me know via comment.

Tuesday, December 4, 2012

Cinnamon Spices

Yesterday I upgraded my home PC from Linux Mint 11 (Katya) to Mint 14 (Nadia), picking the Cinnamon version. (Well, I did most of the upgrading -- I'm still tweaking things, installing applications that were not from the main repositories, etc.) A few things about the upgrade are worth mentioning:
  1. Mint ships with LibreOffice 3.6.2 (I think); the current version, as of this writing, is 3.6.3. Either would be more recent than the version I had with Katya. I don't use most of LibreOffice, but I use Calc (the spreadsheet component) rather extensively. So I was dismayed to discover that it crashed more than half the time trying to open a particular spreadsheet file (the first one I needed to access). If I managed to get into the file, Calc crashed consistently when I tried to save it. It also crashed consistently if I tried to copy the contents (thinking that perhaps if I pasted it into a new file I might work around the problem). I tried opening and saving a few other spreadsheets, and they all were handled correctly. There's nothing special about the contents of the problematic one (a column of dates, a column of formulas, a column of numbers, some headings), nor is it a large file. A web search turned up one or two reports of crashes with Writer (the word processing component), typically on a different Mint variant (KDE desktops, I think). One person reported that the problem disappeared after an upgrade. So I downloaded and installed the latest LibreOffice, and so far the problem has not resurfaced.
  2. Cinnamon seems to be based on JavaScript and JSON. A few of the features I used with the GNOME desktop have been removed, have not yet been replicated or are left to the reader as an exercise. Fortunately, third-party substitutes are available in some cases. A number of possibly handy applets (that plug effortlessly into the panel) are available from the Mint Spices page. One I found particularly useful is My Launcher. It provides an easily configured quick-launch list for applications I use frequently. One click expands the list, a second click launches the application. With GNOME, I was able to add the Accessories menu directly to the panel; My Launcher accomplishes the same thing, but lets me add applications that are not on the Accessories menu and remove ones I do not use often.