## Tuesday, October 8, 2019

### Numerical Inaccuracy (Linear Algebra Edition)

"There are three kinds of lies: lies, damned lies, and statistics." The origin of the statement is lost in time, but the sentiment lives on. I'm tempted to give it a 21st century update. "There are three kinds of lies: lies, damned lies, and floating point arithmetic."

I am currently working on a research project in which the following problem arises. We start with an $m \times n$ matrix $A$ with $m < n$, and a set of indices $I\subseteq \{1,\dots,n\}$. I need to confirm that $x_i = 0$ for all $i\in I$ and for all $x\in \mathcal{N}$, where $\mathcal{N} = \{x\in \mathbb{R}^n : Ax = 0\}$. There are various names for $\mathcal{N}$; I'll go with the null space of $A$.

How do you test whether a particular component of every vector in an uncountably infinite set is zero? Since $\mathcal{N}$ is a vector space, we can compute a basis for it and just check the basis vectors. If $x_i =0$ for every basis vector $x$ of $\mathcal{N}$, it will be true for every vector in $\mathcal{N}$.
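A sketch of that check in Java (the class and method names here are made up, and the tolerance argument is exactly the judgment call the rest of this post wrestles with): given a basis for $\mathcal{N}$ as a list of double arrays, testing the condition is just a couple of loops.

```java
import java.util.List;
import java.util.Set;

public class NullSpaceCheck {

    // Returns true if component i is numerically zero (within tol) for
    // every basis vector of the null space, for all indices i in I.
    // If that holds for a basis, it holds for every vector in the space.
    static boolean componentsVanish(List<double[]> basis, Set<Integer> I, double tol) {
        for (double[] x : basis) {
            for (int i : I) {
                if (Math.abs(x[i]) > tol) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Toy "basis": two normalized vectors in R^3.
        List<double[]> basis = List.of(
            new double[]{1e-16, 0.7071, -0.7071},
            new double[]{0.0, 0.5774, 0.5774});
        System.out.println(componentsVanish(basis, Set.of(0), 1e-10)); // true
        System.out.println(componentsVanish(basis, Set.of(1), 1e-10)); // false
    }
}
```

The whole difficulty, of course, is in choosing `tol` and in trusting the basis itself.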

This is going on in a Java program, and I have tried multiple linear algebra packages. Initially I had the idea to use a QR decomposition of $A$ to get a basis for the null space, but that requires (at least as far as I know) an upper triangular $R$ component, and neither of the two most promising packages I tried delivered one. (I'm pretty sure the Javadocs for one or both said $R$ would be upper triangular ... but it wasn't.) So I switched to singular value decomposition (SVD) using the EJML library. SVD is generally slower than QR decomposition, but tends to be more accurate.

I got an answer, but a check performed by the Java program warned me that some of the $x_i$ values that were supposed to be zero were in the vicinity of $10^{-6}$, which is a bit dicey to shrug off as rounding error. (The basis vectors are all normalized, by the way.) So I decided to port $A$ and some other stuff over to R and do some testing there ... which is when the adventure began.

The dimensions of this $A$ matrix were $m=401$ and $n=411$. EJML gave me a basis of 17 vectors, indicating that $A$ had nullity 17 and thus rank 394. (The nullity of matrix $A$ is the dimension of $\mathcal{N}$, and is in turn the difference between $n$ and the rank of $A$.) I tested this by doing QR and SVD decompositions of $A$ (using the qr() and svd() functions from the R base package) and by applying the Rank() function from the R pracma package. The QR and SVD results indicated that $A$ had rank 393 and nullity 18. Only pracma::Rank() agreed with EJML ... and that came with the warning "Rank calculation may be problematic". (As if the rest of this stuff isn't.)

So EJML might have shorted me one dimension in the null space ... or maybe not. The smallest singular values of $A$ from the SVD were as follows: 9.16e-05, 5.37e-06, **1.09e-10**, *1.13e-15*, *7.56e-16*, *6.20e-16*, *4.87e-16*, *3.79e-16*, *3.37e-16*, *3.16e-16*. So the last seven (italicized) were pretty clearly zero, and the preceding one (bold) is maybe zero, maybe not. The SVD produces only $\min(m,n) = 401$ singular values, omitting the ten additional null space dimensions forced by the difference between $n$ and $m$; counting those, we're looking at a nullity of 18 if you treat the bold one as zero and 17 if you don't.
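For what it's worth, a common LAPACK-style convention is to treat a singular value as zero when it falls below $\max(m,n)\cdot\epsilon\cdot\sigma_{\max}$, where $\epsilon$ is machine epsilon and $\sigma_{\max}$ is the largest singular value. A quick Java sketch applying that rule to the values above (with the assumption $\sigma_{\max} = 1$, since the post doesn't report it) comes down on EJML's side:

```java
public class NumericalRank {
    public static void main(String[] args) {
        int m = 401, n = 411;
        // The ten smallest singular values reported above.
        double[] tail = {9.16e-05, 5.37e-06, 1.09e-10, 1.13e-15, 7.56e-16,
                         6.20e-16, 4.87e-16, 3.79e-16, 3.37e-16, 3.16e-16};
        // LAPACK-style cutoff: max(m, n) * machine epsilon * sigma_max.
        // sigmaMax = 1.0 is an assumption, not a value from the post.
        double sigmaMax = 1.0;
        double tol = Math.max(m, n) * Math.ulp(1.0) * sigmaMax;
        int zeros = 0;
        for (double s : tail) {
            if (s <= tol) zeros++;
        }
        // Nullity = the (n - m) forced dimensions + computed zero singular values.
        System.out.println("tolerance = " + tol);         // ~9.1e-14
        System.out.println("nullity   = " + ((n - m) + zeros)); // 17
    }
}
```

Under this rule the 1.09e-10 value is comfortably nonzero (the cutoff is around $10^{-13}$), giving nullity 17; but that is a convention, not a proof.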

The 17 basis vectors EJML gave me were all at least close to zero in all components $i\in I$; rechecking in R, almost all the designated components were $10^{-7}$ or smaller. The pracma::nullspace() function gave me 18 basis vectors (despite pracma::Rank() coming up with a nullity of 17), and some of the corresponding $x_i$ were pretty big (0.29, for example) -- way too big to be zero. So my condition is (probably) satisfied if I believe EJML and (definitely) not satisfied if I believe the R computations.

Hoping for some clarity, I calculated (in R) $Ax$ for each basis vector $x$ from either EJML or pracma::nullspace(). The products with the EJML basis vectors were all zero vectors (meaning no components larger than about $10^{-15}$ in magnitude). For the pracma basis vectors, the products had components around $10^{-9}$ in many cases. So much for clarity. Do we treat $10^{-9}$ as approximately zero and believe pracma, or do we call it nonzero and believe EJML?
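The residual test itself is simple enough; a minimal Java sketch (with a toy matrix standing in for $A$) computes $\|Ax\|_\infty$ and leaves the "how small is zero" question to the caller:

```java
public class ResidualCheck {

    // Infinity norm of A x: the largest component of the product, in
    // magnitude. Used to judge whether x is numerically in the null space.
    static double residual(double[][] A, double[] x) {
        double worst = 0.0;
        for (double[] row : A) {
            double dot = 0.0;
            for (int j = 0; j < x.length; j++) {
                dot += row[j] * x[j];
            }
            worst = Math.max(worst, Math.abs(dot));
        }
        return worst;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 0, -1}, {0, 1, -1}};  // toy 2x3 matrix
        double s = 1.0 / Math.sqrt(3.0);
        double[] x = {s, s, s};                   // unit vector in its null space
        System.out.println(residual(A, x));       // prints 0.0
    }
}
```

In the actual problem the residuals were $\sim 10^{-15}$ for the EJML vectors and $\sim 10^{-9}$ for the pracma ones, and the ambiguity is which threshold separates "zero" from "not".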

I'm not new to the limitations of double-precision arithmetic. My first course in grad school was numerical analysis, and I've had a fair share of adventures with MIP models in CPLEX where the numerics were, shall we say, dicey. Still, it's rather frustrating to have to try multiple packages and then have to judge (meaning guess) which one is lying to me the least.

1. I would try using some software that works in exact arithmetic, like Mathematica or Maple. If they are able to solve the problem in finite time, I guess you can trust the results.

1. Thanks for the suggestion, Thomas. I could try this with one test case, but unfortunately the problem needs to be solved thousands of times (with different matrices) during the solution of a MIP model. I don't know if it is possible (and practical) to link to either of them from Java. Also, the matrix has double precision (non-integer) coefficients. Does Mathematica do "exact" arithmetic on doubles?

2. In Mathematica, this should be possible using Rationalize https://reference.wolfram.com/language/ref/Rationalize.html . In the past I used Maple; I never really used Mathematica, so I have no further details about this.

When I wrote my own exact-arithmetic MIP solver (in C++, using GMP), I wrote my own code to deal with floating point numbers in lp-files. I read them digit by digit and did some trivial arithmetic. It can read arbitrarily long floating point numbers or fractions. So if you can somehow represent your problem as a MIP, it would be easy to solve it in exact arithmetic.

1. Thomas,

I appreciate the suggestions. I probably could rationalize the matrices without too much loss of validity, but offhand I don't know if that would increase or decrease the frequency of numerical issues (dubious rank calculations). Solving a MIP to get a null space vector, besides requiring an exact MIP solver I don't have, would unfortunately blow up the run times for the program past what could be tolerated. If necessary, I may pass the null space calculations to R, since the QR and SVD methods in R seem a bit more reliable than those in EJML or Apache Commons (although I might be wrong about that).

2. Paul,

can you elaborate a little on the research project that you are working on? Maybe there is some other approach. Where does your floating point data come from? Do you need exact solutions, or are "almost" correct floating point solutions also ok?

3. The research project involves traffic flows in a road network. The floating point data is partly proportions of traffic exiting a given node on various arcs, but there are also a lot of rows with just zeros and ones (cover constraints, more or less). We don't need exact basis vectors for the null space, but we do need the correct number of basis vectors, and "exact" determination of singular v. nonsingular matrices.

4. So if I understand correctly, these "partly proportions" are in principle known as rationals, but only represented as floating point numbers?

That would mean you have the numerator and denominator as integers, which could be given exactly to Mathematica, Maple or similar.

I don't know whether this is a viable approach, but at least one could check some previous results for correctness.

5. My impression is the proportions are estimated from sample data, so I don't know if they are initially rationals or not. As provided, they are floats, so we do not have integer numerators and denominators. We could truncate the decimals and get rational approximations, of course.

For now, I think I'll stick with what we have and keep my fingers crossed. :-)
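As an aside on the rationalization idea: if the proportions ever arrive as decimal text rather than as already-rounded doubles, Java can recover exact rationals from the strings with BigDecimal. A sketch (the class and method names are made up):

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class ExactRational {

    // Convert a decimal literal (as text) to an exact numerator/denominator
    // pair. Parsing the string avoids the binary rounding that is already
    // baked into a double by the time you see it.
    static BigInteger[] toFraction(String decimal) {
        BigDecimal d = new BigDecimal(decimal);
        BigInteger num = d.unscaledValue();
        BigInteger den = BigInteger.TEN.pow(Math.max(d.scale(), 0));
        if (d.scale() < 0) {
            // Negative scale means trailing powers of ten (e.g. "2.9E+2").
            num = num.multiply(BigInteger.TEN.pow(-d.scale()));
        }
        BigInteger g = num.gcd(den);
        return new BigInteger[]{num.divide(g), den.divide(g)};
    }

    public static void main(String[] args) {
        BigInteger[] f = toFraction("0.29");
        System.out.println(f[0] + "/" + f[1]); // prints 29/100
    }
}
```

That only helps, of course, if the decimal text is itself meaningful; proportions estimated from sample data are approximate no matter how you store them.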
