Twelve Days 2013: Reed–Solomon Codes
The Twelve Days of Christmas…err…Algorithms
Ok, so the “real” twelve days of Christmas starts on the 25th of December, but I like the idea of starting on the 12th. Starting today and ending Christmas Eve I’ll write about a different algorithm or data structure each day. These posts will be quick reads (except for this kickoff one) and range in complexity but they’ll all follow three simple rules:
- I’m only writing about things that I’ve used myself and that I hope are general enough to benefit others.
- If you don’t want commentary, every post will start with a TL/DR explaining the use with links to resources/implementations when applicable.
- They’ve gotta have a cool factor.
And with that, let’s start day one…
Day One: Reed–Solomon Codes
TL/DR
Reed–Solomon codes are a fast, super useful form of error correction that’s easily implemented in software. If you need to transmit data over a noisy channel (think UDP over WAN) or if you’re writing a file system/highly reliable data store, they’re your friend, especially for latency critical applications. There’s a high quality C implementation in QFS (Quantcast File System), based on the original implementation in KFS.
A Brief Introduction to Error Correction
Without error correcting codes, storing data on a hard drive, making a phone call, listening to music, or watching a DVD would all fall somewhere on the spectrum of “impossible” to “much less efficient.” Error correction is a beautiful application of pure math to real world problems. There’s a huge range of error correction/detection schemes, but they come in two broad categories: backward error correction, in which an error is detected and the data is re-requested, and forward error correction, in which data is sent with enough redundancy to detect an error and correct for it.
Fundamental limits dictate the maximum efficiency of any error correction scheme. Within those limits, different schemes have their pros, cons and common use cases. One important consideration is the ease and efficiency of an implementation in hardware or software. Error correction is frequently applied in real time (think cellphone conversations or reading from a Blu-ray disk), so speed is often a primary concern on the encoding end or the decoding end, and sometimes both.
Reed–Solomon codes are a powerful and early example of error correction. They’re still in widespread use as they perform well and are relatively fast even when implemented without special hardware. Modern alternatives such as Turbo Codes are closer to the Shannon Limit but their decoding process is complex and higher latency. For Reed–Solomon, encoding is a simple linear operation, and decoding can be made efficient.
The first time I saw Reed–Solomon codes the concept went right over my head. They were presented using way more abstract algebra than I understood at the time, and I missed the fact that they rely on pretty simple, elegant concepts. Yes, you do need to know about Galois field theory and abstract algebra to prove results on the general case of Reed–Solomon, or to develop an efficient implementation. However, appreciating the beauty of how it works or experimenting with a special case requires only algebra and some faith in the underlying math. What follows is a toy example. Real implementations take a different approach mathematically for performance reasons. Standard CPUs weren’t built to do efficient math on finite fields (although such operations often are implemented in dedicated hardware).
The Intuition
Let’s forget about codes for a second and think back to geometry. A line is uniquely defined by two points. Giving someone a third point picked from that line doesn’t convey any additional information, or change how you would draw that line. Now imagine that you want to tell your friend Sally two numbers over a bad phone connection. One approach is to ask her to draw a line passing through a series of points, and tell her that the slope and the intercept of that line are the two numbers that you want her to have. If you state three collinear points and Sally doesn’t hear one pair—not a problem, she can still draw a line identical to yours and recover the slope and intercept. Likewise, if she goes to draw the line and finds that the three points don’t line up, she’ll know that something went wrong.
When we describe our line we can add as much redundancy as we want by including extra points in our description. As long as our final message arrives with at least two of them, we’re good to go. However, what happens if instead of not hearing us at all, our friend hears us incorrectly? Now we have a new problem—determining which points are garbage.
Sally knows what points are permissible—the two of you agreed on some rules ahead of time. Maybe you decided that you’ll never tell her a negative coordinate, or that the x value of every coordinate will always be a power of two. If she hears a point that’s not valid, or if the points don’t line up, she’ll know that there’s a problem. She might ask you to repeat yourself: backward error correction. However, she doesn’t want to waste your time, so if she has enough information to sort things out on her end she’ll do it: forward error correction. Note that detecting an error in this scheme or correcting for it isn’t always possible. There’s always a chance that she hears all of the points incorrectly, but that they meet the agreed upon restrictions, and they still form a line. Sally won’t know that there was a problem in this case.
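To make this concrete, here’s a tiny R sketch of the idea (the variable names are just for illustration, not part of any real scheme): we encode two numbers as the slope and intercept of a line, describe the line with three points, lose one of them, and still recover both numbers.

# Encode two numbers as a line, send redundant points, recover from any two survivors.
slope <- 3; intercept <- 5            # the two numbers we want Sally to have
xs <- c(1, 2, 4)                      # agreed-upon x values
points <- cbind(x = xs, y = slope * xs + intercept)

# Suppose the second point never arrives; any two remaining points still pin down the line.
survivors <- points[c(1, 3), ]
recovered.slope <- diff(survivors[, "y"]) / diff(survivors[, "x"])
recovered.intercept <- survivors[1, "y"] - recovered.slope * survivors[1, "x"]
c(recovered.slope, recovered.intercept)  # 3, 5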
Up until now it’s not clear why you wouldn’t just tell Sally the same set of coordinates over and over again instead of picking a unique third one. You certainly could, but if two things sound or look alike they’re easier to confuse—try sorting out whether someone is saying “Nancy” or “Mancy” over a bad radio connection. Similarly, we want coordinates that are as visually and audibly distinct as possible, so sampling unique points that are far apart makes sense.
Let’s start calling our coordinates codewords, and note that when we write them in binary the “distance” between them is called Hamming distance. Hamming distance measures the number of flipped bits required to turn one codeword into another. Codewords that are far apart are desirable, as when they get garbled they’re easier to tell apart. For example, if our codewords were 1111 and 0000 and we receive 1011, it’s much more plausible that one bit in 1111 flipped to produce 1011 than three in 0000. If we had a little extra information we could say for sure.
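As a quick illustration in R (the hamming helper is just for this example), here’s the distance from that received word to each of the two codewords; nearest-codeword decoding picks 1111:

# Hamming distance between two bit vectors: count the positions that differ.
hamming <- function(a, b) sum(a != b)

codewords <- list(c(1, 1, 1, 1), c(0, 0, 0, 0))
received  <- c(1, 0, 1, 1)

# Distance 1 to 1111 versus distance 3 to 0000.
sapply(codewords, hamming, b = received)  # 1 3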
The line is a good start, but can we do better? We want something that:
- Is a unique, one-to-one mapping, meaning that we can uniquely recover any message from its code.
- Lets us use as many codewords as we want.
- Provides a consistency check (in the case of our line, the fact that valid points fall on the line is a consistency check).
What about a polynomial? A line is a polynomial of degree one, and just as a line can fit any two data points, a polynomial of degree $n$ can fit any $n + 1$ data points. Is it unique? Yup, there’s a simple proof showing that a polynomial $p(x)$ of degree $n$ passing through $n + 1$ data points is unique—no other polynomial of degree $n$ can pass through all of those same points unless it’s identical to $p(x)$. Can we use it to construct a one-to-one mapping scheme? Not necessarily, but maybe we can with some restrictions. What about the codewords? Well, we’re free to pick the coefficients of our polynomial, so we can use those as codewords, just as we did slope and intercept on a line. What about the consistency check? By the uniqueness property above, oversampling from a polynomial is no different from oversampling from a line—there’s only one possible fit, so given a fixed amount of redundancy we’ll be able to detect a fixed number of errors.
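Here’s a quick R sketch of that uniqueness/oversampling property (plain real-valued arithmetic, not the modular version used later; the fit helper is just for illustration). A degree-two polynomial is pinned down by any three of its samples, so extra samples are pure redundancy:

coefs <- c(2, 3, 4)                      # p(x) = 2 + 3x + 4x^2
xs <- 0:4                                # oversample at five points
ys <- coefs[1] + coefs[2] * xs + coefs[3] * xs^2

# Fit a degree-2 polynomial through any three of the samples via a Vandermonde solve.
fit <- function(idx) solve(outer(xs[idx], 0:2, `^`), ys[idx])
fit(c(1, 2, 3))   # 2 3 4
fit(c(2, 4, 5))   # 2 3 4 -- any three samples recover the same coefficients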
Reed–Solomon codes are, in a certain light, a formula for how to sample from a polynomial and use those samples to interpolate the original polynomial. The encoding procedure is a simple linear operation but there are lots of ways to do the decoding, and that’s where things get confusing/mathy. Following the Reed-Solomon methodology gives you provable bounds on the maximum number of errors that can be detected or automatically repaired. As noted previously, and as with any coding scheme, there’s always a probability of failure in which we 1) detect more damage than we can recover, or 2) end up with something that’s internally consistent but incorrect. It’s impossible to eliminate this probability, but we can make it arbitrarily small depending on how much duplication we’re willing to tolerate (more duplication = less information density).
The procedure that follows is, at the end of the day, a glorified means of interpolating a family of polynomials. We can tolerate a certain number of errors and still pick the correct polynomial out of the family: assuming there are at most the threshold number of errors, consistency arguments across different combinations of points rule out all curves but one, providing a unique solution. The restrictions placed on the codewords used, and the fact that the scheme needs some strange arithmetic (adding and inverting numbers modulo a prime number), aren’t germane to the bigger picture. They’re vestiges of our need to provide provable guarantees on the codes, make them one-to-one, and make decoding efficient.
The Procedure
What follows is a rote demonstration of Reed–Solomon from scratch. If you’re interested in the field theory aspect I included a very brief “why fields” motivator below the fold. These are some notes on practical implementation, and here’s the Quantcast one. The math (as is) only works if we restrict ourselves to integer codewords $0 \le c < p$, where $p$ is prime. I’m using $p = 7$ (which is why everything is evaluated modulo 7) but you’re free to use a larger prime and adjust accordingly.
A few definitions:
For a sequence $r = (r_0, r_1, \dots, r_{n-1})$, define the first difference as $(\Delta r)_i = r_{i+1} - r_i$. The second difference is the first difference of the first difference, and so on. Define $r^{(j)}$ as the sequence $r^{(j)}_i = i^j r_i$. Let $B$ be an $(n - k - t) \times (t + 1)$ matrix whose elements are defined by:

$$B_{ij} = \left(\Delta^{k+t}\, r^{(j)}\right)_i$$
We’ll take $k = 3$ (three symbols in our message), $n = 7$ (seven symbols in our encoded, redundant message), and note that by our formula $t = \lfloor (n - k)/2 \rfloor$ we can correct up to $t = 2$ errors. For a message we’ll use $m = (2, 3, 4)$, which in turn yields the polynomial $p(x) = 2 + 3x + 4x^2$. We evaluate it at the points $x = 0, 1, \dots, 6$ because $n = 7$, so we need to sample seven points:
m.vec <- c(2, 3, 4)              # the message symbols
r.vec <- c(2, 2, 6, 5, 3, 5, 3)  # the vector we'll "receive" later; it contains two errors
# Evaluate a polynomial via Horner's method; c holds the coefficients in
# descending order (c[1] is the highest-degree term), and x may be a vector.
polyval <- function(c, x) {
  n <- length(c)
  y <- x * 0 + c[1]
  for (i in 2:n) {
    y <- c[i] + x * y
  }
  return(y)
}
p.vec <- polyval(rev(m.vec), 0:6) %% 7
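For reference, printing the encoded vector at this point should show the seven samples reduced mod 7:

p.vec
# [1] 2 2 3 5 1 5 3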
We now have that $p = (2, 2, 3, 5, 1, 5, 3)$. Suppose that we receive $r = (2, 2, 6, 5, 3, 5, 3)$, which contains two errors. We compute a matrix $B$, as defined above.
# Compute the nth successive difference of a sequence x. The inner helper takes
# one first difference; applying it n times gives the nth difference
# (n = 0 returns x unchanged).
delta <- function(x, n) {
  first.diff <- function(x) {
    len <- length(x)
    if (len > 1) {
      return(x[2:len] - x[1:(len - 1)])
    } else {
      return(0)
    }
  }
  for (i in seq_len(n)) {   # seq_len(0) is empty, so the 0th difference is x itself
    x <- first.diff(x)
  }
  return(x)
}
# The sequence r^(j): multiply each element r_i by i^j.
rj.vec <- function(r, j) {
  len <- length(r)
  (0:(len - 1))^j * r
}
a0 <- rj.vec(r.vec, 0)
a1 <- rj.vec(r.vec, 1)
a2 <- rj.vec(r.vec, 2)
b <- cbind(delta(a0, 5), delta(a1, 5), delta(a2, 5)) %% 7  # (k + t) = 5th differences
Thus:

$$B = \begin{pmatrix} 2 & 5 & 0 \\ 0 & 5 & 2 \end{pmatrix}$$
We need to solve a system of linear equations (modulo 7) with $B$ as our coefficient matrix:

$$B\,e \equiv 0 \pmod{7}$$
This one is easy as $e = (1, 1, 1)$ works, but in general solving a system of linear equations modulo some integer is pretty annoying, and only meaningful if said integer is prime. As much fun as working through that would be…let’s use Wolfram Alpha’s solver instead. The solution that we want is the minimal non-zero one on the second line of output: $e = (1, 1, 1)$.
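If you’d rather stay in R, the system is small enough to brute force: enumerate every candidate vector modulo 7 and keep the non-zero ones that $B$ sends to zero. This is a sketch only—real decoders solve the key equation with far better algorithms.

# Try every vector e in {0,...,6}^3 and keep the non-zero solutions of B e = 0 (mod 7).
candidates <- as.matrix(expand.grid(e0 = 0:6, e1 = 0:6, e2 = 0:6))
solves <- apply(candidates, 1, function(e) all((b %*% e) %% 7 == 0))
nonzero <- rowSums(candidates) > 0
candidates[solves & nonzero, ][1, ]  # smallest non-zero solution: 1 1 1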
Reed–Solomon hinges on an equation called the key equation, a theorem stating that there exist polynomials $E(x)$ and $Q(x)$ such that $Q(i) \equiv r_i\,E(i) \pmod{7}$ at every sample point $i$, with $Q(x) = p(x)\,E(x)$. There’s a proof guaranteeing that it has non-zero solutions, and the system that we just solved gave us the coefficients of $E(x)$, so we’re half way there. There’s another result that allows us to express a polynomial as a series of successive differences (see the paper that I took this example from if you’re interested). The end result is that, using $\Delta$ and $Q_i = E(i)\,r_i$ as we previously defined them, we can write any polynomial of degree $d$ as:

$$Q(x) = \sum_{j=0}^{d} \left(\Delta^j Q\right)_0 \binom{x}{j}$$
We’ll use this to reconstruct $Q(x)$. To do so we need the first element of the $j$th difference of the sequence $Q_i = E(i)\,r_i$ for $j = 0, 1, \dots, 4$ (since $Q$ has degree $k + t - 1 = 4$).
e.coef <- c(1, 1, 1)                           # E(x) = x^2 + x + 1
qi.seq <- (polyval(e.coef, 0:6) * r.vec) %% 7  # Q_i = E(i) * r_i (mod 7)
# First element of the jth difference of Q_i, for j = 0 through 4.
diff.vec <- sapply(0:4, function(j) delta(qi.seq, j)[1]) %% 7
Which gives us diff.vec = (2, 4, 4, 4, 5). We can now write:

$$Q(x) = 2 + 4x + 4\,\frac{x(x-1)}{2!} + 4\,\frac{x(x-1)(x-2)}{3!} + 5\,\frac{x(x-1)(x-2)(x-3)}{4!}$$
Those polynomials have a factor of $\frac{1}{j!}$ in them, which would lead you to believe that they’re fractional. However, because we’re doing our arithmetic modulo 7 we’re actually dealing with a modular multiplicative inverse. We can expand the polynomial above as we would any polynomial and work out the congruence on each coefficient by hand (for $\frac{1}{2}$ that’s tantamount to solving the congruence $2x \equiv 1 \pmod{7}$). Or we can take the lazy approach again: PolynomialMod
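For completeness, those modular inverses can be found by brute force in R—fine for $p = 7$, though a real implementation would use precomputed tables or the extended Euclidean algorithm. The mod.inverse helper below is just for this example:

# Find x such that a * x = 1 (mod p) by trying every non-zero residue.
mod.inverse <- function(a, p) which((a * (1:(p - 1))) %% p == 1)
mod.inverse(2, 7)        # 4, i.e. 1/2 = 4 (mod 7)
mod.inverse(6, 7)        # 6, i.e. 1/3! = 6 (mod 7)
mod.inverse(24 %% 7, 7)  # 5, i.e. 1/4! = 5 (mod 7)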
Doing so gives us $Q(x) = 4x^4 + 2x^2 + 5x + 2$. If we ask Wolfram to factor this for us, mod 7, we get $4(x + 3)(x + 5)(x^2 + 6x + 4)$. Doing the same for $E(x)$ gives $(x + 3)(x + 5)$. As we can see, $Q(x)$ is the product of $E(x)$ and something else. That something else is $4(x^2 + 6x + 4)$, which (expanding modulo 7) is $4x^2 + 3x + 2$. Look familiar? That’s our original message, derived from the damaged one that we received.
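For the curious, that final division step can also be done directly in R. The sketch below (the poly.div.mod helper is illustrative and leans on $E(x)$ being monic, so no modular inverses are needed) divides $Q(x)$ by $E(x)$ modulo 7 and recovers the message polynomial:

# Divide q(x) by a monic e(x) modulo p; coefficients are in descending order.
poly.div.mod <- function(q, e, p) {
  quot <- c()
  while (length(q) >= length(e)) {
    coef <- q[1]                               # leading coefficient of the current remainder
    quot <- c(quot, coef)
    q <- (q - c(coef * e, rep(0, length(q) - length(e)))) %% p
    q <- q[-1]                                 # drop the eliminated leading term
  }
  quot
}

q.coef <- c(4, 0, 2, 5, 2)       # Q(x) = 4x^4 + 2x^2 + 5x + 2 from the expansion above
e.poly <- c(1, 1, 1)             # E(x) = x^2 + x + 1
poly.div.mod(q.coef, e.poly, 7)  # 4 3 2, i.e. 4x^2 + 3x + 2 -- the message (2, 3, 4)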
Having walked through this it’s easy to see why this wouldn’t lend itself well to practical use as is. We needed a computational algebra system to do this, and factoring polynomials over finite fields as we did to see our result isn’t fun. As such, practical implementations take a different view of the math and won’t touch any algebra directly.
May your 2014 be slightly less error prone.
Optional Deeper Dive
Polynomials live in a function space, and $\{1, x, x^2, x^3, \dots\}$ forms its basis. A basis is the smallest linearly independent set of vectors (meaning that no one vector can be produced by any combination of the others) needed to represent any other point in a vector space. Just as $\{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$ forms a basis over a three dimensional space defined in terms of $(x, y, z)$ coordinates, the basis above can represent any polynomial as a linear combination of basis vectors.
Polynomials can, generally speaking, have any type of coefficient. However, the type of polynomial that we were using had integer coefficients less than a prime number $p$: in our case $p = 7$. More formally:

$$p(x) = c_0 + c_1 x + \dots + c_n x^n, \quad c_i \in \{0, 1, \dots, p - 1\}$$
These integers form a field, and we get some neat properties because of this. Our codewords are the coefficients of these polynomials—elements of the field. We chose the field that we did for our coefficients out of convenience, but in practice fields with $2^8 = 256$ elements are used so that every byte has a representation as a single codeword. Fields can be finite or infinite, and in this case, it’s finite. Such fields are called Galois fields. Galois theory makes a lot of modern day electronics possible.
Fields are defined axiomatically, and one of the axioms, closure, means that the sum or product of any two elements in a field is still in that field. For vector spaces over the reals, the intuition is that you can never add or multiply two real numbers and get something that’s not in the reals. The intuition is somewhat different for finite fields. For example, the smallest possible field has two elements, and is called GF(2). It contains the elements 0 and 1, and operations on elements in the field are carried out modulo 2. We can see that it’s closed under addition by checking all combinations: $0 + 0 = 0$, $0 + 1 = 1$, $1 + 0 = 1$, and $1 + 1 = 0$. If we didn’t carry out operations modulo 2 we would have $1 + 1 = 2$, which is not in the field. It’s easy to check that the other field axioms hold for GF(2), and we can do the same for the field $\{0, 1, \dots, 6\}$ that we just defined above.
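A quick sanity check of that closure property in R:

# Addition and multiplication tables for GF(2): every result stays in {0, 1}.
elements <- 0:1
outer(elements, elements, `+`) %% 2   # addition mod 2
outer(elements, elements, `*`) %% 2   # multiplication mod 2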
Field extensions of GF(2) such as GF($2^8$) admit a nice representation of codewords as bytes. There’s a powerful result guaranteeing that a finite field with $p^n$ elements exists for every prime $p$ and every positive integer $n$. Working with fields makes it a lot easier to reason about certain types of algorithms, and as such they play an important role in both coding theory and cryptography.
In short, why do we care? Our ability to manipulate polynomials as we did and treat them as we would treat ordinary numbers, carrying out operations such as modular arithmetic and long division, hinges on these facts. Generalized Reed–Solomon actually takes this a step further and uses codes over rings of matrices (our coefficients are over a field, so certain sets of polynomials will form a polynomial ring). Aside from that, working with finite fields is required to prove an identity that gives the weight of a Reed–Solomon code, and by extension the maximum number of errors that it can detect/correct. Last but not least, doing all of this in an efficient manner (and not like we did above) requires lots of tricks grounded in abstract algebra and finite field theory.