A communications revolution from error-correcting code evolution?
As Claude Shannon (then at Bell Labs, later of MIT) showed in 1948, communication methods are bound by bandwidth and noise. Simply put, bandwidth is the total range of frequencies that a given signal may travel upon, or how much you can possibly get through any connection -- and noise is anything that gets in the way of the signal. From bandwidth and noise, Shannon derived a quantity he called the channel capacity: the maximum rate at which information can be sent along that connection with arbitrarily low error -- which has since been referred to as the “Shannon limit”.
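For a channel with additive white Gaussian noise, Shannon's limit takes the well-known Shannon-Hartley form C = B · log2(1 + S/N). A small illustration (the telephone-line numbers below are a textbook example, not from the patent):

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley theorem: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz telephone line with a signal-to-noise ratio of 1000 (30 dB)
# tops out just under 30 kbit/s, no matter how clever the modem:
capacity = channel_capacity(3000, 1000)
print(round(capacity))  # roughly 29,902 bit/s
```

Note that raising the signal power helps only logarithmically, while adding bandwidth helps linearly -- which is why noise, not power, is the usual bottleneck.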
Error-correcting codes (ECC): When noise is present, additional redundant information must be sent to make up for it. It’s like giving the mailman several copies of a book on a rainy day: your recipient has a better chance of piecing together enough dry pages to assemble the complete book. The harder it rains, the more copies you need. The only problem is that the mailman can only carry so many copies before he’s weighed down completely (i.e., not enough bandwidth for all the redundancy) -- and there’s always the chance that one or more pages won’t be dry at the other end (i.e., unrecoverable errors).
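The simplest error-correcting code works exactly like the extra book copies: repeat every bit several times and let the receiver take a majority vote. A minimal sketch:

```python
from collections import Counter

def encode_repetition(bits, copies=3):
    """Send each bit `copies` times -- the extra copies of the book."""
    return [b for bit in bits for b in [bit] * copies]

def decode_repetition(received, copies=3):
    """Majority vote within each group of copies recovers the original bit."""
    return [Counter(received[i:i + copies]).most_common(1)[0][0]
            for i in range(0, len(received), copies)]

message = [1, 0, 1, 1]
sent = encode_repetition(message)   # 12 bits on the wire for a 4-bit message
sent[4] ^= 1                        # one "wet page": a bit flipped by noise
assert decode_repetition(sent) == message
```

The cost is stark -- triple the bandwidth to survive one flipped bit per group -- which is exactly the trade-off more sophisticated codes are designed to beat.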
Checksums and Parity Bits: To make it easier, you send fewer copies, but you number the pages so that your recipient can tell right away if any are missing, and where they belong in the book. To make it even better, you can add to each page number a short note giving the total number of words on that particular page. This is a simplified picture of error-correcting codes embedded within transmitted data: information on how to piece together the redundant parts and decode the message without errors -- decreasing the necessary amount of bandwidth, and getting that much closer to the Shannon limit.
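The "word count on each page" idea is, in its smallest form, a parity bit: one extra bit that makes the count of 1s even, so any single flipped bit is immediately detectable (though not correctable). A minimal sketch:

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(bits_with_parity):
    """Any single flipped bit makes the 1s-count odd -- detectable, not correctable."""
    return sum(bits_with_parity) % 2 == 0

word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
assert parity_ok(word)
word[2] ^= 1                      # flip one bit in transit
assert not parity_ok(word)
```

Real checksums (like the ones guarding TCP/IP packets) generalize this to sums over whole blocks of data, trading a few extra bits for the ability to spot damaged messages and request retransmission.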
Checksums and other ECCs enable, among many other things, magnetic recording, file downloads, satellite transmissions, and the packets on which the TCP/IP Internet protocols are based. These types of error-correcting codes follow relatively simple rules, but the new process described in US 08023570 is far more complex, utilizing pseudorandom or even predetermined nonrandom linear transformations of subcodewords and redundancies of the encoded message -- one major benefit of which is that the rate and noise levels don’t even need to be part of the encoding/decoding equation.
Wornell, Trott and Erez have come up with a new class of encoding methods that can be as ‘big’ or ‘small’ as they need to be to transmit the message -- unlike current methods, which often require bandwidth-wasting information about the noise level beforehand. Instead of repeating redundant information through retransmission, the initial part of the message (the "master codeword") is transmitted, followed by the fewest necessary subcodewords and redundancies in sequence until the message can be fully decoded -- after which any remaining parts of the message need not be transmitted.
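The transmit-until-decodable loop can be caricatured in a few lines. This is a deliberately crude toy model -- majority voting over repeated noisy copies, with the decode check standing in for something like a CRC -- and is NOT the patented scheme; it only shows how a sender with no advance knowledge of the noise level automatically spends more bandwidth on noisier channels and less on clean ones:

```python
import random

def transmit_rateless(message_bits, flip_prob, rng):
    """Toy rateless transmission: keep sending redundancy increments
    until the receiver's running majority vote decodes correctly,
    then stop. No noise estimate is needed up front."""
    votes = [0] * len(message_bits)      # receiver's running tally per bit
    increments = 0
    while True:
        increments += 1
        for i, bit in enumerate(message_bits):
            received = bit ^ (rng.random() < flip_prob)  # noisy channel
            votes[i] += 1 if received else -1
        decoded = [1 if v > 0 else 0 for v in votes]
        if decoded == message_bits:      # stand-in for a real integrity check
            return decoded, increments

rng = random.Random(0)
msg = [1, 0, 1, 1, 0, 0, 1]
decoded, used = transmit_rateless(msg, flip_prob=0.2, rng=rng)
assert decoded == msg   # a noisier channel simply consumes more increments
```

The key property mirrors the patent's stated benefit: the sender never chooses a rate in advance; the channel conditions themselves determine how much redundancy actually gets sent.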
Ultimately, this could mean that a message need not be transmitted in its entirety to be reconstructed, as long as the receiving end is equipped to decode the master codeword. Roughly analogous to digital compression of analog data, the original ‘size’ of the message need not be maintained: not only would near-zero error be within reach, but messages with a correctly decoded master codeword could theoretically take up less bandwidth in encoded form than they would in their raw state.
A more pragmatic short view offers likely potential for the currently evolving WiMAX standard (IEEE 802.16), similar to WiFi but with greater range and higher bandwidth (100 Mbit/s mobile, 1 Gbit/s fixed). Sending separate parts to multiple receivers (or even transmitting distinct layers with multiple antennae) is another of the examples referenced in the patent, opening the possibility of multiplexing and other multiple-channel applications (whether divisions of “time, space, frequency, and/or subchannels”). This directly bears on the structure of CDMA (including GPS and 3G cellular) and OFDM (digital TV and audio broadcasting, DSL, wireless networks and 4G).
Although wireless applications are most often referenced in conjunction with these error-correcting methods, the potential for extending the penetration of wired networks is also considerable -- mitigating many of the signal-to-noise ratio complications of long or otherwise noisy cable runs.
The class of methods is potentially revolutionary for reducing error in, and maximizing bandwidth of, virtually all forms of both wired and wireless communication. Successful applications could mitigate a majority of the problems of low-power, long-distance, poor-quality and/or high-interference signals. The cost of network deployment could be significantly reduced, and the capacity and efficiency of existing networks greatly increased. New forms of data communication could be developed to take specific advantage of the benefits of the methods.
Implementation of the methods could conceivably reveal unforeseen flaws or limitations. Applications could prove to be far more limited in scope than the description suggests. We may find that the methods deliver little practical advantage over existing error-correction methods.