

To Err Is Natural; To Correct, Mathematics

  • The greatest enabler of digitisation today is our ability to convey huge amounts of information across long distances in minuscule time.

Dr Kaneenika Sinha | Jan 29, 2017, 01:50 PM | Updated 01:33 PM IST


Digitisation is a great enabler.


“November Oscar, November Oscar,” my father spoke sharply into his handheld wireless device. As a soldier, he was often posted to insurgency zones. During our occasional visits to him, we found him uttering strange words like ‘Alfa’, ‘Bravo’, ‘Foxtrot’, ‘Tango’, ‘Whiskey’, ‘Yankee’ while corresponding with troops on field duty. For a long time, I thought it was a language for sharing secrets that his family was not supposed to hear. Much later, during a lecture on coding theory, I learnt that he was using these code words not for secrecy, but as substitutes for letters of the alphabet, to convey his message correctly. A for Alfa, B for Bravo, C for Charlie and so on. In bad transmission conditions, it was very likely for ‘NO’ to be heard as ‘GO’. It was, therefore, safer to say “November Oscar” (and more than once, for added effect). There is very little chance of hearing ‘November’ as ‘Golf’. This multiplication of information, so that the receiver can deduce the original message without error, is called redundancy.

The greatest enabler of digitisation today is our ability to convey huge amounts of information across long distances in minuscule time. But, nature poses hurdles in the transmission of messages. These hurdles, technically termed “noisy channels,” cause errors in message transmission. Can we detect such errors? Can we have a mechanism to correct an error before the message reaches the intended receiver?

Mathematics gives us powerful tools to do both. A computer, as we all know, interprets each message as a string of 0’s and 1’s; each such 0 or 1 is called a ‘bit’. Now, imagine that we have to send a string of 0’s and 1’s of length 100 and there is a 1 per cent chance of error (that is, a 0.01 probability of error) in the transmission of a single bit. Seems reasonably safe! But, if we send this message as it is, the probability that it reaches the receiver error-free is 0.99 raised to the power of 100, which is approximately 0.366. So, the chance of our message being transmitted correctly is as low as 37 per cent. You can only imagine how low the chances of correct transmission fall as the length of the message increases!
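Here is a small Python sketch of that calculation (the message lengths are illustrative):

```python
# Probability that a message survives a channel which flips each bit
# independently with probability 0.01.
p_bit_correct = 0.99

print(f"100-bit message arrives intact:  {p_bit_correct ** 100:.3f}")    # about 0.366
print(f"1000-bit message arrives intact: {p_bit_correct ** 1000:.5f}")   # about 0.00004
```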

On the other hand, let’s say we arrange to transmit every bit five times. This is the computer’s way of turning a letter into a word. So, 0 will be sent as 00000 and 1 as 11111. In case the message received is 10011, the receiver will apply the majority rule and decode it as 1. For a bit to be interpreted wrongly, therefore, at least three errors must be made among its five copies. The likelihood of three errors is much smaller than that of one or two errors, and by the principle of redundancy, we hugely improve our chances of sending a correct message. But, this comes at a huge cost: in this method, 80 per cent of the information communicated is redundant. A finer question to ask is how to reduce the possibility of error with minimal redundancy.
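The five-fold repetition scheme and its majority-rule decoder fit in a few lines of Python; the sketch below is purely illustrative, simulating the noisy channel by flipping each bit with probability 0.01:

```python
import random

def encode_repetition(bits, copies=5):
    """Send every bit 'copies' times: 0 becomes 00000, 1 becomes 11111."""
    return [bit for bit in bits for _ in range(copies)]

def decode_repetition(received, copies=5):
    """Apply the majority rule to each block of 'copies' received bits."""
    return [1 if sum(received[i:i + copies]) > copies // 2 else 0
            for i in range(0, len(received), copies)]

def noisy_channel(bits, flip_probability=0.01):
    """Flip each bit independently with the given probability."""
    return [bit ^ (random.random() < flip_probability) for bit in bits]

message = [random.randint(0, 1) for _ in range(100)]
decoded = decode_repetition(noisy_channel(encode_repetition(message)))
mistakes = sum(m != d for m, d in zip(message, decoded))
print(f"Bits decoded incorrectly: {mistakes} out of {len(message)}")  # almost always 0
```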

In 1950, Richard Hamming described a fundamental mathematical technique which could not only detect but also correct errors far more efficiently than saying something five times over. Hamming was part of the Manhattan Project during the Second World War and later joined Bell Labs. Roughly speaking, his idea was to take a string of bits, multiply it by a suitable “matrix” to stretch it into a longer string, and reduce it back to its original length through a technique called nearest-neighbour decoding. Although based on very simple mathematical principles, his work revolutionised the theory of error-correcting codes. Through the 1950s, mathematicians and computer scientists continued to experiment with and improve upon these ideas. In 1971, the Mariner 9 spacecraft used codes built on these ideas to relay photographs of Mars to us. The quality of photographs sent back to us by space missions has improved through the successive use of better codes over the years.
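To give a concrete flavour of this idea, here is a Python sketch of the classic [7,4] Hamming code: four message bits are stretched into seven by multiplying with a generator matrix, and a single flipped bit is located and corrected from the “syndrome”. The particular matrices below are one standard choice, not necessarily the ones Hamming himself wrote down:

```python
import numpy as np

# Generator and parity-check matrices for the [7,4] Hamming code
# (one standard systematic form; several equivalent choices exist).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(message_bits):
    """Stretch 4 message bits into a 7-bit codeword: multiply by G modulo 2."""
    return (np.array(message_bits) @ G) % 2

def decode(received):
    """Locate and correct at most one flipped bit, then read off the message."""
    syndrome = (H @ received) % 2
    if syndrome.any():
        # A single-bit error produces a syndrome equal to the matching column of H.
        position = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        received = received.copy()
        received[position] ^= 1
    return received[:4]  # the first four bits carry the original message

codeword = encode([1, 0, 1, 1])
corrupted = codeword.copy()
corrupted[5] ^= 1            # one bit flipped by the noisy channel
print(decode(corrupted))     # prints [1 0 1 1], the original message
```

Notice the trade-off: only three of the seven transmitted bits are redundant, yet any single error can be corrected.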


A decade after Hamming, two mathematicians at the MIT Lincoln Laboratory, Irving Reed and Gustave Solomon, took the subject much further. Their ideas are based on concepts in abstract algebra, more precisely, the arithmetic of finite fields. A code, in the precise language of their famous 1960 paper, is “a mapping from a vector space of dimension m over a finite field K into a vector space of higher dimension from the same field.”

This definition will not make sense to you unless you have had a university-level course in abstract algebra. They interpret a code in terms of classical algebraic objects called “finite fields” and “polynomials over finite fields”, whose origins go as far back as the early 1800s. When it comes to detecting and correcting large bursts of errors with remarkable efficiency, codes built on this definition score massively over the earlier ones.
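To give a flavour of the idea without the full machinery, here is a toy Python sketch over the prime field of 929 elements: the message symbols become the coefficients of a polynomial, which is evaluated at more points than its degree, so the original can be reconstructed even if some of the transmitted values go missing. The prime and the example message are illustrative choices, and recovering from corrupted (rather than merely lost) symbols needs a genuine Reed-Solomon decoder, which is not shown here:

```python
P = 929  # a prime, so the integers modulo P form a finite field

def encode(message, n):
    """Treat the message symbols as polynomial coefficients and evaluate
    that polynomial at the points 0, 1, ..., n-1 (all arithmetic mod P)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P
            for x in range(n)]

def interpolate(points, k):
    """Recover the k coefficients of the message polynomial from any k
    surviving (x, y) pairs, by Lagrange interpolation over the field."""
    coefficients = [0] * k
    for i, (xi, yi) in enumerate(points[:k]):
        # Basis polynomial equal to 1 at xi and 0 at the other surviving points.
        basis, denominator = [1], 1
        for j, (xj, _) in enumerate(points[:k]):
            if j != i:
                basis = [(b - xj * a) % P              # multiply basis by (x - xj)
                         for a, b in zip(basis + [0], [0] + basis)]
                denominator = denominator * (xi - xj) % P
        scale = yi * pow(denominator, P - 2, P) % P    # inverse via Fermat's little theorem
        coefficients = [(c + scale * b) % P for c, b in zip(coefficients, basis)]
    return coefficients

message = [72, 101, 108, 108, 111]               # five symbols of the field
codeword = encode(message, n=9)                  # four extra symbols of redundancy
survivors = list(enumerate(codeword))[4:]        # suppose the first four values are lost
print(interpolate(survivors, k=len(message)))    # recovers [72, 101, 108, 108, 111]
```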

Today, Reed-Solomon codes have been converted into technology that is used in compact discs and computer hard disk drives. They enable our cable television providers to deliver a vast number of channels to us, which we can now view on high definition television! Reed-Solomon codes and their variants are also the preferred choice for information transmitted by satellites. NASA’s spacecraft Voyager 2, launched in 1977 to study the outer planets, has been operating for the last 40 years and has used these codes to transmit breathtakingly beautiful pictures of Jupiter, Saturn, Uranus and Neptune.

These codes completely blur the distinction between ‘pure’ and ‘applicable’ mathematics.
