How can we tell if a number is prime?
There are methods to do so, but the larger the number, the harder it becomes to determine its primality.
This problem, which mathematicians struggled with for years, was finally solved through an algorithm devised as part of a Bachelor's degree project.
About 15 years ago, at the beginning of August 2002, Manindra Agrawal, Neeraj Kayal and Nitin Saxena from the Department of Computer Science and Engineering at IIT Kanpur sent out a nine-page paper titled ‘PRIMES is in P’ to some experts in computer science and mathematics. The title of their paper indicated that they had obtained a "polynomial time algorithm" to test if a number is prime. This work came out of an undergraduate research project undertaken by Kayal and Saxena under the supervision of Agrawal. Within a matter of hours, their paper generated excitement all over the world and was widely circulated.
Prime numbers, as we studied in high school, are unbreakable. Any natural number greater than one that is divisible by no number other than one and itself is called a prime number. On the other hand, composite numbers, that is, those numbers which are not prime, can be broken down into a product of smaller prime numbers. Agrawal, Kayal and Saxena, and many before them, were motivated by the following fundamental question: Is there a nice way to recognise a prime number when we see one? More precisely, is there an efficient method to determine whether a given number ‘n’ is prime? This problem, intellectually challenging for its own sake, also has ramifications for secure data transmission.
Going back to basics, the first method that comes to mind to determine primality is, of course, trial division. For example, if we want to establish that n = 11 is prime, we simply attempt division by all numbers between 2 and 10, and when none of them divides 11, we declare 11 a prime. But in this method, the number of steps needed is roughly the size of n itself.
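Trial division can be sketched in a few lines of Python (the function name is ours, chosen for illustration):

```python
def is_prime_trial(n: int) -> bool:
    """Naive trial division: try every candidate divisor from 2 to n - 1."""
    if n < 2:
        return False  # 0 and 1 are not prime by definition
    for d in range(2, n):
        if n % d == 0:
            return False  # found a divisor, so n is composite
    return True  # no divisor found: n is prime
```

The loop runs up to n - 2 times, which is exactly the "number of steps almost the size of n" mentioned above.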
Sometime in the third century BCE, the Greek scholar Eratosthenes had a clever idea. If n is not a prime number, then it must have a factor not bigger than its square root. Therefore, to test the primality of n, it suffices to check divisibility by all primes less than or equal to the square root of n. Take, for example, n = 101. We just need to check if n is divisible by the primes up to 10, that is, 2, 3, 5 and 7. This reduces the number of steps required to check the primality of 101 from 99 to 4. That's a good improvement, no doubt, but as the values of n become larger and larger, this method also becomes infeasible.
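The square-root trick is a one-line change to the sketch above (again an illustrative function, not anyone's reference implementation; for simplicity it tries all numbers up to the square root, not only the primes):

```python
import math

def is_prime_sqrt(n: int) -> bool:
    """Trial division, but only up to the integer square root of n."""
    if n < 2:
        return False
    # If n = a * b with a <= b, then a <= sqrt(n), so checking
    # divisors up to isqrt(n) is enough to find a factor.
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```

For n = 101 this loop tries only the numbers 2 through 10, in line with the count given above.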
From the point of view of computer science, for an input n, any method which requires the number of steps to be a positive power of n, however small that power is (even as small as 1/11, for example), will soon become inefficient. If n has 200 digits, for example, (and we do require it to be this large and more for various purposes), by the time our current computers determine primality using the method of Eratosthenes, we humans would have evolved into another species!
We would much prefer a method which determines the primality of a number right here, right now (or at least before it is too late). In order to stay within manageable time frames, we would like to have an algorithm in which the number of steps (or more precisely, the number of "bit operations" in a computer) is about (log n)^c, where ‘c’ is a positive, absolute constant. Such an algorithm is called a polynomial-time algorithm. An algorithm which performs n^c steps, on the other hand, is an exponential-time algorithm.
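The gulf between the two step counts can be seen with a quick back-of-the-envelope computation (the constants c = 3 and c = 1/2 below are illustrative choices, not from any particular algorithm):

```python
import math

def steps_poly(n: int, c: int = 3) -> float:
    """Polynomial-time cost model: about (log2 n)^c bit operations."""
    return math.log2(n) ** c

def steps_exp(n: int, c: float = 0.5) -> float:
    """Exponential-time cost model: n^c steps, even for a small power c."""
    return n ** c

n = 10 ** 200  # a 201-digit number
# (log2 n)^3 is a few hundred million steps -- fast on modern hardware.
# n^(1/2) is 10^100 steps -- hopeless for any conceivable computer.
print(f"polynomial: {steps_poly(n):.2e}, exponential: {steps_exp(n):.2e}")
```

Even with the tiny exponent 1/2, the exponential-time count for a 200-digit input dwarfs the polynomial-time count by nearly a hundred orders of magnitude.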
Primality-testing algorithms come in various types. They could be deterministic; that is, they return a definite and correct yes or no at the end of a finite number of steps. Alternatively, they could be probabilistic. Such algorithms tell us that n is probably prime, with the probability of error vanishingly small, even for very large n. One such algorithm, obtained by G L Miller and M G Rabin in 1976, has found practical applications in cryptography and e-commerce.
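A common textbook form of the Miller-Rabin test can be sketched as follows; this is our own simplified rendering for illustration, not the authors' original formulation. Each random base ‘a’ either proves n composite or leaves it "probably prime", and every extra round shrinks the error probability by a factor of at least 4:

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Probabilistic primality test: True means 'probably prime'."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):  # handle small cases directly
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue  # this base reveals nothing
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a is a witness: n is composite
    return True  # survived every round: almost certainly prime
```

A prime n always returns True; a composite n slips through all 20 rounds with probability at most (1/4)^20, which is far below one in a trillion.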
In 1983, Leonard Adleman (the "A" behind the RSA public key cryptosystem used widely for secure data transmission), Carl Pomerance and Robert Rumely developed a deterministic algorithm for primality testing. Although not exactly polynomial-time, the number of steps their algorithm requires is about, hold your breath,
(log n)^(c log log log n).
Here, the power of log n is also a function of n and not an absolute constant as we need, but this function grows slowly enough for us to determine primality of very large numbers in manageable time. For example, it takes up to an hour to determine if a 1000-digit number is prime.
It was only in 2002 that a polynomial-time algorithm was found for primality testing. This was the accomplishment of Agrawal, Kayal and Saxena. They found the very first deterministic and unconditional algorithm that certifies whether a number n is prime in approximately (log n)^12 steps, and therefore in polynomial time. By unconditional, we mean that their method does not rely upon any unproven conjectures.
Their work was exciting for many reasons: it solved a long-standing open problem, and it used easy-to-understand and elegant arguments. It was described as beautiful and ingenious by people who had struggled with this problem for several decades. A problem which had been approached by experts in many different ways, using heavy-duty mathematics arising from cyclotomic fields and elliptic curves, was solved in a Bachelor's degree project through a relatively elementary approach. Their approach cleverly builds upon ideas going back to the French mathematician Pierre de Fermat in the seventeenth century, in a way that had not been considered for algorithmic purposes before. Their paper appeared in the prestigious journal ‘Annals of Mathematics’ and was awarded the Gödel Prize and the Fulkerson Prize in 2006. Interested readers who have had an undergraduate course in basic algebra and number theory will be able to read and understand this paper.
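The starting point of the AKS approach is a classical fact in the spirit of Fermat: n > 1 is prime exactly when (x + 1)^n ≡ x^n + 1 (mod n) as polynomials, that is, when every binomial coefficient C(n, k) for 0 < k < n is divisible by n. The sketch below checks this characterisation directly; it is only an illustration of the underlying congruence, and is itself exponential-time. The full AKS algorithm achieves polynomial time by checking the congruence modulo a small auxiliary polynomial, which we do not attempt here:

```python
from math import comb

def prime_by_binomials(n: int) -> bool:
    """n > 1 is prime iff (x+1)^n = x^n + 1 (mod n) as polynomials,
    i.e. iff n divides C(n, k) for every k with 0 < k < n."""
    if n < 2:
        return False
    return all(comb(n, k) % n == 0 for k in range(1, n))
```

For n = 7, the coefficients C(7, 1), ..., C(7, 6) are 7, 21, 35, 35, 21, 7, all divisible by 7; for n = 6, C(6, 2) = 15 is not divisible by 6, exposing 6 as composite.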
As Folkmar Bornemann writes aptly in his expository article on the work of Agrawal, Kayal and Saxena, “PRIMES is in P: A Breakthrough for ‘Everyman’” (which appeared in Notices of the American Mathematical Society in May 2003),
In his speech at the 1998 Berlin International Congress of Mathematicians, Hans Magnus Enzensberger took the position that mathematics is in the midst of a golden age due to successes of a quality that he saw neither in theater nor in sports. ... some of these successes have many mathematicians pondering the gulf between the priesthood and the laity within mathematics. ... So it is that each one adds bricks to his parapet in the Tower of Babel named Mathematics and deems his constructions there to be fundamental. Rarely is there such a success as at the beginning of August (2002): a foundation stone for the tower that “Everyman” can understand.