The AKS primality test (also known as the Agrawal–Kayal–Saxena primality test and the cyclotomic AKS test) is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in an article titled "PRIMES is in P".^{[1]} The algorithm was the first that can determine in polynomial time whether a given number is prime or composite, without relying on mathematical conjectures such as the generalized Riemann hypothesis. The proof is also notable for not relying on the field of analysis.^{[2]} In 2006 the authors received both the Gödel Prize and the Fulkerson Prize for their work.
AKS is the first primality-proving algorithm to be simultaneously general, polynomial-time, deterministic, and unconditionally correct. Previous algorithms had been developed over centuries and achieved at most three of these properties, but not all four.
While the algorithm is of immense theoretical importance, it is not used in practice, rendering it a galactic algorithm. For 64-bit inputs, the Baillie–PSW test is deterministic and runs many orders of magnitude faster. For larger inputs, the performance of the (also unconditionally correct) ECPP and APR tests is far superior to AKS. Additionally, ECPP can output a primality certificate that allows independent and rapid verification of the results, which is not possible with the AKS algorithm.
The AKS primality test is based upon the following theorem: Given an integer n ≥ 2 and an integer a coprime to n, n is prime if and only if the polynomial congruence relation

(X + a)^n ≡ X^n + a (mod n)    (1)

holds within the polynomial ring (ℤ/nℤ)[X].^{[1]} Note that X denotes the indeterminate which generates this polynomial ring.
This theorem is a generalization to polynomials of Fermat's little theorem. In one direction it can easily be proven using the binomial theorem together with the following property of the binomial coefficient: C(n, k) ≡ 0 (mod n) for all 0 < k < n if and only if n is prime.
While the relation (1) constitutes a primality test in itself, verifying it takes exponential time: the brute-force approach would require the expansion of the polynomial (X + a)^n and a reduction (mod n) of the resulting n + 1 coefficients.
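As an illustration, for a = 1 the brute-force test amounts to checking that every middle binomial coefficient of (X + 1)^n vanishes modulo n. A minimal sketch (the function name is ours; this is exactly the exponential-time approach the text describes, usable only for small n):

```python
from math import comb

def is_prime_via_congruence(n: int) -> bool:
    """Check congruence (1) directly with a = 1: (X + 1)^n ≡ X^n + 1 (mod n).

    Expanding (X + 1)^n by the binomial theorem, the leading and constant
    coefficients always match, so the congruence holds exactly when every
    middle coefficient C(n, k) is divisible by n. This takes time
    exponential in the number of digits of n.
    """
    if n < 2:
        return False
    return all(comb(n, k) % n == 0 for k in range(1, n))

# Sanity check against naive trial division on a small range.
naive = lambda m: m > 1 and all(m % d for d in range(2, m))
assert all(is_prime_via_congruence(m) == naive(m) for m in range(2, 200))
```

Note that even the Fermat pseudoprimes (such as 341) are correctly identified as composite here, since the polynomial congruence, unlike Fermat's little theorem on integers, is an exact characterization of primality.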
The congruence is an equality in the polynomial ring (ℤ/nℤ)[X]. Evaluating in a quotient ring of (ℤ/nℤ)[X] creates an upper bound for the degree of the polynomials involved. AKS evaluates the equality in (ℤ/nℤ)[X]/(X^r − 1), making the computational complexity dependent on the size of r. For clarity,^{[1]} this is expressed as the congruence

(X + a)^n ≡ X^n + a (mod X^r − 1, n)    (2)

which is the same as

(X + a)^n − (X^n + a) = (X^r − 1)·g(X) + n·f(X)    (3)

for some polynomials f(X) and g(X).
Note that all primes satisfy this relation (choosing g = 0 in (3) gives (1), which holds for n prime). This congruence can be checked in polynomial time when r is polynomial in the number of digits of n. The AKS algorithm evaluates this congruence for a large set of values of a, whose size is polynomial in the number of digits of n. The proof of validity of the AKS algorithm shows that one can find an r and a set of values of a with the above properties such that if the congruences hold, then n is a power of a prime.^{[1]}
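Checking congruence (2) for one value of a can be sketched by square-and-multiply exponentiation on coefficient lists, reducing exponents modulo r and coefficients modulo n at every step (function names are ours; no fast polynomial arithmetic is attempted):

```python
def polymul_mod(p, q, r, n):
    """Multiply two polynomials given as length-r coefficient lists,
    reducing exponents mod r (i.e. mod X^r - 1) and coefficients mod n."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def congruence_holds(a, n, r):
    """Check (X + a)^n ≡ X^n + a (mod X^r - 1, n), i.e. congruence (2)."""
    # Left side: raise (X + a) to the n-th power by repeated squaring.
    base = [0] * r
    base[0] = a % n
    base[1 % r] = (base[1 % r] + 1) % n
    acc = [0] * r
    acc[0] = 1
    e = n
    while e:
        if e & 1:
            acc = polymul_mod(acc, base, r, n)
        base = polymul_mod(base, base, r, n)
        e >>= 1
    # Right side: X^n + a, with the exponent already reduced mod r.
    rhs = [0] * r
    rhs[0] = a % n
    rhs[n % r] = (rhs[n % r] + 1) % n
    return acc == rhs
```

Each intermediate polynomial has at most r coefficients, each smaller than n, which is what keeps the check polynomial in the number of digits of n when r itself is. For the prime n = 31 with r = 29 (the values of the worked example below), the congruence holds for every a tested.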
In the first version of the above-cited paper, the authors proved the asymptotic time complexity of the algorithm to be Õ((log n)^12) (using Õ from big O notation), i.e., the twelfth power of the number of digits in n times a factor that is polylogarithmic in the number of digits. However, this upper bound was rather loose; a widely held conjecture about the distribution of the Sophie Germain primes would, if true, immediately cut the worst case down to Õ((log n)^6).
In the months following the discovery, new variants appeared (Lenstra 2002, Pomerance 2002, Berrizbeitia 2002, Cheng 2003, Bernstein 2003a/b, Lenstra and Pomerance 2003) that greatly improved the speed of computation. Owing to the existence of the many variants, Crandall and Papadopoulos refer to the "AKS-class" of algorithms in their scientific paper "On the implementation of AKS-class primality tests", published in March 2003.
In response to some of these variants, and to other feedback, the paper "PRIMES is in P" was updated with a new formulation of the AKS algorithm and of its proof of correctness. (This version was eventually published in Annals of Mathematics.) While the basic idea remained the same, r was chosen in a new manner, and the proof of correctness was more coherently organized. The new proof relied almost exclusively on the behavior of cyclotomic polynomials over finite fields. The new upper bound on time complexity was Õ((log n)^10.5), later reduced using additional results from sieve theory to Õ((log n)^7.5).
In 2005, Pomerance and Lenstra demonstrated a variant of AKS that runs in Õ((log n)^6) operations,^{[3]} leading to another updated version of the paper.^{[4]} Agrawal, Kayal and Saxena proposed a variant which would run in Õ((log n)^3) if Agrawal's conjecture were true; however, a heuristic argument by Pomerance and Lenstra suggested that it is probably false.
The algorithm is as follows:^{[1]}

Input: integer n > 1.
1. Check if n is a perfect power: if n = a^b for integers a > 1 and b > 1, output composite.
2. Find the smallest r such that ord_r(n) > (log_2 n)^2 (if r and n are not coprime, skip this r).
3. For all 2 ≤ a ≤ min(r, n − 1), check that a does not divide n: if a | n for some 2 ≤ a ≤ min(r, n − 1), output composite.
4. If n ≤ r, output prime.
5. For a = 1 to ⌊√φ(r) · log_2(n)⌋ do: if (X + a)^n ≠ X^n + a (mod X^r − 1, n), output composite.
6. Output prime.
Here ord_{r}(n) is the multiplicative order of n modulo r, log_{2} is the binary logarithm, and φ(r) is Euler's totient function of r.
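The whole algorithm can be sketched directly in Python (an unoptimized, illustrative translation; the helper names `_iroot` and `_polymul` are ours, and none of the fast arithmetic a serious implementation would need is attempted):

```python
from math import floor, gcd, log2, sqrt

def _iroot(n, b):
    """Integer b-th root of n (floor), by binary search."""
    lo, hi = 1, 1 << (n.bit_length() // b + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** b <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def _polymul(p, q, r, n):
    """Multiply length-r coefficient lists modulo (X^r - 1, n)."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def aks_is_prime(n: int) -> bool:
    if n < 2:
        return False
    # Step 1: if n = a^b with a, b > 1, output composite.
    for b in range(2, n.bit_length() + 1):
        a = _iroot(n, b)
        if a > 1 and a ** b == n:
            return False
    # Step 2: smallest r with ord_r(n) > (log_2 n)^2, skipping r not coprime to n.
    maxk = floor(log2(n) ** 2)
    r = 2
    while True:
        if gcd(r, n) == 1:
            x, ok = n % r, True
            for _ in range(maxk):
                if x == 1:
                    ok = False
                    break
                x = (x * n) % r
            if ok:
                break
        r += 1
    # Step 3: trial division by 2 .. min(r, n - 1).
    for a in range(2, min(r, n - 1) + 1):
        if n % a == 0:
            return False
    # Step 4: at this point, n <= r implies n is prime.
    if n <= r:
        return True
    # Step 5: check (X + a)^n ≡ X^n + a (mod X^r - 1, n) for each a.
    phi = sum(1 for k in range(1, r) if gcd(k, r) == 1)  # Euler's totient of r
    for a in range(1, floor(sqrt(phi) * log2(n)) + 1):
        base = [0] * r
        base[0], base[1] = a % n, 1          # the polynomial X + a
        acc = [0] * r
        acc[0] = 1
        e = n
        while e:                             # square-and-multiply in the ring
            if e & 1:
                acc = _polymul(acc, base, r, n)
            base = _polymul(base, base, r, n)
            e >>= 1
        rhs = [0] * r                        # X^n + a, exponent reduced mod r
        rhs[0] = a % n
        rhs[n % r] = (rhs[n % r] + 1) % n
        if acc != rhs:
            return False
    # Step 6.
    return True
```

On the worked example below, `aks_is_prime(31)` selects r = 29 and tests a = 1, …, 26 in step 5 before outputting prime.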
Step 3 is shown in the paper as checking 1 < gcd(a, n) < n for all a ≤ r. It can be seen that this is equivalent to trial division up to r, which can be done very efficiently without computing any gcds. Similarly, the comparison in step 4 can be replaced by having the trial division return prime once it has checked all values up to and including ⌊√n⌋.
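The equivalence of the two formulations of step 3 can be checked exhaustively on a small range (a quick sketch; the function names are illustrative):

```python
from math import gcd

def step3_gcd(n: int, r: int) -> bool:
    """Step 3 as written in the paper: composite evidence is found
    if 1 < gcd(a, n) < n for some a <= r."""
    return any(1 < gcd(a, n) < n for a in range(2, r + 1))

def step3_trial_division(n: int, r: int) -> bool:
    """The same test by plain trial division: composite evidence is
    found if some a <= min(r, n - 1) divides n."""
    return any(n % a == 0 for a in range(2, min(r, n - 1) + 1))

# The two formulations agree everywhere on a small exhaustive range:
# any nontrivial gcd d = gcd(a, n) is itself a divisor of n with d <= a <= r,
# and conversely any proper divisor a of n has gcd(a, n) = a.
assert all(step3_gcd(n, r) == step3_trial_division(n, r)
           for n in range(2, 80) for r in range(2, 40))
```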
Once beyond very small inputs, step 5 dominates the time taken. The essential reduction in complexity (from exponential to polynomial) is achieved by performing all calculations in the finite ring

R = (ℤ/nℤ)[X]/(X^r − 1),

consisting of n^r elements. This ring contains only the r monomials X^0, X^1, ..., X^{r−1}, and the coefficients are in ℤ/nℤ, which has n elements, all of them codable within ⌈log_2 n⌉ bits.
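Both properties of the ring are easy to demonstrate concretely: its elements are the length-r coefficient vectors over ℤ/nℤ (hence n^r of them), and multiplication wraps exponents around because X^r ≡ 1. A tiny sketch with illustrative names:

```python
from itertools import product

# Elements of R = (Z/nZ)[X]/(X^r - 1) are length-r coefficient vectors,
# so R has exactly n^r elements.
n, r = 3, 2
assert len(list(product(range(n), repeat=r))) == n ** r  # 3^2 = 9 elements

def mul(p, q):
    """Multiply two ring elements: X^i * X^j lands on X^((i + j) mod r),
    and coefficients are reduced mod n."""
    out = [0] * r
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

X = [0, 1]                    # the monomial X
assert mul(X, X) == [1, 0]    # X^2 = X^r ≡ 1 when r = 2
```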
Most later improvements made to the algorithm have concentrated on reducing the size of r, which makes the core operation in step 5 faster, and in reducing the size of s, the number of loops performed in step 5.^{[5]} Typically these changes do not change the computational complexity, but can lead to many orders of magnitude less time taken; for example, Bernstein's final version has a theoretical speedup by a factor of over 2 million.
For the algorithm to be correct, all steps that identify n must be correct. Steps 1, 3, and 4 are trivially correct, since they are based on direct tests of the divisibility of n. Step 5 is also correct: since (2) is true for any choice of a coprime to n and r if n is prime, an inequality means that n must be composite.
The difficult part of the proof is showing that step 6 is correct. Its proof of correctness is based on the upper and lower bounds of a multiplicative group in (ℤ/nℤ)[X]/(X^r − 1) constructed from the (X + a) binomials that are tested in step 5. Step 4 guarantees that these binomials are distinct elements of that group. For the particular choice of r, the bounds produce a contradiction unless n is prime or a power of a prime. Together with the test of step 1, this implies that n is always prime at step 6.^{[1]}
Input: integer n = 31 > 1.

(* Step 1 *) If (n = a^b for integers a > 1 and b > 1), output composite.

  For (b = 2; b <= log_2(n); b++) {
    a = n^(1/b);
    If (a is integer), Return[Composite]
  }
  a = n^(1/2), n^(1/3), n^(1/4) = {5.568, 3.141, 2.360}; none is an integer.

(* Step 2 *) Find the smallest r such that ord_r(n) > (log_2 n)^2.

  maxk = ⌊(log_2 n)^2⌋;
  maxr = Max[3, ⌈(log_2 n)^5⌉]; (* maxr really isn't needed *)
  nextR = True;
  For (r = 2; nextR && r < maxr; r++) {
    nextR = False;
    For (k = 1; (!nextR) && k ≤ maxk; k++) {
      nextR = (Mod[n^k, r] == 1 || Mod[n^k, r] == 0)
    }
  }
  r--; (* the loop over-increments by one *)
  r = 29

(* Step 3 *) If (1 < gcd(a, n) < n for some a ≤ r), output composite.

  For (a = r; a > 1; a--) {
    If ((gcd = GCD[a, n]) > 1 && gcd < n), Return[Composite]
  }
  gcd = {GCD(29,31) = 1, GCD(28,31) = 1, ..., GCD(2,31) = 1}; no value exceeds 1.

(* Step 4 *) If (n ≤ r), output prime.

  If (n ≤ r), Return[Prime] (* this step may be omitted if n > 5690034 *)
  31 > 29, so continue.

(* Step 5 *) For a = 1 to ⌊√φ(r) · log_2(n)⌋ do
  If ((X+a)^n ≠ X^n + a (mod X^r − 1, n)), output composite.

  φ[x_] := EulerPhi[x];
  PolyModulo[f_] := PolynomialMod[PolynomialRemainder[f, x^r − 1, x], n];
  max = Floor[Log[2, n] √φ[r]];
  For (a = 1; a ≤ max; a++) {
    If (PolyModulo[(x+a)^n − PolynomialRemainder[x^n + a, x^r − 1, x]] ≠ 0) {
      Return[Composite]
    }
  }

  (x+a)^31 = a^31 + 31a^30 x + 465a^29 x^2 + 4495a^28 x^3 + 31465a^27 x^4 + 169911a^26 x^5 + 736281a^25 x^6 + 2629575a^24 x^7 + 7888725a^23 x^8 + 20160075a^22 x^9 + 44352165a^21 x^10 + 84672315a^20 x^11 + 141120525a^19 x^12 + 206253075a^18 x^13 + 265182525a^17 x^14 + 300540195a^16 x^15 + 300540195a^15 x^16 + 265182525a^14 x^17 + 206253075a^13 x^18 + 141120525a^12 x^19 + 84672315a^11 x^20 + 44352165a^10 x^21 + 20160075a^9 x^22 + 7888725a^8 x^23 + 2629575a^7 x^24 + 736281a^6 x^25 + 169911a^5 x^26 + 31465a^4 x^27 + 4495a^3 x^28 + 465a^2 x^29 + 31a x^30 + x^31

  Reducing modulo x^29 − 1 wraps the three terms of degree 29, 30 and 31 around to degrees 0, 1 and 2:

  PolynomialRemainder[(x+a)^31, x^29 − 1] = (465a^2 + a^31) + (31a + 31a^30) x + (1 + 465a^29) x^2 + 4495a^28 x^3 + 31465a^27 x^4 + 169911a^26 x^5 + 736281a^25 x^6 + 2629575a^24 x^7 + 7888725a^23 x^8 + 20160075a^22 x^9 + 44352165a^21 x^10 + 84672315a^20 x^11 + 141120525a^19 x^12 + 206253075a^18 x^13 + 265182525a^17 x^14 + 300540195a^16 x^15 + 300540195a^15 x^16 + 265182525a^14 x^17 + 206253075a^13 x^18 + 141120525a^12 x^19 + 84672315a^11 x^20 + 44352165a^10 x^21 + 20160075a^9 x^22 + 7888725a^8 x^23 + 2629575a^7 x^24 + 736281a^6 x^25 + 169911a^5 x^26 + 31465a^4 x^27 + 4495a^3 x^28

  (A) PolynomialMod[PolynomialRemainder[(x+a)^31, x^29 − 1], 31] = a^31 + x^2
  (B) PolynomialRemainder[x^31 + a, x^29 − 1] = a + x^2
  (A) − (B) = (a^31 + x^2) − (a + x^2) = a^31 − a

  {1^31 − 1 ≡ 0 (mod 31), 2^31 − 2 ≡ 0 (mod 31), 3^31 − 3 ≡ 0 (mod 31), ..., 26^31 − 26 ≡ 0 (mod 31)}

(* Step 6 *) Output prime.

  31 must be prime.
Here PolynomialMod is a term-wise modulo reduction of the polynomial, e.g. PolynomialMod[x + 2x^2 + 3x^3, 3] = x + 2x^2 + 0x^3.^{[6]}