10.5 A Primality Test


Not to stretch out the suspense, the largest known Mersenne prime, M11213, and, I believe, the largest prime known at present, has 3375 digits and is therefore just about T-281.[6]

Isaac Asimov, Adding a Dimension, 1964

2^6972593 − 1 is prime!!!

http://www.utm.edu/research/primes/largest.html (May 2000)

The study of prime numbers and their properties is one of the oldest branches of number theory and one of fundamental significance for cryptography. From the seemingly harmless definition of a prime number as a natural number greater than 1 that has no divisors other than itself and 1 there arises a host of questions and problems with which mathematicians have been grappling for centuries and many of which remain unanswered and unsolved to this day. Examples of such questions are, "Are there infinitely many primes?" "How are the primes distributed among the natural numbers?" "How can one tell whether a number is prime?" "How can one identify a natural number that is not prime, that is, a number that is composite?" "How does one find all the prime factors of a composite number?"

That there are infinitely many primes was proven already by Euclid about 2300 years ago (see, for example, [Bund], page 5, and especially the amusing proof variant and the serious proof variant on pages 39 and 40). Another important fact, which up to now we have tacitly assumed, will be mentioned here explicitly: The fundamental theorem of arithmetic states that every natural number greater than 1 has a unique decomposition as a product of finitely many prime numbers, where the uniqueness is up to the order of the factors. Thus prime numbers are truly the building blocks of the natural numbers.

As long as we stick close to home in the natural numbers and do not stray among numbers that are too big for us to deal with easily we can approach a number of questions empirically and carry out concrete calculations. Take note, however, that the degree to which results are achievable depends in large measure on the capacities of available computers and the efficiency of the algorithms used.

A list of the largest numbers identified as prime, published on the Internet, demonstrates the impressive size of the most recent discoveries (see Table 10.1).

Table 10.1: The largest known primes (as of August 2000)

  Prime                   Digits     Discoverer                              Year
  2^6972593 − 1           2098960    Hajratwala, Woltman, Kurowski, GIMPS    1999
  2^3021377 − 1           909526     Clarkson, Woltman, Kurowski, GIMPS      1998
  2^2976221 − 1           895932     Spence, Woltman, GIMPS                  1997
  2^1398269 − 1           420921     Armengaud, Woltman, GIMPS               1996
  2^1257787 − 1           378632     Slowinski, Gage                         1996
  48594^65536 + 1         307140     Scott, Gallot                           2000
  2^859433 − 1            258716     Slowinski, Gage                         1994
  2^756839 − 1            227832     Slowinski, Gage                         1992
  167176^32768 + 1        171153     Yves Gallot                             2000
  169719 · 2^557557 + 1   167847     Scott, Gallot                           2000

The largest known prime numbers are of the form 2^p − 1. Primes that can be represented in this way are called Mersenne primes, named for Marin Mersenne (1588–1648), who discovered this particular structure of prime numbers in his search for perfect numbers. (A natural number is said to be perfect if it equals the sum of its proper divisors. Thus, for example, 496 is a perfect number, since 496 = 1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248.)

For every divisor t of p we have that 2^t − 1 is a divisor of 2^p − 1, since if p = ab, then

2^(ab) − 1 = (2^a − 1) · (2^(a(b−1)) + 2^(a(b−2)) + ... + 2^a + 1).
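A small numerical check of this identity (a worked instance, not from the original text): for p = 6 = 2 · 3 we obtain

  2^6 − 1 = 63 = (2^2 − 1)(2^4 + 2^2 + 1) = 3 · 21   and   2^6 − 1 = (2^3 − 1)(2^3 + 1) = 7 · 9,

so each divisor of the exponent 6 yields a corresponding divisor of 2^6 − 1.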

Therefore, we see that 2^p − 1 can be prime only if p is itself prime. Mersenne himself announced in 1644, without being in possession of a complete proof, that for p ≤ 257 the only primes of the form 2^p − 1 were those for the primes p ∈ {2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257}. With the exception of p = 67 and p = 257, for which 2^p − 1 is not prime (and of p = 61, 89, and 107, which are missing from the list although 2^p − 1 is prime), Mersenne's conjecture has been verified, and analogous results for many additional exponents have been established as well (see [Knut], Section 4.5.4, and [Bund], Section 3.2.12).

On the basis of the discoveries thus far of Mersenne primes one may conjecture that there exist Mersenne primes for infinitely many prime numbers p. However, there is as yet no proof of this conjecture (see [Rose], Section 1.2).

An interesting overview of additional unsolved problems in the realm of prime numbers can be found in [Rose], Chapter 12.

Because of their importance in cryptographic public key algorithms prime numbers and their properties have come increasingly to public attention, and it is a pleasure to see how algorithmic number theory in regard to this and other topics has become popular as never before. The problems of identifying a number as prime and the decomposition of a number into its prime factors are the problems that have attracted the greatest interest. The cryptographic invulnerability of many public key algorithms (foremost among them the famous RSA procedure) is based on the fact that factorization is a difficult problem (in the sense of complexity theory), which at least at present is unsolvable in polynomial time.[7]

The same holds true, in a by now weakened form, for the identification of a number as prime, if one is looking for a definitive proof that a number is prime. On the other hand, there are tests that determine, up to a small degree of uncertainty, whether a number is prime; furthermore, if the test determines that the number is composite, then that determination is definitive. Such probabilistic tests are, in compensation for the element of doubt, executable in polynomial time, and the probability of a "false positive" can be brought below any given positive bound, as we shall see, by repeating the test a sufficient number of times.

A venerable, but nonetheless still useful, method of determining all primes up to a given natural number N was developed by the Greek philosopher and astronomer Eratosthenes (276–195 B.C.E.; see also [Saga]), and in his honor it is known as the sieve of Eratosthenes. Beginning with a list of all natural numbers greater than 1 and less than or equal to N, we take the first prime number, namely 2, and strike from the list all multiples of 2 greater than 2 itself. The first remaining number above the prime number just used (2 in this case) is then identified as a prime p, whose multiples p(p + 2i), i = 0, 1, ..., are likewise struck from the list. This process is continued until a prime number greater than √N is found, at which point the procedure is terminated. The numbers in the list that have not been struck are the primes less than or equal to N. They have been "caught" in the sieve.

We would like to elucidate briefly why it is that the sieve of Eratosthenes works as advertised: First, an induction argument makes it immediately plain that the next unstruck number above a prime is itself prime, since otherwise, the number would have a smaller prime divisor and thus would already have been struck from the list as a multiple of this prime factor. Since only composite numbers are struck, no prime numbers can be lost in the process.

Furthermore, it suffices to strike out only multiples of primes p for which p ≤ √N, since if T is the smallest proper divisor of a composite number N, then T ≤ √N. Thus if a composite number n ≤ N were to remain unstruck, then this number would have a smallest prime divisor p ≤ √n ≤ √N, and n would have been struck as a multiple of p, in contradiction to our assumption.

Now we would like to consider how this sieve might be implemented, and as preparation we shall develop a programmable algorithm, for which we take the following viewpoint: Since except for 2 there are no even prime numbers, we shall consider only odd numbers as candidates for primality. Instead of making a list of odd numbers, we form the list f_i, 1 ≤ i ≤ (N − 1)/2, which represents the primality property of the numbers 2i + 1. Further, we use a variable p that will contain the current value 2i + 1 of our (imagined) list of odd numbers, as well as a variable s for which the relation 2s + 1 = p² = (2i + 1)², that is, s = 2i² + 2i, always holds. We may now formulate the following algorithm (cf. [Knut], Section 4.5.4, Exercise 8).

Sieve of Eratosthenes, algorithm for calculating all prime numbers less than or equal to a natural number N

  1. Set L ← ⌊(N − 1)/2⌋ and B ← ⌊(√N + 1)/2⌋. Set f_i ← 1 for 1 ≤ i ≤ L. Set i ← 1, p ← 3, and s ← 4.

  2. If f_i = 0, go to step 4. Otherwise, output p as a prime and set k ← s.

  3. If k ≤ L, set f_k ← 0, k ← k + p, and repeat step 3.

  4. If i < B, then set i ← i + 1, s ← s + 2p + 2, p ← p + 2, and go to step 2. Otherwise, terminate the algorithm.

The algorithm leads us to the following program, which returns as its result a pointer to a list of ULONG values containing, in ascending order, all primes less than or equal to the input value. The first number in the list is the number of prime numbers found.

Function:  prime number generator (sieve of Eratosthenes)

Syntax:    ULONG * genprimes (ULONG N);

Input:     N (upper bound for the prime search)

Return:    a pointer to a vector of ULONG values with the primes less than or equal to N (at position 0 the vector contains the number of primes found)

           NULL, if an error with malloc() occurred

ULONG *
genprimes (ULONG N)
{
  ULONG i, k, p, s, B, L, count;
  char *f;
  ULONG *primes;


Step 1: Initialization of the variables. The auxiliary function ul_iroot() computes the integer part of the square root of a ULONG variable. For this it uses the procedure elucidated in Section 10.3. Then comes the allocation of the vector f for marking the composite numbers.


  B = (1 + ul_iroot (N)) >> 1;
  L = N >> 1;
  if (((N & 1) == 0) && (N > 0))
    {
      --L;
    }
  if ((f = (char *) malloc ((size_t) L + 1)) == NULL)
    {
      return (ULONG *) NULL;
    }
  for (i = 1; i <= L; i++)
    {
      f[i] = 1;
    }
  p = 3;
  s = 4;


Steps 2, 3, and 4 constitute the actual sieve. The variable i represents the numerical value 2i + 1.


  for (i = 1; i <= B; i++)
    {
      if (f[i])
        {
          for (k = s; k <= L; k += p)
            {
              f[k] = 0;
            }
        }
      s += p + p + 2;
      p += 2;
    }


Now the number of primes found is reported, and a field of ULONG variables of commensurate size is allocated.


  for (count = i = 1; i <= L; i++)
    {
      count += f[i];
    }
  if ((primes = (ULONG *) malloc ((size_t)(count + 1) * sizeof (ULONG))) == NULL)
    {
      return (ULONG *) NULL;
    }


The field f[] is evaluated, and all the numbers 2i + 1 marked as primes are stored in the field primes. If N ≥ 2, then the number 2 is counted as well.


  for (count = i = 1; i <= L; i++)
    {
      if (f[i])
        {
          primes[++count] = (i << 1) + 1;
        }
    }
  if (N < 2)
    {
      primes[0] = 0;
    }
  else
    {
      primes[0] = count;
      primes[1] = 2;
    }
  free (f);
  return primes;
}
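A minimal usage sketch of genprimes() (not from the original text; it assumes that the ULONG type comes from the FLINT/C header flint.h and repeats the prototype in case genprimes() is not declared there):

#include <stdio.h>
#include <stdlib.h>
#include "flint.h"                      /* assumed to provide the ULONG type */

extern ULONG *genprimes (ULONG N);      /* prototype of the function above */

int main (void)
{
  ULONG i, *primes = genprimes (100);   /* all primes up to 100 */
  if (primes == NULL)
    {
      fprintf (stderr, "malloc() error in genprimes()\n");
      return EXIT_FAILURE;
    }
  printf ("%lu primes found:\n", (unsigned long) primes[0]);
  for (i = 1; i <= primes[0]; i++)      /* primes[1], ..., primes[primes[0]] */
    {
      printf ("%lu ", (unsigned long) primes[i]);
    }
  printf ("\n");
  free (primes);
  return 0;
}

For N = 100 the program should report 25 primes, in agreement with the value of π(10^2) in Table 10.2 below.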

To determine whether a number n is composite it is sufficient, according to what we have said above, to institute a division test that divides n by all prime numbers less than or equal to √n. If one fails to find a divisor, then n is itself prime; the prime test divisors are given to us by the sieve of Eratosthenes. However, this method is not practicable, since the number of primes that have to be tested rapidly becomes too large. In particular, we have the prime number theorem, formulated as a conjecture by A. M. Legendre, that the number π(x) of primes p, 2 ≤ p ≤ x, approaches x/ln x asymptotically as x goes to infinity (see, for example, [Rose], Chapter 12).[8] A few values of the number of primes less than a given x will help to make clear the size of numbers we are dealing with. Table 10.2 gives the values of both π(x), the actual number of primes less than or equal to x, and the asymptotic approximation x/ln x. The question mark in the last cell indicates a number to be filled in by the reader. ;-)

Table 10.2: The number of primes up to various limits x

  x         x/ln x                     π(x)
  10^2      22                         25
  10^4      1,086                      1,229
  10^8      5,428,681                  5,761,455
  10^16     271,434,051,189,532        279,238,341,033,925
  10^18     24,127,471,216,847,323     24,739,954,287,740,860
  10^100    4 × 10^97                  ?

The number of calculations necessary for the division test of x thus grows almost exponentially with the number of digits of x. Therefore, the division test alone is not a practicable method for determining the primality of large numbers. We shall see, in fact, that the division test is an important aid in connection with other tests, but in principle we would be content to have a test that gave information about the primality of a number without revealing anything about its factorization. An improvement in the situation is offered by the little theorem of Fermat, which tells us that for a prime p and all numbers a that are not multiples of p the congruence a^(p−1) ≡ 1 mod p holds (see page 175).

From this fact we can derive a primality test, called the Fermat test: If for some number a we have gcd(a, n) ≠ 1, or gcd(a, n) = 1 and a^(n−1) ≢ 1 mod n, then n is composite. An exponentiation a^(n−1) mod n requires O(log³ n) CPU operations, and experience indicates that in only a few tries a composite number will reveal its lack of primality. However, there are exceptions, and these limit the utility of the Fermat test. Therefore, we shall have to have a closer look at them.
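To make the procedure concrete, here is a minimal sketch of the Fermat test for word-sized numbers in plain C. It is not the book's FLINT/C implementation; it assumes a compiler that supports the unsigned __int128 extension (GCC, Clang) for the intermediate products:

#include <stdint.h>

/* modular multiplication of 64-bit operands without overflow */
uint64_t mulmod (uint64_t a, uint64_t b, uint64_t n)
{
  return (uint64_t)((unsigned __int128) a * b % n);
}

/* modular exponentiation a^e mod n by square-and-multiply */
uint64_t powmod (uint64_t a, uint64_t e, uint64_t n)
{
  uint64_t r = 1 % n;
  a %= n;
  while (e > 0)
    {
      if (e & 1)
        r = mulmod (r, a, n);
      a = mulmod (a, a, n);
      e >>= 1;
    }
  return r;
}

uint64_t gcd64 (uint64_t a, uint64_t b)
{
  while (b != 0)
    {
      uint64_t t = a % b;
      a = b;
      b = t;
    }
  return a;
}

/* Fermat test for odd n > 2 with base a, 2 <= a <= n - 2:
   returns 0 if n is definitely composite,
   1 if n passes (n is prime or a pseudoprime to the base a) */
int fermat_test (uint64_t n, uint64_t a)
{
  if (gcd64 (a, n) != 1)
    return 0;                     /* a common factor exposes n as composite */
  return powmod (a, n - 1, n) == 1;
}

For example, fermat_test (561, 2) returns 1 even though 561 = 3 · 11 · 17 is composite, which is exactly the phenomenon discussed next.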

We must face the fact that the converse of Fermat's little theorem does not hold: Not every number n satisfying a^(n−1) ≡ 1 mod n for all a with gcd(a, n) = 1, 1 ≤ a ≤ n − 1, is prime. There exist composite numbers n that pass the Fermat test as long as a and n are relatively prime. Such numbers are called Carmichael numbers, named for their discoverer, Robert Daniel Carmichael (1879–1967). The smallest of these curious objects are

561 = 3 · 11 · 17,  1105 = 5 · 13 · 17,  1729 = 7 · 13 · 19,  2465 = 5 · 17 · 29,  2821 = 7 · 13 · 31,  6601 = 7 · 23 · 41,  8911 = 7 · 19 · 67.

All Carmichael numbers have in common the property that each of them possesses at least three different prime factors (see [Kobl], Chapter 5). It was only in the early 1990s that it was proven that there are infinitely many Carmichael numbers (see [Bund], Section 2.3).
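A brief worked check (not in the original text) of why the smallest Carmichael number slips through the Fermat test: for every a with gcd(a, 561) = 1, Fermat's little theorem applied to the three prime factors of 561 = 3 · 11 · 17 gives

  a^2 ≡ 1 (mod 3),   a^10 ≡ 1 (mod 11),   a^16 ≡ 1 (mod 17),

and since 2, 10, and 16 all divide 560, it follows that a^560 ≡ 1 modulo 3, 11, and 17, hence a^560 ≡ 1 (mod 561) by the Chinese remainder theorem. Thus no base a relatively prime to 561 can expose 561 as composite.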

The relative frequency of numbers less than n that are relatively prime to n is

(10.24)  φ(n)/(n − 1)

(for the Euler φ function see page 175), so that the proportion of numbers that are not relatively prime to n is close to 0 for large n. Therefore, in most cases one must run through the Fermat test very often to determine that a Carmichael number is composite. Letting a run through the range 2 a n 1, eventually, one encounters the smallest prime divisor of n, and it is only when a assumes this value that n is exposed as composite.

In addition to the Carmichael numbers there are further odd composite numbers n for which there exist natural numbers a with gcd(a, n) = 1 and a^(n−1) ≡ 1 mod n. Such numbers are known as pseudoprimes to the base a. To be sure, one can make the observation that there are only a few pseudoprimes to the bases 2 and 3, or that, for example, up to 25 × 10^9 there are only 1770 integers that are simultaneously pseudoprimes to the bases 2, 3, 5, and 7 (see [Rose], Section 3.4), yet the sad fact remains that there is no general estimate of the number of solutions of the Fermat congruence for composite numbers. Thus the problem with the Fermat test is that the uncertainty as to whether the method of random tests will reveal a composite number as such cannot be correlated with the number of tests.

However, such a connection is offered on the basis of the Euler criterion (see Section 10.4.1): For an odd prime p and for all integers a that are not multiples of p, we have

(10.25)  a^((p−1)/2) ≡ (a/p) mod p,

where (a/p) = ±1 denotes the Legendre-Jacobi symbol. In analogy to Fermat's little theorem, we obtain by contraposition the following exclusionary criterion:

  • If for a natural number n there exists an integer a with gcd(a, n) = 1 and a^((n−1)/2) ≢ (a/n) mod n, then n cannot be a prime number.

The required computational expenditure for applying this criterion is the same as that for the Fermat test, namely O(log³ n).

As with the Fermat test, where there is the problem of pseudoprimes, there exist integers n that for certain a satisfy the Euler criterion although they are composite. Such n are called Euler pseudoprimes to the base a. An example is n = 91 = 7 · 13 to the bases 9 and 10, for which we have 9^45 ≡ 1 ≡ (9/91) mod 91 and 10^45 ≡ −1 ≡ (10/91) mod 91.[9]

An Euler pseudoprime to a base a is always a pseudoprime to the base a (see page 218), since by squaring a^((n−1)/2) ≡ (a/n) mod n it follows that a^(n−1) ≡ 1 mod n.

There is, however, no counterpart to the Carmichael numbers for the Euler criterion, and based on the following observations of R. Solovay and V. Strassen we can see that the risk of a false test result for Euler pseudoprimes is favorably bounded from above.

  1. For a composite number n the number of integers a relatively prime to n for which a^((n−1)/2) ≡ (a/n) mod n holds is at most ½ φ(n) (see [Kobl], Section 2.2, Exercise 21). From this we have the following proposition.

  2. The probability that for a composite number n and k randomly selected numbers a_1, ..., a_k relatively prime to n one has a_r^((n−1)/2) ≡ (a_r/n) mod n for all 1 ≤ r ≤ k is at most 2^(−k).

These results make it possible to implement the Euler criterion as a probabilistic primality test, where "probabilistic" means that if the test returns the result "n is not prime," then this result is definitive, but it is only with a certain probability of error that we may infer that n is in fact prime.

Algorithm: Probabilistic primality test à la Solovay-Strassen for testing a natural number n for compositeness

  1. Choose a random number a ≤ n − 1 with gcd(a, n) = 1.

  2. If a^((n−1)/2) ≡ (a/n) mod n, then output "n is a probable prime." Otherwise, output "n is composite."

This test requires computation time O(log³ n) for the calculation of the exponentiation and the Jacobi symbol. By repeated application of this test we can reduce the probability of error in step 2. For example, for k = 60 repetitions we obtain a vanishingly small probability of error less than 2^(−60) ≈ 10^(−18), and D. Knuth has indicated that this value is less than that of a transient hardware error, caused, for example, by an alpha particle that has found its way into the CPU or memory of a computer and thereby switched the value of a bit.
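The following sketch shows one round of this test for word-sized numbers, again in plain C rather than FLINT/C; it reuses the powmod() helper from the Fermat sketch above and adds a small Jacobi symbol routine based on quadratic reciprocity:

#include <stdint.h>

extern uint64_t powmod (uint64_t a, uint64_t e, uint64_t n);

/* Jacobi symbol (a/n) for odd n > 0, computed with the law of
   quadratic reciprocity; returns -1, 0, or +1 */
int jacobi (uint64_t a, uint64_t n)
{
  int t = 1;
  a %= n;
  while (a != 0)
    {
      while ((a & 1) == 0)
        {
          a >>= 1;
          if (n % 8 == 3 || n % 8 == 5)   /* (2/n) = -1 for n = 3, 5 mod 8 */
            t = -t;
        }
      {                                   /* reciprocity step: swap a and n */
        uint64_t tmp = a;
        a = n;
        n = tmp;
      }
      if (a % 4 == 3 && n % 4 == 3)
        t = -t;
      a %= n;
    }
  return (n == 1) ? t : 0;
}

/* one round of the Solovay-Strassen test for odd n > 2, base 2 <= a <= n - 2:
   returns 0 if n is definitely composite, 1 if n is a probable prime */
int solovay_strassen (uint64_t n, uint64_t a)
{
  int j = jacobi (a, n);
  uint64_t e;
  if (j == 0)
    return 0;                             /* gcd(a, n) > 1: n is composite */
  e = powmod (a, (n - 1) / 2, n);         /* a^((n-1)/2) mod n */
  return (j == 1 && e == 1) || (j == -1 && e == n - 1);
}

Repeating the round with k independently chosen bases bounds the probability of accepting a composite number by 2^(−k), as stated above.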

We might be satisfied with this test, since we have control over the probability of error and we have efficient algorithms for all the required computations. However, there are results that lead to a more efficient algorithm. For this we would like to introduce a few considerations that will improve our understanding of the most widely used probabilistic primality tests.

Let us make the hypothesis that n is prime. Then by Fermat's little theorem we have a^(n−1) ≡ 1 mod n for integers a that are not multiples of n. The square root a^((n−1)/2) of a^(n−1) mod n can then assume only the value 1 or −1, since these are the only solutions of the congruence x² ≡ 1 mod n (see Section 10.4.1). If we also compute from a^(n−1) mod n the successive square roots

a^((n−1)/2) mod n,  a^((n−1)/4) mod n,  ...,  a^((n−1)/2^t) mod n

one after another until (n − 1)/2^t is odd, and if in the process we arrive at a residue not equal to 1, then this residue must have the value −1, for otherwise n cannot be prime, contradicting our hypothesis. For the case that the first square root different from 1 has the value −1, we stick by our hypothesis that n is prime. If n is nevertheless composite, then on the basis of this special property we shall call n a strong pseudoprime to the base a. Strong pseudoprimes to a base a are always Euler pseudoprimes to the base a (see [Kobl], Chapter 5).

We assemble all of this into the following probabilistic primality test, though for the sake of efficiency we shall first compute the power b ← a^((n−1)/2^t) mod n with (n − 1)/2^t odd, and if this is not equal to 1, we continue to square b until we obtain a value of ±1 or have reached a^((n−1)/2) mod n. In the last case we must have either b ≡ −1 mod n or else n is composite. The idea of shortening the algorithm so that the last squaring does not have to be calculated has been taken from [Cohe], Section 8.2.

Probabilistic primality test à la Miller-Rabin for odd integers n > 1

  1. Determine q and t such that n − 1 = 2^t · q, with q odd.

  2. Choose a random integer a, 1 < a < n. Set e ← 0, b ← a^q mod n. If b = 1, output "n is probably prime" and terminate the algorithm.

  3. As long as b ≢ ±1 mod n and e < t − 1, set b ← b² mod n and e ← e + 1. If now b ≠ n − 1, then output "n is composite." Otherwise, output "n is probably prime."

With a running time of O(log³ n) for the exponentiations, the Miller-Rabin test (MR test for short) has complexity of the same order of magnitude as the Solovay-Strassen test.
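For comparison with the Solovay-Strassen sketch, here is one round of the Miller-Rabin test for word-sized numbers (again only a plain C illustration using the mulmod()/powmod() helpers introduced above; the book's own CLINT implementation follows below as prime_l()):

#include <stdint.h>

extern uint64_t mulmod (uint64_t a, uint64_t b, uint64_t n);
extern uint64_t powmod (uint64_t a, uint64_t e, uint64_t n);

/* one round of the Miller-Rabin test for odd n > 2 with base a, 1 < a < n:
   returns 0 if n is definitely composite, 1 if n is a probable prime */
int miller_rabin (uint64_t n, uint64_t a)
{
  uint64_t q = n - 1, b;
  unsigned int t = 0, e;

  while ((q & 1) == 0)            /* step 1: n - 1 = 2^t * q with q odd */
    {
      q >>= 1;
      t++;
    }

  b = powmod (a, q, n);           /* step 2: b = a^q mod n */
  if (b == 1 || b == n - 1)
    return 1;                     /* n is a probable prime */

  for (e = 1; e < t; e++)         /* step 3: square at most t - 1 times */
    {
      b = mulmod (b, b, n);
      if (b == n - 1)
        return 1;                 /* n is a probable prime */
    }
  return 0;                       /* n is definitely composite */
}

As with the tests above, the round is repeated with several bases; the resulting error bound is discussed next.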

The existence of strong pseudoprimes means that the Miller-Rabin primality test offers us certainty only about the compositeness of numbers. The number 91, which we trotted out above as an example of an Euler pseudoprime (to base 9), is also a strong pseudoprime, again to base 9. Further examples of strong pseudoprimes are

2152302898747 = 6763 · 10627 · 29947

and

3474749660383 = 1303 · 16927 · 157543.

These two numbers are the only strong pseudoprimes below 10^13 to all of the prime bases 2, 3, 5, 7, and 11 (see [Rose], Section 3.4).

Fortunately, the number of bases to which a composite number is a strong pseudoprime is itself quite limited. M. Rabin has proved that for a composite number n there are fewer than n/4 bases a, 2 ≤ a ≤ n − 1, to which n is a strong pseudoprime (see [Knut], Section 4.5.4, Exercise 22, and [Kobl], Chapter 5). From this we obtain, with k-fold repetition of the test with k randomly chosen bases a_1, ..., a_k, a probability smaller than 4^(−k) that a strong pseudoprime has been falsely accepted as a prime. Therefore, for the same amount of work, the Miller-Rabin test is always superior to the Solovay-Strassen test, which with k repetitions has probability of error 2^(−k).

In practice, the Miller-Rabin test does much better than advertised, since the actual probability of error is in most cases much smaller than that guaranteed by the theorem of Rabin (see [Schn], Section 11.5).

Before we get down to implementing the Miller-Rabin test, we look at two approaches to improving efficiency.

By beginning the Miller-Rabin test with a division sieve that divides the prime candidates by small primes, we obtain an advantage: If a factor is found in the process, then the candidate can be eliminated from consideration without the expense of a Miller-Rabin test. The question at once presents itself as to how many prime numbers would be optimal to divide by before undertaking the MR test. We give a recommendation due to A. K. Lenstra: The greatest efficiency is achieved if one divides by the 303 prime numbers less than 2000 (see [Schn], Section 11.5). The reason for this arises from the observation that the relative frequency of odd numbers that are not multiples of a prime less than n is about 1.12/ ln n. Dividing by prime numbers under 2000 eliminates about 99 percent of all composite numbers without using the MR test, which is then used only on the remaining small percentage of candidates.

Each division by a small divisor requires computation time of order only O (ln n). We make use of an efficient division routine especially for small divisors and use it in instituting the division sieve.

The division sieve is implemented in the following function sieve_l(). It, in turn, uses the primes less than 65536 stored in the field smallprimes[NOOFSMALLPRIMES]. The primes are stored as differences, so that only one byte of storage is required per prime. The resulting indirect access to these primes is not a serious problem, since we use them only in their natural order. The case that the candidate itself is a small prime and is contained in the prime number table must be specially indicated.

Finally, we profit from the exponentiation function for small bases (see Chapter 6) by applying the MR test with small primes 2,3,5,7,11,... < B instead of randomly selected bases. According to experience this in no way impairs the results of the test.

We now introduce the division sieve. The function uses the division routine for short divisors that we developed for the function div_l().

Function:  division sieve

Syntax:    USHORT sieve_l (CLINT a_l, unsigned no_of_smallprimes);

Input:     a_l (candidate for primality search)

           no_of_smallprimes (number of primes to serve as divisors, without 2)

Return:    prime factor, if one is found

           1, if the candidate itself is prime

           0, if no factor is found

USHORT
sieve_l (CLINT a_l, unsigned int no_of_smallprimes)
{
  clint *aptr_l;
  USHORT bv, rv, qv;
  ULONG rhat;
  unsigned int i = 1;


For the sake of completeness we first test whether a_l is a multiple of 2. If, in fact, a_l has the value 2, then 1 is returned, while if a_l is greater than 2 and is even, then 2 is returned as a factor.


  if (ISEVEN_L (a_l))
    {
      if (equ_l (a_l, two_l))
        {
          return 1;
        }
      else
        {
          return 2;
        }
    }
  bv = 2;
  do
    {


The prime numbers are recovered by successively adding the differences stored in smallprimes[]; the current prime is kept in bv. The first prime that serves as a divisor is 3. We use the code of the fast routine for division by a USHORT (see Section 4.3).


      rv = 0;
      bv += smallprimes[i];
      for (aptr_l = MSDPTR_L (a_l); aptr_l >= LSDPTR_L (a_l); aptr_l--)
        {
          qv = (USHORT)((rhat = ((((ULONG)rv) << BITPERDGT) + (ULONG)*aptr_l)) / bv);
          rv = (USHORT)(rhat - (ULONG)bv * (ULONG)qv);
        }
    }
  while (rv != 0 && ++i <= no_of_smallprimes);


If an actual divisor was found (rv == 0 and bv ≠ a_l; otherwise, a_l itself is prime!), this is returned. If a_l is itself a small prime, then 1 is returned. Otherwise, 0 is returned.


  if (0 == rv)
    {
      if (DIGITS_L (a_l) == 1 && *LSDPTR_L (a_l) == bv)
        {
          bv = 1;
        }
      /* else: result in bv is a prime factor of a_l */
    }
  else /* no factor of a_l was found */
    {
      bv = 0;
    }
  return bv;
}

The function sieve_l() can be used for splitting off prime factors under 65536 from CLINT objects. To enable this, the macro SFACTOR_L(n_l) is defined in flint.h, which uses the call sieve_l(n_l, NOOFSMALLPRIMES) to test divide n_l by the primes stored in smallprimes[]; SFACTOR_L() returns the same value as sieve_l(). By repeated calls to SFACTOR_L() with subsequent division by the factors found, integers below 2^32, that is, integers that can be represented by the standard integer types, can be completely factored. If no factor is found, then we are dealing with a prime number.
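The underlying principle can be sketched for word-sized numbers in plain C without the FLINT/C types (a hypothetical helper, not part of the FLINT/C package; it simply uses all small odd numbers as trial divisors instead of the stored prime table):

#include <stdint.h>
#include <stdio.h>

/* complete factorization of n < 2^32 by trial division: once all factors
   below 65536 have been split off, a remaining cofactor greater than 1 must
   be prime, since a composite cofactor would have two factors >= 65536 and
   would therefore exceed 2^32 */
void factor32 (uint32_t n)
{
  uint32_t d;

  printf ("%lu =", (unsigned long) n);
  for (d = 2; (uint64_t) d * d <= n; d += (d == 2) ? 1 : 2)
    {
      while (n % d == 0)
        {
          printf (" %lu", (unsigned long) d);
          n /= d;
        }
    }
  if (n > 1)
    {
      printf (" %lu", (unsigned long) n);   /* remaining prime cofactor */
    }
  printf ("\n");
}

For example, factor32 (1000000) prints 1000000 = 2 2 2 2 2 2 5 5 5 5 5 5.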

The full-blown test function prime_l() integrates the division sieve and the Miller-Rabin test. To retain maximum flexibility the function is constituted in such a way that the number of divisions in the pretest and the number of passes through the Miller-Rabin test can be passed as parameters. To simplify the situation in applications, the macro ISPRIME_L(CLINT n_l) can be used, which in turn calls the function prime_l() with preset parameters. We use the exponentiation function wmexpm_l(), which combines Montgomery reduction with the advantages that accrue from exponentiation of small bases (see Chapter 6).

Function:  probabilistic primality test à la Miller-Rabin with division sieve

Syntax:    int prime_l (CLINT n_l, unsigned no_of_smallprimes, unsigned iterations);

Input:     n_l (candidate for primality)

           no_of_smallprimes (number of primes for the division sieve)

           iterations (number of test iterations)

Return:    1 if the candidate is "probably" prime

           0 if the candidate is composite or equal to 1

int
prime_l (CLINT n_l, unsigned int no_of_smallprimes, unsigned int iterations)
{
  CLINT d_l, x_l, q_l;
  USHORT i, j, k, p;
  int isprime = 1;
  if (EQONE_L (n_l))
    {
      return 0;
    }


Now the division test is executed. If a factor is found, then the function is terminated with 0 returned. If 1 is returned by sieve_l(), indicating that n_l is itself prime, then the function is terminated with return value 1. Otherwise, the Miller-Rabin test is carried out.


  k = sieve_l (n_l, no_of_smallprimes);
  if (1 == k)
    {
      return 1;
    }
  if (1 < k)
    {
      return 0;
    }
  else
    {


Step 1. The decomposition of n − 1 as n − 1 = 2^k · q with odd q is carried out by the function twofact_l(). The value n − 1 is retained in d_l.


      cpy_l (d_l, n_l);
      dec_l (d_l);
      k = (USHORT)twofact_l (d_l, q_l);
      p = 2;
      i = 1;
      isprime = 1;
      do
        {


Step 2. The bases p are formed from the differences stored in the field smallprimes[]. For the exponentiation we use the Montgomery function wmexpm_l(), since the base is always of type USHORT and the modulus, the prime candidate n_l, is always odd after the pretest with the division sieve. If afterwards the power in x_l is equal to 1, then the next test iteration begins.


          p += smallprimes[i++];
          wmexpm_l (p, q_l, x_l, n_l);
          if (!EQONE_L (x_l))
            {
              j = 0;


Step 3. Squaring of x_l, as long as x_l is different from ±1 and k − 1 iterations have not yet been executed.


              while (!EQONE_L (x_l) && !equ_l (x_l, d_l) && ++j < k)
                {
                  msqr_l (x_l, x_l, n_l);
                }
              if (!equ_l (x_l, d_l))
                {
                  isprime = 0;
                }
            }
        }


The do-while loop runs over the number of test iterations specified in iterations.


      while ((--iterations > 0) && isprime);
      return isprime;
    }
}

As for the open question as to how many iterations of the Miller-Rabin test should be executed to achieve reliable results, the literature provides various suggestions: [Gord] and [Schn] recommend five iterations (for cryptographic applications), while the algorithm in [Cohe] suggests 20 iterations. Knuth [Knut] informs us that with 25 test iterations the number of errors in a set of a billion prime candidates accepted as prime would be less than 10^(−6), though he does not actually recommend the number 25 and instead asks the philosophical question, "Do we really need to have a rigorous proof of primality?"[10]

For the cases in which this question must be answered in the affirmative, the APRCL test, published in 1981 by its developers L. Adleman, C. Pomerance, R. Rumely, H. Cohen, and A. K. Lenstra, shows the state of affairs at that time. H. Riesel praised this test as a breakthrough, proving that fast, generally applicable, definitive primality tests were possible (see [Ries], page 131). The test determines the primality property of an integer n in time of order O((ln n)^(C ln ln ln n)) for a suitable constant C. Since the exponent ln ln ln n behaves like a constant for all practical purposes, it can be considered a polynomial-time procedure, and integers with several hundreds of decimal digits can have their primality or lack thereof determined definitively in times that are otherwise achieved only by probabilistic tests.[11] The algorithm, which uses analogues of Fermat's little theorem for higher algebraic structures, is theoretically complicated and difficult to implement. For further information see [Cohe], Chapter 9, or the original article cited therein, as well as the extensive explication in [Ries].

One might also ask whether one would obtain a definitive proof of primality by testing sufficiently many bases with the Miller-Rabin test. In fact, G. Miller has proved, on the assumption of the extended Riemann hypothesis (see page 196), that an odd natural number n is prime if and only if for all bases a, 1 ≤ a ≤ C · ln² n, the Miller-Rabin test indicates the primality of n (the constant C is specified in [Kobl], Section 5.2, as 2). Used in this way the Miller-Rabin test is a deterministic polynomial-time primality test that uses about 2.5 × 10^5 iterations, for primes of about 512 binary digits, to produce a definitive answer. If we suppose 10^(−1) seconds for each iteration (this is the order of magnitude of the time required for an exponentiation on a fast PC; cf. Appendix D), then a definitive test would take about seven hours. Considering that there is an unproven hypothesis to be reckoned with as well as a long time for the calculation, this theoretical result will satisfy neither the mathematical purists nor the computational pragmatists interested in fast procedures.
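A quick plausibility check of these figures (a worked calculation, not from the original text), taking C = 2 and a candidate n of 512 binary digits:

  ln n ≈ ln(2^512) = 512 · ln 2 ≈ 355,   so   C · ln² n ≈ 2 · 355² ≈ 2.5 × 10^5 bases,

and 2.5 × 10^5 exponentiations at roughly 10^(−1) s each give about 2.5 × 10^4 s, that is, approximately seven hours.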

However, Henri Cohen seems to be answering the above-quoted question of Knuth when he categorically asserts ([Cohe], Section 8.2), "Primality testing, however, requires rigorous mathematical proofs."

[6]T is for trillion, by which Asimov denotes the order of magnitude 10^12. Thus T-281 stands for 10^(12·281.25) = 10^3375 ≈ 2^11211.5.

[7]For a discussion of the complexity-theoretic aspects of cryptography one might have a look at [HKW], Chapter 6, or [Schn], Sections 19.3 and 20.8, and the many further references therein. One should also read the footnote in the present book on page 188.

[8]The prime number theorem was proved independently in 1896 by Jacques Hadamard and Charles-Jean de la Vallée Poussin (see [Bund], Section 7.3).

[9]We have 9³ ≡ 1 mod 91; in fact, 3 is the order of 9 and 6 is the order of 10 modulo 91. Therefore, 9^45 ≡ 9^(3·15) ≡ 1 mod 91 and 10^45 ≡ 10^(6·7+3) ≡ 10³ ≡ −1 mod 91.

[10]In the article "The Generation of Random Numbers That Are Probably Prime," by P. Beauchemin, G. Brassard, C. Crépeau, C. Goutier, and C. Pomerance, Journal of Cryptology 1, 1, 1988, it is suggested that Knuth's assertion holds only because the probability of error for most composite numbers is significantly below the worst-case bound. Otherwise, the error considered by Knuth would be significantly greater than the given value of 10^(−6).

[11]Cohen suggests in this connection that the practicably implementable variant of the APRCL algorithm is again probabilistic, but that nonetheless a less practical, but deterministic, version exists (see [Cohe], Chapter 9).

