Appendix 9B The Complexity of Algorithms



The central issue in assessing the resistance of an encryption algorithm to cryptanalysis is the amount of time that a given type of attack will take. Typically, one cannot be sure that one has found the most efficient attack algorithm. The most that one can say is that for a particular algorithm, the level of effort for an attack is of a particular order of magnitude. One can then compare that order of magnitude to the speed of current or predicted processors to determine the level of security of a particular algorithm.

A common measure of the efficiency of an algorithm is its time complexity. We define the time complexity of an algorithm to be f(n) if, for all n and all inputs of length n, the execution of the algorithm takes at most f(n) steps. Thus, for a given size of input and a given processor speed, the time complexity is an upper bound on the execution time.



There are several ambiguities here. First, the definition of a step is not precise. A step could be a single operation of a Turing machine, a single processor machine instruction, a single high-level language instruction, and so on. However, these various definitions of step should all be related by simple multiplicative constants. For very large values of n, these constants are not important. What is important is how fast the relative execution time is growing. For example, if we are concerned about whether to use 50-digit (n = 10^50) or 100-digit (n = 10^100) keys for RSA, it is not necessary (or really possible) to know exactly how long it would take to break each size of key. Rather, we are interested in ballpark figures for level of effort and in knowing how much extra relative effort is required for the larger key size.

A second issue is that, generally speaking, we cannot pin down an exact formula for f(n). We can only approximate it. But again, we are primarily interested in the rate of change of f(n) as n becomes very large.

There is a standard mathematical notation, known as the "big-O" notation, for characterizing the time complexity of algorithms that is useful in this context. The definition is as follows: f(n) = O(g(n)) if and only if there exist two numbers a and M such that

Equation 9-4

|f(n)| ≤ a × |g(n)|,   n ≥ M

An example helps clarify the use of this notation. Suppose we wish to evaluate a general polynomial of the form

p(x) = a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0

The following simple-minded algorithm is from [POHL81]:

algorithm P1;
   n, i, j: integer; x, polyval: real;
   a, S: array [0..100] of real;
   begin
      read(x, n);
      for i := 0 upto n do
      begin
         S[i] := 1; read(a[i]);
         for j := 1 upto i do S[i] := x * S[i];
         S[i] := a[i] * S[i]
      end;
      polyval := 0;
      for i := 0 upto n do polyval := polyval + S[i];
      write('value at', x, 'is', polyval)
   end.


In this algorithm, each subexpression is evaluated separately. Each S[i] requires (i + 1) multiplications: i multiplications to compute x^i and one to multiply by a[i]. Computing all n + 1 terms requires

(0 + 1) + (1 + 1) + ... + (n + 1) = (n + 2)(n + 1)/2

multiplications. There are also (n + 1) additions, which we can ignore relative to the much larger number of multiplications. Thus, the time complexity of this algorithm is f(n) = (n + 2)(n + 1)/2. We now show that f(n) = O(n^2). From the definition of Equation (9.4), we want to show that for a = 1 and M = 4, the relationship holds for g(n) = n^2. We do this by induction on n. The relationship holds for n = 4 because (4 + 2)(4 + 1)/2 = 15 < 4^2 = 16. Now assume that it holds for all values of n up to k [i.e., (k + 2)(k + 1)/2 < k^2]. Then, with n = k + 1,

(k + 3)(k + 2)/2 = (k + 2)(k + 1)/2 + (k + 2)
                 ≤ k^2 + k + 2
                 ≤ k^2 + 2k + 1 = (k + 1)^2

Therefore, the result is true for n = k + 1.
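
To make the count concrete, here is a minimal Python rendering of P1 (an illustrative sketch, not code from [POHL81]) that tallies multiplications and checks them against f(n) = (n + 2)(n + 1)/2 and the O(n^2) bound:

# Illustrative Python version of algorithm P1 with a multiplication counter.
def p1_polyval(a, x):
    """Evaluate p(x) = a[n]x^n + ... + a[1]x + a[0] term by term.
    Returns (value, number of multiplications performed)."""
    mults = 0
    terms = []
    for i in range(len(a)):        # i = 0, 1, ..., n
        s = 1.0
        for _ in range(i):         # i multiplications to form x^i
            s *= x
            mults += 1
        s *= a[i]                  # one more multiplication, by a[i]
        mults += 1
        terms.append(s)
    return sum(terms), mults       # the (n + 1) additions are ignored

# Verify f(n) = (n + 2)(n + 1)/2 and f(n) <= n^2 for n = 4, ..., 200.
for n in range(4, 201):
    _, mults = p1_polyval([1.0] * (n + 1), 2.0)
    assert mults == (n + 2) * (n + 1) // 2
    assert mults <= n * n          # a = 1, M = 4 in Equation (9.4)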

In general, the big-O notation makes use of the term that grows the fastest. For example (see the numerical check after this list),

  1. O[ax^7 + 3x^3 + sin(x)] = O(ax^7) = O(x^7)

  2. O(e^n + an^10) = O(e^n)

  3. O(n! + n^50) = O(n!)
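
A quick numerical check (an illustrative snippet, not from the text) shows the ratio of the full expression to its dominant term approaching 1 as the argument grows, which is why the slower-growing terms can be dropped:

import math

a = 2.0                               # arbitrary positive constant
for x in (10.0, 100.0, 1000.0):       # identity 1: ax^7 dominates
    full = a * x**7 + 3 * x**3 + math.sin(x)
    print(x, full / (a * x**7))       # ratio tends to 1 as x grows

for n in (50, 100, 200):              # identity 2: e^n dominates an^10
    full = math.exp(n) + a * n**10
    print(n, full / math.exp(n))      # ratio tends to 1 as n grows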

There is much more to the big-O notation, with fascinating ramifications. For the interested reader, two of the best accounts are in [GRAH94] and [KNUT97].

An algorithm with an input of size n is said to be

  • Linear: If the running time is O(n)

  • Polynomial: If the running time is O(n^t) for some constant t

  • Exponential: If the running time is O(t^h(n)) for some constant t and polynomial h(n)

Generally, a problem that can be solved in polynomial time is considered feasible, whereas anything worse than polynomial time, especially exponential time, is considered infeasible. But you must be careful with these terms. First, if the size of the input is small enough, even very complex algorithms become feasible. Suppose, for example, that you have a system that can execute 10^12 operations per unit time. Table 9.5 shows the size of input that can be handled in one time unit for algorithms of various complexities. For algorithms of exponential or factorial time, only very small inputs can be accommodated.

Table 9.5. Level of Effort for Various Levels of Complexity

Complexity      Size                            Operations
log2(n)         2^(10^12) = 10^(3 x 10^11)      10^12
n               10^12                           10^12
n^2             10^6                            10^12
n^6             10^2                            10^12
2^n             39                              10^12
n!              15                              10^12
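
The Size column of Table 9.5 can be reproduced with a short script (a sketch; the budget of 10^12 operations per time unit comes from the table, everything else is illustrative):

import math

BUDGET = 10**12                     # operations available in one time unit

def largest_n(f):
    """Largest n such that f(n) <= BUDGET (doubling, then binary search)."""
    lo, hi = 1, 2
    while f(hi) <= BUDGET:          # double until f(hi) exceeds the budget
        hi *= 2
    while lo < hi:                  # shrink [lo, hi] to the largest feasible n
        mid = (lo + hi + 1) // 2
        if f(mid) <= BUDGET:
            lo = mid
        else:
            hi = mid - 1
    return lo

print(largest_n(lambda n: n))                  # 10^12
print(largest_n(lambda n: n**2))               # 10^6
print(largest_n(lambda n: n**6))               # 100
print(largest_n(lambda n: 2**n))               # 39
print(largest_n(lambda n: math.factorial(n)))  # 14 (15! is just over 10^12;
                                               # the table rounds to 15)
# The log2(n) row is omitted: n = 2^(10^12) is far too large to compute with.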


The second thing to be careful about is the way in which the input is characterized. For example, the complexity of cryptanalysis of an encryption algorithm can be characterized equally well in terms of the number of possible keys or the length of the key. For the Advanced Encryption Standard (AES), for example, the number of possible keys is 2^128 and the length of the key is 128 bits. If we consider a single encryption to be a "step" and the number of possible keys to be N = 2^n, then the time complexity of the algorithm is linear in terms of the number of keys [O(N)] but exponential in terms of the length of the key [O(2^n)].
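
A back-of-the-envelope calculation (a sketch; the rate of 10^12 trial encryptions per time unit matches Table 9.5, with the time unit taken here as one second) shows what the exponential view implies:

# Exhaustive search of a 128-bit key space at 10^12 trials per second.
keys = 2**128                       # about 3.4 x 10^38 possible AES keys
rate = 10**12                       # assumed trial encryptions per second
seconds_per_year = 3600 * 24 * 365
print(keys / (rate * seconds_per_year))   # roughly 1.1 x 10^19 years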



