Chapter 5: Cryptographic Techniques


In this chapter, we introduce some basic cryptographic techniques that are used in the rest of this book. More specifically, we introduce the topic in Section 5.1; address cryptographic hash functions, secret key cryptography, and public key cryptography in Sections 5.2, 5.3, and 5.4, respectively; address digital envelopes in Section 5.5; and elaborate on some techniques to protect private keys and generate pseudorandom bit sequences in Sections 5.6 and 5.7. Finally, we discuss some legal issues that surround the use of cryptography in Section 5.8, and introduce a notation that can be used to describe cryptographic protocols and applications in Section 5.9. Note that this chapter is far too short to provide a comprehensive overview of all cryptographic techniques that are relevant for Internet and intranet security. For this purpose, you must read one (or several) of the many books on cryptography that are available today. Among these books, I particularly recommend [1–3].

5.1 INTRODUCTION

According to [3], the term cryptography refers to the study of mathematical techniques related to various aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication. It is commonly agreed that cryptography is a major enabling technology for network security, and that cryptographic algorithms and protocols are the essential building blocks:

  • A cryptographic algorithm is an algorithm defined by a sequence of steps precisely specifying the actions required to achieve a specific security objective.

  • A cryptographic protocol is a distributed algorithm defined by a sequence of steps precisely specifying the actions required of two or more entities to achieve a specific security objective.

Cryptographic algorithms and protocols are studied in both theory and practice. The aim is to design algorithms and protocols that are both secure and practical. Note, however, that there are at least two basic approaches to discussing the security of cryptographic algorithms and protocols:

  • Computational security measures the computational effort required to break a specific cryptographic algorithm or protocol. An algorithm or protocol is said to be computationally secure if the best method for breaking it requires at least n operations, where n is some specified, usually very large, number. The problem is that no known practical algorithm or protocol can be proven to be secure under this definition. In practice, an algorithm or protocol is called computationally secure if the best known method of breaking it requires an unreasonably large amount of computational resources (e.g., time or memory). Another approach is to provide evidence of computational security by reducing the security of an algorithm or protocol to some well-studied problem that is thought to be difficult. For example, it may be possible to prove that an algorithm or protocol is secure if a given integer cannot be factored or a discrete logarithm cannot be computed. Algorithms and protocols of this type are sometimes called provably secure, but it must be understood that this approach only provides a proof of security relative to the difficulty of solving another problem, not an absolute proof of security.

  • Unconditional security measures the security of a cryptographic algorithm or protocol when there is no upper bound placed on the amount of computational resources for an adversary. Consequently, an algorithm or protocol is called unconditionally secure if it cannot be broken, even with infinite time and memory at hand.
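The notion of computational security can be made concrete with a back-of-the-envelope calculation: how long would an exhaustive key search take for keys of various lengths? The following sketch assumes, purely for illustration, an adversary who can test 10^12 keys per second (roughly the supercomputer figure cited later in this section); the function name and rate are ours, not from any standard.

```python
# Rough work-factor estimate for a brute-force key search, illustrating
# why a cipher with a sufficiently large key space is called
# "computationally secure". The rate of 10**12 trials per second is an
# assumed figure for illustration only.

def brute_force_years(key_bits: int, trials_per_second: float = 1e12) -> float:
    """Expected years to find a key by exhaustive search.

    On average, half of the 2**key_bits candidate keys must be tried.
    """
    expected_trials = 2 ** (key_bits - 1)
    seconds = expected_trials / trials_per_second
    return seconds / (365.25 * 24 * 3600)

if __name__ == "__main__":
    for bits in (56, 80, 128):
        print(f"{bits}-bit key: about {brute_force_years(bits):.3e} years")
```

A 56-bit key falls in hours under this assumption, whereas a 128-bit key requires on the order of 10^18 years; the algorithm is unchanged, only the key space grows, which is exactly the sense in which security here is computational rather than absolute.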

The computational security of a cryptographic algorithm or protocol can be studied from the point of view of computational complexity, whereas unconditional security cannot be studied from this point of view, because computational resources are allowed to be infinite. The appropriate framework in which unconditional security must be studied is probability theory, together with its application in communication and information theory.

Obviously, unconditional security is preferable from a security point of view, because it protects against an arbitrarily powerful adversary. Unfortunately, unconditional security is hard and expensive to achieve in most cases, and sometimes impossible. For example, the theory shows that unconditionally secure encryption systems must use keys that are at least as long as the messages they protect, making them unsuitable for most practical applications. Similarly, there is no such thing as an unconditionally secure public key cryptosystem. The best we can achieve is provable security, in the sense that the problem of breaking the public key cryptosystem is arguably at least as difficult as solving a complex mathematical problem. Consequently, one is satisfied with computational security, given some reasonable assumptions about the computational power of a potential adversary. But keep in mind that the security that a computationally secure cryptographic algorithm or protocol may provide is, for the most part, based on the perceived difficulty of a mathematical problem, such as the integer factorization problem or the discrete logarithm problem in the case of public key cryptography. Confidence in the security of such systems may be high because the problems are public and many minds have attempted to attack them. However, the vulnerability remains that a new insight or computing technology may defeat this type of cryptography. There are at least two developments that provide some evidence for this intrinsic vulnerability:
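The classic example of an unconditionally secure encryption system is the one-time pad, which also illustrates why such systems need very long keys: the key must be truly random, as long as the message, and never reused. The following is a minimal sketch, not a production implementation; in particular, `os.urandom` merely stands in for a source of true randomness.

```python
# Minimal one-time pad sketch. Unconditional security holds only if the
# key is truly random, exactly as long as the message, and used once.
import os

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the plaintext with a one-time key of equal length."""
    if len(key) != len(plaintext):
        raise ValueError("key must be exactly as long as the message")
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

if __name__ == "__main__":
    message = b"attack at dawn"
    key = os.urandom(len(message))  # stand-in for a true random source
    ciphertext = otp_encrypt(message, key)
    assert otp_decrypt(ciphertext, key) == message
```

The key-management burden is evident even in this toy: every message consumes key material of its own length, which is precisely what makes the one-time pad impractical for most applications despite its unconditional security.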

  • In 1994, Peter W. Shor proposed randomized polynomial-time algorithms for computing discrete logarithms and factoring integers on a quantum computer, a computational device based on quantum mechanical principles [4, 5]. Presently, it is not known how to actually build a quantum computer, or if this is even possible.

  • Also in 1994, Len M. Adleman[1] demonstrated the feasibility of using tools from molecular biology to solve an instance of the directed Hamiltonian path problem, which is known to be hard[2] [6]. The problem instance was encoded in molecules of deoxyribonucleic acid (DNA), and the steps of the computation were performed with standard protocols and enzymes. Adleman notes that while the fastest currently available supercomputers can execute approximately 10^12 operations per second, it is plausible for DNA computers to execute 10^20 or even more operations per second. Moreover, a DNA computer would be far more energy efficient than existing supercomputers. As with the quantum computer, it is not clear at present whether it is feasible to actually build a DNA computer with such performance characteristics. Further information on DNA computing can be found in [7].

Should either quantum computers or DNA computers ever become practical, they would have a tremendous impact on cryptography. In fact, many cryptographic algorithms and protocols that are computationally secure today would be rendered worthless. This is particularly true for algorithms and protocols that use public key cryptography.

Cryptographic algorithms and protocols are used to establish secured channels (in terms of authenticity and integrity, as well as confidentiality). Note the subtle difference between a secure channel and a secured channel: certain channels are assumed to be secure, such as trusted couriers and personal contacts between communicating parties, whereas other channels must be secured by physical or cryptographic techniques. Physical security may be established through physical means, such as dedicated communication links with corresponding access controls put in place, or the use of quantum cryptography. In contrast to conventional cryptography, the security of quantum cryptography does not rely on any complexity-theoretic or probability-theoretic assumptions, but is based on the Heisenberg uncertainty principle of quantum physics [8]. As such, quantum cryptography is immune to advances in computing power and human cleverness. In the future, quantum cryptography may provide a physical alternative to unconditionally secure cryptographic algorithms and protocols. In the meantime, however, conventional and computationally secure cryptographic algorithms and protocols are much easier to use and deploy. Consequently, we are not going to delve into the details of quantum cryptography in this book.

[1]Len M. Adleman is a coinventor of the Rivest, Shamir, and Adleman (RSA) cryptosystem.

[2]In terms of computational complexity theory, the directed Hamiltonian path problem is NP-complete.




Internet and Intranet Security
ISBN: 1580531660
Year: 2002
Pages: 144
