Algorithms




The algorithm is essentially the mathematical building block with which encryption techniques are constructed. Consider it the overall blueprint for any given cryptographic process. The algorithm is what's referenced when an encryption program alters your original data. The same algorithm must be used to reverse the encryption process. With symmetric-key systems, the private key used by the algorithm must be transmitted securely to the trusted parties before any transfers of encoded data can take place. This is one of the greatest challenges in the cryptography routine and will lead to our discussion of the two most common types of encryption and the algorithms they employ.

The strength of any given algorithm is generally measured by the length of the key it uses. A key is simply a string of code that tells the algorithm exactly how to scramble the data it's working with. When this same key is used in a reverse iteration of the original algorithm, the data is converted from ciphertext back into plain text. We've all heard the term 128-bit used to measure the strength of Internet browser security. This number is simply the benchmark for what's called strong encryption: any encryption process using less than a 128-bit key can't be called strong. Until January 2000, the U.S. government had an export ban on products using 128-bit encryption.

An algorithm's strength is also determined by the number of rounds it goes through in its process. Think of a round as a single stir in the mixing pot: the more rounds the algorithm goes through, the more stirred up the data gets. Different algorithms also encrypt data in blocks of varying sizes. These blocks, typically 64 to 128 bits, are just slices of the original data, broken into smaller, more manageable chunks. Dividing the data this way increases cipher strength while reducing processing time: individual blocks are encrypted first, and the resulting blocks of encrypted data are then combined.
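
To make the block idea concrete, here is a minimal Python sketch (the function name and the zero-byte padding are illustrative choices, not anything a real cipher prescribes) that carves a message into the 64-bit chunks a cipher such as DES works on:

    # A toy illustration: carve a message into 64-bit (8-byte) blocks,
    # padding the tail with zero bytes so every block is full width.
    # (Real ciphers use more careful padding schemes; this only shows
    # the slicing itself.)

    def split_into_blocks(data, block_size=8):
        if len(data) % block_size:
            data += b"\x00" * (block_size - len(data) % block_size)
        return [data[i:i + block_size] for i in range(0, len(data), block_size)]

    print(split_into_blocks(b"ATTACK AT DAWN"))
    # [b'ATTACK A', b'T DAWN\x00\x00'] -- two 64-bit blocks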

Although it's highly unlikely that you'll need to memorize the math employed by a particular algorithm, the exam will require your knowledge of the most commonly used ciphers. You should also be able to distinguish which category an algorithm belongs in: symmetric, asymmetric, hash, and so on. Let's take a look at the three basic categories of algorithms in use today.

Note 

An algorithm provides a road map for any given cryptographic process. The key an algorithm uses gives it the information needed to apply its math in a unique way.

Symmetric Algorithms

As we briefly discussed in Chapter 3, symmetric (or private-key) encryption uses a single key to encode and decode messages. This approach is fast and capable, but the problem lies in exchanging the private key securely. Before any encrypted data can be exchanged, both parties must possess the same key. In some cases, keys are exchanged through non-digital means, such as snail mail or a face-to-face meeting. Believe it or not, people still have meetings, shake one another's hands, and occasionally exchange private keys. Because of this key-exchange issue, symmetric encryption techniques are a poor fit for dealings such as Internet e-commerce. However, the private-key method is still widely used in environments where the benefit of its speed is vital and the key-exchange dilemma is inconsequential. These situations include large data transmissions, recurring transfers of data between the same parties, smart card implementations, and local storage of encrypted information. Here are the algorithms native to the symmetric encryption method that you should know intimately.
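
As a quick illustration of the single-key idea, here is a minimal sketch using the Fernet recipe from the third-party Python cryptography package (an illustrative choice; nothing in this chapter prescribes a particular library). The generated key is the shared secret that would somehow have to reach the other party securely:

    # A minimal sketch of symmetric encryption: the SAME key both
    # encrypts and decrypts, so it must be shared out of band.
    # Requires: pip install cryptography

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # the single shared secret
    cipher = Fernet(key)

    token = cipher.encrypt(b"Meet at the usual place.")
    print(cipher.decrypt(token))     # b'Meet at the usual place.'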

Note 

Symmetric encryption is faster than asymmetric but because the secret key must be exchanged, it is not suited for all transactions.

Advanced Encryption Standard (AES)

Regulated by the National Institute of Standards and Technology (NIST), the Advanced Encryption Standard (AES) is the Federal Information Processing Standard (FIPS) algorithm used by U.S. government organizations. NIST recently selected the Rijndael algorithm as a replacement for the DES algorithm, which was the previous government standard.

Data Encryption Standard (DES)

DES, developed in 1975 by IBM and specified by the American National Standards Institute (ANSI) in 1981 as ANSI X3.92, is among the most popular symmetric-key systems available today. DES is known as a 64-bit block cipher, which means it works with 64-bit plain text blocks and generates 64-bit ciphertext blocks as a result of its process. Using a 56-bit key, DES encrypts the 64-bit blocks of data in a 16-round process. A 64-bit block is equal to 16 hexadecimal digits. After the data has been sectioned into 64-bit chunks, each chunk is permutated (or systematically rearranged) to alter its bit order. Each 64-bit block is then divided into two 32-bit blocks: a right and a left block.

The 56-bit key that DES uses is actually 64 bits long. However, every 8th bit in the key (bit numbers 8, 16, 24, 32, 40, 48, 56, and 64) is not used by the algorithm but is instead reserved for error detection. Next, the sixteen 48-bit subkeys used in the 16 rounds of the algorithm are generated, through a process called key scheduling. The key-scheduling routine produces subkeys that are intended to emulate randomly produced keys. To generate them, the 56-bit key is split into two 28-bit halves. Each half is rotated, and the rejoined halves are then compressed to 48 bits. These 48-bit subkeys are applied to the 32-bit blocks of data until all 16 rounds are complete. The data is now encrypted, and the only way (theoretically) to decrypt it is to apply the original key with the same algorithm in reverse.
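
The parity-bit arrangement is easy to demonstrate. The sketch below is a toy, not real DES (the sample key value is taken from the commonly published DES walkthrough); it simply strips every 8th bit from a 64-bit key, leaving the 56 bits the algorithm actually uses:

    def strip_parity(key64):
        """Drop bits 8, 16, ..., 64 (the parity bits), keeping 56 bits."""
        key56 = 0
        for bit in range(64):                 # bit 0 = most significant
            if (bit + 1) % 8 == 0:            # every 8th bit is parity
                continue
            key56 = (key56 << 1) | ((key64 >> (63 - bit)) & 1)
        return key56

    key = 0x133457799BBCDFF1                  # classic textbook sample key
    print(f"{strip_parity(key):014x}")        # 14 hex digits = 56 keying bits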

Note that this is a simplified explanation of the DES algorithm, offered to give you a basic understanding of its inner workings. DES uses only a 56-bit key, technically classifying it as an all-around weak cipher. Another frailty of DES, and of other block ciphers as well, is the weak-key factor. DES will occasionally generate weak keys, resulting in regularities in its encryption. This can provide a way to recover plain text within an encrypted message: a set of weak keys can have identical encryption and decryption characteristics, increasing the chances of the encryption being cracked. Understand, though, that this scenario is highly unlikely, and the number of weak keys, compared to the number of possible keys, is more or less trivial.

So, why apply DES only once when you can apply it three times at triple the cost?

Triple DES

Very simply put, Triple DES (also known as 3DES) is just DES applied three times. The combined key length is a hefty 168 bits. Voilà, we now have strong encryption. It comes at a price, though: Triple DES slows down the process somewhat. However, it's still faster than other private-key methods available. As defined in ANSI X9.52, there are several possible modes of operation used by Triple DES. One method is to make all three keys unique. Another makes keys 1 and 2 unique, with key 3 simply repeating key 1. Still another method is to use the same key three times. When this last method is used, the effective result is a Triple DES that's backward compatible with DES.
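
That backward compatibility falls straight out of Triple DES's encrypt-decrypt-encrypt (EDE) construction, which this toy sketch illustrates. XOR stands in for the real DES functions; the structure, not the cipher, is the point:

    def E(key, block):                 # stand-in for DES encryption
        return block ^ key

    def D(key, block):                 # stand-in for DES decryption
        return block ^ key

    def triple_des_ede(block, k1, k2, k3):
        # ANSI X9.52 EDE: encrypt with k1, decrypt with k2, encrypt with k3
        return E(k3, D(k2, E(k1, block)))

    block, k = 0xDEADBEEF, 0x12345678
    # With a single key, the inner decrypt cancels the first encrypt,
    # collapsing the result to plain single "DES".
    assert triple_des_ede(block, k, k, k) == E(k, block)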

Rijndael

Developed by two Belgian cryptographers, Vincent Rijmen and Joan Daemen, Rijndael (which sounds like rain doll) is an iterated block cipher that supports variable key and block lengths of 128, 192, or 256 bits. Its name is derived from parts of both developers' names. Rijndael has its roots in Square, a previous project of the same two cryptographers. Accepted in October 2000, Rijndael has become the new Advanced Encryption Standard (AES), replacing DES. This algorithm can be implemented to operate at speeds that are uncharacteristically high for a block cipher of its kind. It is also well suited for implementation on smart cards, as it can be installed and run with a small amount of code using little RAM. It uses a four-step, parallel series of rounds, making it stand up very well to cryptanalytic attacks.

Rijndael is also known for its ability to work proficiently in a wide range of hardware environments on various types of processors. It is purported to work well over voice and ISDN lines, ATM networks, and satellite transmissions; its developers state that as long as the processor in question can support the algorithm, it can be implemented in these and many other ways.
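
For a sense of what using AES looks like in practice, here is a minimal sketch with the third-party Python cryptography package's AES-GCM recipe (the library and mode are illustrative assumptions, not something the standard itself mandates); the 256-bit key is one of the three key sizes mentioned above:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)    # one of AES's key sizes
    nonce = os.urandom(12)                       # GCM needs a unique 96-bit nonce
    aes = AESGCM(key)

    ciphertext = aes.encrypt(nonce, b"smart card payload", None)
    print(aes.decrypt(nonce, ciphertext, None))  # b'smart card payload'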

Note 

Rijndael was selected as the Advanced Encryption Standard (AES).

Blowfish and Twofish

Developed in 1993 by Bruce Schneier, founder of Counterpane Internet Security, Blowfish is a symmetric 64-bit block cipher employing variable-length keys of 32 to 448 bits in a 16-round process. It can be used as a drop-in substitute for the slower IDEA and DES algorithms. Blowfish is unpatented, royalty-free, and requires no license to use.

Twofish, a 128-bit block cipher, is the follow-up project to Blowfish. It uses 128, 192, or 256-bit keys in a 16-round process. It is much faster than Blowfish and is also unpatented and free to use. The Twofish algorithm is efficient for use on smart cards and upon implementation, allows for performance trade-offs between speed, key scheduling, and memory usage.

Serpent

A finalist in the AES competition, Serpent ended up with 59 votes in the final round, coming in second to Rijndael, which won with 86 votes. Serpent, designed by Ross Anderson, Eli Biham, and Lars Knudsen, is a 128-bit block cipher using 128, 192, or 256-bit keys in a 32-round process. While slower than Rijndael (and indeed all of the other finalists), Serpent is still faster than DES. It uses a relatively simple, straightforward process, and its developers claim that although slower, it's more secure than Rijndael. I'll let you, future security professional, do your own cryptanalysis and come to your own conclusions.

Skipjack

Developed by the United States National Security Agency (NSA), the Skipjack algorithm is classified as secret, meaning its details have not been released to the public for scrutiny. As this led to much debate over its level of security, the government invited a group of cryptographers to dissect the algorithm in private, and they concluded in a report that it was indeed secure. The Skipjack algorithm processes 64-bit data chunks with an 80-bit key in 32 rounds of processing. It is the algorithm used in the Clipper chip. As a result of its secrecy, implementation of the algorithm is limited to government-authorized hardware manufacturers; it cannot be used in software.

International Data Encryption Algorithm (IDEA)

Widely known for its use within PGP, IDEA is a 64-bit block cipher that uses a 128-bit key in an 8-round process. The algorithm effectively uses 52 16-bit subkeys, which it generates from the original 128-bit key. The 128-bit key is first split into eight 16-bit keys. The original key bits are then shifted to make a new key, which is again split into eight 16-bit keys. This process is repeated until there are 52 keys.
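
That schedule is short enough to sketch directly. In the published IDEA schedule the shift is a 25-bit left rotation; the sample key below is an arbitrary illustration:

    MASK128 = (1 << 128) - 1

    def idea_subkeys(key128):
        subkeys = []
        while len(subkeys) < 52:
            for i in range(8):           # carve off eight 16-bit slices
                subkeys.append((key128 >> (112 - 16 * i)) & 0xFFFF)
            # rotate the whole 128-bit key left by 25 bits and repeat
            key128 = ((key128 << 25) | (key128 >> 103)) & MASK128
        return subkeys[:52]

    keys = idea_subkeys(0x0123456789ABCDEF0123456789ABCDEF)
    print(len(keys), hex(keys[0]))       # 52 0x123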

It is said to be faster than DES, but reports show that software implementations of IDEA operate at roughly the same speed as those using DES. Hardware implementations of IDEA, however, prove to be somewhat quicker than DES.

MARS

MARS (Multiplication, Addition, Rotation, Substitution), IBM's submission to AES as a replacement for DES, is a shared-key block cipher that supports 128-bit data blocks and uses a variable key length of 128 to over 400 bits. It is purported to be faster and more secure than DES. The algorithm has a small footprint, making it ideal for use in smart cards and other implementations where space is a concern.

RSA RC Series Algorithms

Ron Rivest, lead developer of the asymmetric RSA algorithm, collaborated with some other great minds at RSA Laboratories to develop a series of private-key algorithms as well. The latest effort in the RC (Ron's Code or Rivest's Cipher) series, RC6, was one of the five finalists in the AES competition. RC6 addressed some problems that studies revealed within its predecessor, RC5, concerning certain 'interesting theoretical attacks.' In line with the requirements of the AES, RC6 is a robust block cipher capable of handling 128-bit blocks of data. RC6 was designed so that the number of rounds, the size of the block it manipulates, and the size of the key are all adjustable. The key that RC6 uses has an upper limit of 2,040 bits. It is also well suited for hash functions, which we'll discuss later in the chapter.

Because the demands of the AES stated that the proposed algorithm work with 128-bit blocks, the RC5 design presented a complication. For RC5 to accomplish this, its design would have called for the use of two 64-bit registers (or chunks of memory), which would have conflicted with the architecture specified for AES. The developers went back to the drawing board and, with RC6, presented a way to achieve this result using four 32-bit registers. Although RC6 was not selected for the AES, the algorithm is still widely used in many software applications.

RC2, an earlier algorithm in the series, is a variable key-size, 64-bit block cipher that can be used as a drop-in substitute for the slower DES algorithm. RC4 is a speedy, variable key-size stream cipher found in products such as RSA SecurPC. A stream cipher encrypts data on the fly, making this algorithm work well in conjunction with SSL to encrypt data transferred between creators of secure Web sites and their customers.
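
To show how a stream cipher differs from the block ciphers above, here is a sketch of the widely published RC4-style construction (often called ARC4; purely illustrative, and long since considered too weak for new designs). A key-driven state table produces a pseudorandom byte stream that is XORed with the data on the fly, so the same routine both encrypts and decrypts:

    def arc4(key, data):
        # Key scheduling: permute the values 0..255 under key control.
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Keystream generation: XOR each data byte with a keystream byte.
        out, i, j = bytearray(), 0, 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    ct = arc4(b"Key", b"Plaintext")
    print(arc4(b"Key", ct))          # b'Plaintext' -- same routine decrypts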

A few other symmetric algorithms worth mentioning are the following:

  • GOST: A 64-bit symmetric algorithm from the former Soviet Union. It employs a 256-bit key and can be used in software and hardware implementations.

  • Tiny Encryption Algorithm (TEA): Designed by David Wheeler and Roger Needham, TEA is a 64-bit block cipher with a 128-bit key that takes a minimal amount of code to implement. TEA relies on a large number of rounds as opposed to a complex round function.

  • CAST: This is a Feistel cipher similar in style to DES. CAST comes in different strengths, namely CAST-128 and CAST-256. It is implemented within PGP, among other places.

There is an afterthought to the preceding discussion that should be addressed. You might be confused by the fact that two of the aforementioned private-key algorithms, IDEA and CAST, are used within PGP.

Note 

PGP also employs Triple DES. Being aware that PGP is a public-key encryption method used for secure e-mail, you might have asked, 'Why is PGP being discussed within the context of private-key algorithms?' The answer is that different types of algorithms can work together. The recent versions of PGP enlist the use of private-key algorithms for data transfers while using public-key algorithms such as Diffie-Hellman for the creation and exchange of keys.

Note 

Cryptosystems can employ both symmetric and asymmetric algorithms, simultaneously.

Asymmetric Algorithms

Asymmetric (or public-key) encryption uses a public key and a private key. As we've discussed, the private key is never shared in this method, whereas the public key is freely distributed. Someone wishing to send a public-key encrypted e-mail, for instance, would encrypt it using the recipient's readily available public key. Public keys can be obtained through key-managing servers or from the recipients themselves. If the sender were to digitally sign the message, the sender's private key would be used. Upon receiving the message, the recipient uses their private key, which is mathematically related to their public key, to decipher the message. If the e-mail message were digitally signed, the recipient would verify the signature with the sender's public key. Secure e-mail is but one of the many implementations of asymmetric encryption in use. Nevertheless, we're here to talk about the algorithms that are fit to be classified as asymmetric, so here they are.
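
First, though, a minimal sketch of the encrypt-with-public, decrypt-with-private flow just described, using RSA from the third-party Python cryptography package (the library, the OAEP padding, and the 2,048-bit key size are illustrative assumptions):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()    # this half is handed out freely

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"For your eyes only", oaep)
    print(private_key.decrypt(ciphertext, oaep))   # b'For your eyes only'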

RSA

Named after its developers-Ron Rivest, Adi Shamir, and Leonard Adleman-the RSA asymmetric algorithm uses some elaborate math to deliver its brand of encryption. Originally developed in 1977, the algorithm was finally released into the public domain in September 2000. The RSA public-key cryptosystem is known for its implementation in the better part of all Internet e-commerce activities and is included in many popular software applications, such as Microsoft Internet Explorer and Netscape Navigator. This algorithm provides a means for encryption, authentication, and integrity verification, the latter two features being achieved by the use of Digital Signatures (DS).

This cipher bases its strength on the difficulty of factoring very large numbers. RSA uses huge prime numbers in its computations, resulting in distinctively strong encryption. You might remember that prime numbers are simply numbers divisible only by themselves and 1, such as 13. However, RSA uses much larger prime numbers, exceeding 100 digits. The RSA algorithm takes two of these large primes and multiplies them together. The resultant product, called the modulus, is the basis for the formula that RSA uses. In turn, the results of this formula make up RSA's public and private keys.
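
Here is that arithmetic as a toy walk-through in Python, with primes far too small for real use (the specific numbers are the standard classroom example, not anything RSA products ship with):

    p, q = 61, 53             # two (absurdly small) primes
    n = p * q                 # the modulus: 3233
    phi = (p - 1) * (q - 1)   # 3120
    e = 17                    # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)       # private exponent: 2753 (Python 3.8+)

    m = 65                    # the "message", encoded as a number below n
    c = pow(m, e, n)          # encrypt with the public pair (e, n): 2790
    assert pow(c, d, n) == m  # decrypt with the private pair (d, n)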

Currently, there is no easy way to discover the private key if all one has is the public key. With today's technology, it is said to take an infeasible amount of time to factor these large numbers out of a strong public key of, say, 2,048 bits. Of course, people are constantly developing new ways to make computers work faster and think harder, and if a discovery were made that enabled the quick factoring of these large numbers, RSA would become useless.

It is notable that in a challenge by RSA Security Inc., one of these numbers was indeed factored. On August 22, 1999, the factorization of a relatively small number (155 digits/512 bits) was accomplished. It took a group of researchers using almost 300 computers more than seven months to achieve this. A 2,048-bit number would take an exponentially greater amount of time to crack and hasn't been done. If you're feeling optimistic, you can visit RSA Security's Factoring Challenge Web site at http://www.rsasecurity.com/rsalabs/challenges/factoring/index.html and give it a try.

Be aware that RSA is much slower than DES and other block ciphers; in software implementations, DES is roughly 100 times faster than RSA. It is also important to understand that key sizes for RSA need to be much greater than those of block ciphers such as DES in order to deliver similar strength. This contrast is due to the fact that in asymmetric methods, a detailed clue as to the value of the private key (in the form of the public key) is freely distributed for anyone to examine. For example, a 1,024-bit key size is recommended for use in a corporate environment, while a 2,048-bit key size should be reserved for only the most sensitive data transfers. These keys are 8 and 16 times larger, respectively, than the 128-bit strong key size of symmetric encryption. To make a long story short, the strength of an RSA key should not be compared to the strength of a DES or similar key based on size alone. The two methods are like the proverbial apple and orange.

Note 

RSA's strength is based on the complexity of factoring large numbers.

Diffie-Hellman (DH)

Developed by W. Diffie and M.E. Hellman at Stanford University in 1976, the Diffie-Hellman algorithm (or key agreement protocol) is the second most widely used asymmetric encryption algorithm, next to RSA. Stanford held the patent for the DH algorithm until 1997, when it was released to the public domain.

Operating on a basis similar to RSA's, DH provides a way to establish a shared private key across an open connection such as the Internet without first exchanging any secret data. Instead of the factoring problem that RSA uses, DH derives its security from the discrete logarithm problem. This mathematical dilemma is reportedly as hard to crack as RSA's technique, and the formula that produces its values runs on CPUs at comparable speeds. Because of its strength, the discrete logarithm problem is implemented in other asymmetric cryptosystems, such as ElGamal and DSS.
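
A toy exchange shows the idea (the tiny prime and generator are classroom numbers; real DH uses primes hundreds of digits long):

    p, g = 23, 5              # public values: a prime modulus and a generator

    a, b = 6, 15              # each party's secret exponent, never transmitted

    A = pow(g, a, p)          # sent across the open connection
    B = pow(g, b, p)          # sent across the open connection

    # Both sides compute the same shared secret; an eavesdropper who sees
    # only p, g, A, and B faces the discrete logarithm problem.
    assert pow(B, a, p) == pow(A, b, p)
    print(pow(B, a, p))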

Elgamal

Elgamal is a DH-inspired public-key cryptosystem developed in 1985 by Dr. Taher Elgamal. It is reported that there has never been a successful assault on this algorithm. As with DH, Elgamal gets its strength by means of the discrete logarithm problem.

Note 

Asymmetric encryption uses two keys, a public key and a private key. Because the public key is out in the open, this method offers clues about the private key used to encrypt plain text. For this reason, asymmetric encryption requires larger keys than symmetric encryption to provide the same strength.

Hash Algorithms

As we've learned, there are many uses for the hashing (or signature-only) process, ranging from safely storing passwords to creating fingerprints of data that are used to apply digital signatures. The various hashing algorithms achieve their results through different processes, but these processes are all irreversible. Two distinct features of hash algorithms are that input strings can be of any length, while output strings have a fixed length. Hashing data is fundamentally different from encrypting it, because a hash cannot yield the original data that created it. Nevertheless, hashing plays a vital role in cryptography. Essentially, to produce a hash (a.k.a. message digest, fingerprint, md5sum), a formula is applied to a string of text. The formula returns a smaller value, called the hash. This condensed representation of the original data is the unique result of a specific formula applied to a specific string of text. The idea is that no other string of text (hashed by the same algorithm) will produce the same hash value. The hashing process makes it possible to do the following (a short sketch of the first two uses follows the list):

  • Store passwords in unreadable form: When only a hash value of a password is stored on a system, the risk of passwords being exposed by an attack is diminished. A hash of the password is generated whenever a user logs in; that hash is compared to the one stored on the system, and access is granted or denied accordingly.

  • Verify the integrity of transferred files: When you download a file from a trusted source, that source might also post a hash value of the file for viewing. Upon downloading the file, you can apply the same algorithm used to produce the original hash and compare the two values in order to verify that the file has transferred without corruption or alteration.

  • Create hash tables (or indexes): A list of names, for example, can be converted to hashes, stored in a database, and made available for query. Upon entering a name to look up, the system makes a hash of the name, finds its match in the database, and sends the user directly to that entry. This provides a search method that outperforms a standard, database-wide comparison search of all the entries.

  • Verify digitally signed electronic documents: The creation of a hash is an integral part of the DS process. Wrapped up within a DS is a hash of the original message, which enables a recipient to verify the signature and the message contents.
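
As promised, here is a minimal sketch of the first two uses, built on Python's standard hashlib module (SHA-1 is used purely for illustration; real password storage would also add a per-user salt and a deliberately slow hash):

    import hashlib

    # 1. Password storage: keep only the digest, never the password.
    stored = hashlib.sha1(b"s3cret!").hexdigest()
    attempt = hashlib.sha1(b"s3cret!").hexdigest()
    print(attempt == stored)              # True -> grant access

    # 2. File integrity: recompute the digest and compare it with the
    #    value the source published alongside the download.
    data = b"...downloaded file contents..."
    print(hashlib.sha1(data).hexdigest())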

Let's examine a few of the algorithms that possess this unique ability.

Secure Hash Algorithm (SHA)

The SHA algorithm was developed originally by NIST in 1993 but technical revisions in 1995 led to the release of SHA-1, which is the version used today. Specified in Federal Information Processing Standards (FIPS) 180-1, SHA-1 is the U.S. government standard hash algorithm. The government also requires the use of SHA-1 with DSA.

The SHA-1 algorithm creates a 160-bit message digest, processing each 512-bit block of input through 80 rounds of operations (commonly described as four rounds of 20). It also uses a process known as message padding to force the size of the pre-hashed text string to be a multiple of 512 bits. SHA-1 does not typically use any shared secrets or keys to accomplish this fingerprinting, although keyed variants are possible. Instead, security is provided by the fact that it's computationally infeasible for two different strings of text to produce identical message digests. More importantly, it's infeasible that an altered string of text would produce the same fingerprint as its unaltered, original counterpart. This is why a hash value comparison is a trusted method for verification of data. So you see, the security of SHA-1 is assumed because of the mathematical improbabilities laid out by this algorithm. If an assumption of security gives you an uneasy feeling, great!

You must be paying attention. Although not required by the exam, comprehension of the math behind SHA-1 (and any of the other algorithms) is the only way to wash away your fears. It is recommended that as a future security professional, you gain a broader understanding of these mathematical concepts. However, it's okay to wait until you pass the test to take that step.
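
Two of the properties just described are easy to observe with Python's standard hashlib module: the digest is always 160 bits (40 hex characters) regardless of input length, and even a one-character change yields an entirely different fingerprint:

    import hashlib

    short = hashlib.sha1(b"a").hexdigest()
    long_ = hashlib.sha1(b"a" * 1_000_000).hexdigest()
    print(len(short), len(long_))         # 40 40 -- always 160 bits

    print(hashlib.sha1(b"Attack at dawn").hexdigest())
    print(hashlib.sha1(b"Attack at dusk").hexdigest())  # nothing alike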

Note 

SHA-1 is the U.S. government standard hash algorithm.

MD5

MD5 is the latest in the MD series of hash algorithms developed by Ron Rivest. Introduced in 1991, MD5 addressed some security weaknesses found in its predecessor, MD4. Using a four-round process, this algorithm produces a 128-bit message digest, making it faster but less secure than SHA-1. MD5's formula also includes a step that pads the message length before hashing occurs. MD5's source code is freely available on the Internet and it can be put to use in a wide array of software and hardware implementations.

Note 

Hashing produces a fixed-length string that represents data in a unique way; the hash does not actually contain the data itself.





