ADVANCED ENCRYPTION: THE NEED TO CONCEAL




On German television several years ago, a stunned audience looked on as an unsuspecting Web surfer had his computer scanned while he was visiting a site. The site operators determined that a particular on-line banking program was installed on his computer, and they remotely modified a file in it so that the next time the user connected to his bank on-line, he also directed his bank (unbeknownst to him) to send a payment to the owners of that Web site.

The vulnerability of computer data affects everyone. Whenever a computer is connected to a network, be that a corporate intranet or the Internet, unless proper precautions are taken, the data residing in the machine can be accessed or even modified by another knowledgeable user. Even computer data that the user may believe to be deleted or overwritten can be retrieved. Courts now routinely subpoena individuals’ and companies’ magnetic media as evidence; forensic experts can reconstruct data files that have been erased. In these cases, possession is not nine-tenths of the law. The best way to protect electronic data is to encrypt it.

The purpose of encryption is to render a document unreadable by all except those authorized to read it. The content of the original document, referred to by cryptographers as “plaintext,” is scrambled using an algorithm and a variable, or key. The key is a randomly selected string of numbers; generally speaking, the longer the string, the stronger the security.

Encryption has been around since the dawn of recorded history, and although computers have made it more accessible, they are certainly not a requirement; even provably unbreakable encryption predates them. One precomputer method is the conceptually simple, yet very strong, encryption scheme known as the one-time pad, developed in 1926 by Gilbert S. Vernam (see sidebar, “Computer-Free Encryption”).

start sidebar
Computer-Free Encryption

The durable encrypting scheme known as the one-time pad gets its name from the use of a key once and once only for just one message. It works like this:

Toni, the sender of a sensitive message, wakes up one morning and starts shouting out two-digit numbers at random: 56, 34, 01, 92, 27, 11, and so on. These numbers become the key. Toni then assigns a sequential number to each letter of the alphabet: A=01, B=02, C=03, D=04, E=05, and so on.

Next, she encodes the plaintext word “hello,” which, in accordance with the preceding sequential numbering of the letters, corresponds to the sequence 08, 05, 12, 12, and 15. She then does a simple modulo-10 addition with no carry, using the key she generated earlier. In other words,

    H  E  L  L  O
   08 05 12 12 15
 + 56 34 01 92 27
 ----------------
 = 54 39 13 04 32

This last sequence (54, 39, 13, 04, 32) is the ciphertext, which gets sent to Wolfgang, the intended recipient. Note that the same plaintext letters do not necessarily get encrypted into the same ciphertext symbols (the letter L is both 13 and 04 in this case).

Wolfgang has an exact copy of the key (56, 34, and so on). To decode Toni’s message, he does the reverse operation, again with no carry:

   54 39 13 04 32
 - 56 34 01 92 27
 ----------------
 = 08 05 12 12 15
 =  H  E  L  L  O
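The sidebar’s digit-wise arithmetic is easy to mechanize. The following Python sketch (a toy illustration of the scheme just described, not production cryptography) reproduces Toni’s encoding and Wolfgang’s decoding:

```python
def pair_add(p: int, k: int, sign: int = 1) -> int:
    # Digit-wise modulo-10 addition (sign=1) or subtraction (sign=-1), no carry
    tens = (p // 10 + sign * (k // 10)) % 10
    ones = (p % 10 + sign * (k % 10)) % 10
    return tens * 10 + ones

def otp_encrypt(plaintext: str, key: list) -> list:
    nums = [ord(c) - ord('A') + 1 for c in plaintext]   # A=01, B=02, ...
    return [pair_add(p, k) for p, k in zip(nums, key)]

def otp_decrypt(ciphertext: list, key: list) -> str:
    nums = [pair_add(c, k, sign=-1) for c, k in zip(ciphertext, key)]
    return ''.join(chr(n - 1 + ord('A')) for n in nums)

key = [56, 34, 1, 92, 27]       # Toni's randomly chosen key
ct = otp_encrypt("HELLO", key)  # → [54, 39, 13, 4, 32], as in the worked example
pt = otp_decrypt(ct, key)       # → "HELLO"
```

Note that the same letter L encrypts to 13 in one position and 04 in another, exactly as in the example above.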

Generating long keys by “shouting out” long strings of numbers can be impractical, so in modern applications of the one-time pad, computers are often used to create the keys. But the result is not truly random: computers’ pseudorandom number generators use only 16 (or, in some cases, 32) bits to store their values, and the entire space of such values can be searched within a week or so. One remedy is to tweak the pseudorandom number generator by applying an external physical process to generate noise, such as the 1/f noise of a sufficiently amplified semiconductor junction. But that further requires removing the influence of predictable external sources, such as 50–60-Hz power-line noise.
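The weakness of a small seed space can be demonstrated directly. In this hypothetical sketch, a “one-time pad” key is derived from a pseudorandom generator seeded with only 16 bits, and an attacker who knows (or guesses) a fragment of the plaintext simply tries every seed:

```python
import random

def keystream(seed: int, n: int) -> bytes:
    # Key material from a deterministic pseudorandom number generator
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(n))

# A "one-time pad" whose key came from a PRNG with only a 16-bit seed
secret_seed = 31337
plaintext = b"ATTACK AT DAWN"
ciphertext = bytes(p ^ k for p, k in
                   zip(plaintext, keystream(secret_seed, len(plaintext))))

def crack(ciphertext: bytes, crib: bytes = b"ATTACK") -> tuple:
    # Exhaustively try every possible seed; 2**16 candidates is trivial work
    for seed in range(2 ** 16):
        guess = bytes(c ^ k for c, k in
                      zip(ciphertext, keystream(seed, len(ciphertext))))
        if guess.startswith(crib):
            return seed, guess
    return None, None

found_seed, recovered = crack(ciphertext)
```

The exhaustive search finishes in about a second on a modern machine, which is why a pad keyed this way offers none of the one-time pad’s theoretical security.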

A self-evident shortcoming of the one-time pad is that the key is at least as long as the plaintext being encrypted. To escape cryptanalytic attacks involving statistical analyses, the key must be used only once. A more serious shortcoming is that the same key is used to both encrypt and decrypt. The sender and the recipient, therefore, need a totally secure opportunity to exchange the key, which is hard to come by when the two are far apart.

An amusing feature of the one-time pad is that a fake key can be created that will “decode” the encrypted document into something quite innocent—an excerpt from the Bible, say, or the Bill of Rights. Alternatively, a fake key could be designed to yield a plausible-looking, but still false, document, thereby fooling people into believing they have cracked the code.[ii]

end sidebar

Note 

The term “unbreakable encryption” is somewhat misleading. In many cases, the plaintext has a limited lifespan, and so the protection afforded by encryption need not last forever. Tactical data, for example, often requires encryption that takes only slightly longer to break than the useful life of that data. This truism is often forgotten in debates about the relative strengths of encryption methods.

Symmetric Encryption

Vernam’s one-time pad is an example of symmetric encryption, in which the same key is used to both encode and decode a message. Many of the encryption schemes available today are also symmetric, most notably the Data Encryption Standard (DES) (see sidebar, “A Menu of Symmetric Encryption Algorithms”).

start sidebar
A Menu Of Symmetric Encryption Algorithms

In symmetric encryption, the same key is used to encrypt and decrypt a message. Here are the most popular.

The Data Encryption Standard

DES was developed in the 1970s and is still widely used worldwide, although it will be replaced in 2002 by the Advanced Encryption Standard (AES).

Triple DES

Encrypting the already DES-encrypted output a second time with a different key provides no measurable increase in security, but adding a third round of DES encryption yields a highly secure, albeit slower, algorithm. Most purportedly triple-DES implementations, however, use only two keys: key 1 for the first round of encryption, key 2 for the second round, and key 1 again for the third round.

The International Data Encryption Algorithm

IDEA, developed at ETH Zurich, in Switzerland, uses a 128-bit key. Its U.S. and European patents are held by Ascom Systec Ltd. of Bern, Switzerland, but noncommercial use is free. IDEA is viewed as a good algorithm for all except the best-funded attacks. It is used in PGP and Speak Freely (a program that allows encrypted digitized voice to be sent over the Internet).

Blowfish

Blowfish is a 64-bit block cipher with key lengths of 32 to 448 bits. Developed in 1993 by Bruce Schneier of Counterpane Internet Security Inc., San Jose, Calif., it is used in over 100 products and is viewed as one of the best available algorithms.

Twofish

Twofish, also developed by Schneier, is reputedly very strong, and, as one of five candidates for AES, is now being extensively reviewed by cryptanalysts.

RC4

RC4 is a stream cipher of unknown security, designed by Ronald Rivest for RSA Security Inc., Bedford, Mass. It adds the output of a pseudorandom number generator bit by bit to the sequential bits of the digitized plaintext.[iii]
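RC4’s design was leaked in 1994 and is now public. The following sketch implements the well-known algorithm to show the stream-cipher principle the text describes: a key-driven pseudorandom byte generator whose output is XORed, byte by byte, into the plaintext:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state array S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudorandom generation algorithm (PRGA): XOR the keystream into the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"Key", b"Plaintext")
# Because encryption is an XOR, applying rc4 with the same key decrypts:
# rc4(b"Key", ciphertext) == b"Plaintext"
```

Like any stream cipher of this type, encryption and decryption are the same operation, so a single routine serves both ends of the conversation.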

end sidebar

Developed in the 1970s, DES is still popular, especially in the banking industry. It is a block cipher, meaning that it encodes text in fixed-size blocks of bits using a key of fixed length. The alternative, the stream cipher, encodes the stream of data sequentially without segmenting it into blocks.

After nearly three decades of use, DES is headed for the garbage can. Currently, the United States’ National Institute of Standards and Technology (NIST), in Gaithersburg, Maryland, is considering five finalists for an Advanced Encryption Standard (AES) that will replace DES. In all likelihood, the new standard will become nearly as ubiquitous as its predecessor. Unlike DES, however, it will be competing with other algorithms—algorithms that will not suffer from any suspicion that the U.S. government has a back door into the code.

AES will be selected in late 2002. The five contenders are Mars, created by IBM Corp.; RC6, by RSA Laboratories and Ronald Rivest of the Massachusetts Institute of Technology; Rijndael, by two Belgians, Joan Daemen and Vincent Rijmen; Serpent, by Ross Anderson, Eli Biham, and Lars Knudsen, of the UK, Israel, and Norway, respectively; and Twofish, by Bruce Schneier, of Counterpane Internet Security, Inc., and colleagues.

For some encryption algorithms, a plaintext that is repetitive will result in a repetitive ciphertext. This is clearly undesirable, because the encrypted output betrays important information about the plaintext. One solution is to add the previously produced ciphertext block bit by bit to the next plaintext block before encrypting it, so that identical plaintext blocks yield different ciphertext.
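The chaining idea can be sketched with a deliberately insecure stand-in for the block cipher (an invertible add-and-multiply on 32-bit blocks; every name here is hypothetical, and the “cipher” is for illustration only):

```python
BLOCK = 4
MULT = 0x9E3779B1          # odd constant, so it is invertible mod 2**32

def toy_encrypt_block(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher (DES, AES, ...): add the key,
    # then multiply by an odd constant mod 2**32. Invertible, but NOT secure.
    x = int.from_bytes(block, "big")
    k = int.from_bytes(key, "big")
    return ((x + k) * MULT % 2**32).to_bytes(BLOCK, "big")

def toy_decrypt_block(block: bytes, key: bytes) -> bytes:
    x = int.from_bytes(block, "big")
    k = int.from_bytes(key, "big")
    inv = pow(MULT, -1, 2**32)     # modular inverse (Python 3.8+)
    return ((x * inv - k) % 2**32).to_bytes(BLOCK, "big")

def encrypt_chained(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    # XOR each plaintext block with the previous ciphertext block, then encrypt
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_encrypt_block(mixed, key)
        out.append(prev)
    return b"".join(out)

def decrypt_chained(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        mixed = toy_decrypt_block(block, key)
        out.append(bytes(m ^ c for m, c in zip(mixed, prev)))
        prev = block
    return b"".join(out)

msg = b"ABCDABCDABCD"       # repetitive plaintext
key, iv = b"\x13\x37\x42\x99", b"\x00\x01\x02\x03"

# Block-by-block encryption repeats itself and betrays the repetition...
naive = b"".join(toy_encrypt_block(msg[i:i + BLOCK], key)
                 for i in range(0, len(msg), BLOCK))
# ...while chaining each block with the previous ciphertext block hides it
chained = encrypt_chained(msg, key, iv)
```

With the naive mode, the three identical plaintext blocks produce three identical ciphertext blocks; with chaining, the output differs from the naive ciphertext even though the plaintext and key are the same.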

Another problem with symmetric key encryption is that it requires that the sender and recipient of a message have a secure means for exchanging the encryption key. This is clearly difficult when the two parties are far apart, and the problem is compounded every time the keys are updated. Repeated use of the same key creates its own security weakness.

Public Key Encryption

An ingenious scheme that avoids many of the problems of symmetric encryption was proposed in 1976 by Stanford professor Martin Hellman and his graduate student Whitfield Diffie. Their public key encryption scheme, first described in IEEE Transactions on Information Theory, also allows the recipient to verify both that the sender is who he or she appears to be and that the message has not been tampered with.

The method works like this: Bob and Alice each have a copy of openly available software that implements the public key algorithm. Each directs his or her copy of the software to create a pair of keys. A file encrypted with one key of a pair can only be decrypted with the other key of that same pair, and one key cannot be mathematically inferred from the other.

Bob makes known (by e-mail, by posting to a Web site, or however else he chooses) one of the keys of his pair; this becomes his “public key.” Alice does the same. Each retains under tight control the other key in the pair, which is now his or her “private key.”

If Bob wants to encrypt a message that only Alice can read, he uses Alice’s public key (which is available to anyone); that message can only be decoded by Alice’s private key (Figure 19.1).[iv] The reciprocal process (sending an encrypted message from Alice to Bob) is clear. In effect, Bob and Alice can now exchange encrypted files in the absence of a secure means to exchange keys, a major advantage over symmetric encryption.

click to expand
Figure 19.1: In public-key encryption [top], Alice encrypts a message using Bob’s public key, and Bob decrypts it using his private key. This scheme allows encrypted files to be sent in the absence of a secure means to exchange keys, a major improvement over symmetric encryption. It’s still possible, though, for Alice to receive a public key (or a conventional symmetric key) that ostensibly came from Bob, but that, in fact, belongs to a third party claiming to be Bob—the so-called man-in-the-middle attack [bottom]. (©Copyright 2002. IEEE. All rights reserved).

Sender authentication verifies that the sender is who he or she appears to be. Suppose Bob sends a message to the world after encrypting it with his private key. The world uses Bob’s public key to decrypt that message, thereby validating that it could only have come from Bob.

Message authentication, the validation that the message received is an unaltered copy of the message sent, is also easy: Before encrypting an outgoing message, Bob performs a cryptographic hash function on it, which amounts to an elaborate version of a checksum. The hash function compresses the bits of the plaintext message into a fixed-size digest, or hash value, of 128 or more bits. It is extremely difficult to alter the plaintext message without altering the hash value (Figure 19.2).

click to expand
Figure 19.2: Public key encryption allows Alice to verify that a message from Bob actually came from him and that it is unaltered from the original. Here’s how: Bob encrypts the hash value with his private key; encrypts the plaintext with Alice’s (green) public key; and sends both to her. Alice then decodes the received ciphertext using her own (orange) private key; decodes the hash value using Bob’s public key, thereby confirming the sender’s authenticity; and compares the decrypted hash value with one that she calculates locally on the just decrypted plaintext, thereby confirming the message’s integrity. (©Copyright 2002. IEEE. All rights reserved).
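The sign-then-encrypt flow of Figure 19.2 can be sketched end to end with toy RSA keypairs (tiny textbook primes, one RSA operation per byte; all names here are hypothetical and the scheme is a sketch, not a real implementation):

```python
import hashlib

# Toy RSA keypairs; real keys use primes hundreds of digits long
def keypair(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (pow(e, -1, phi), n)   # (public, private); Python 3.8+

bob_pub, bob_priv = keypair(61, 53)
alice_pub, alice_priv = keypair(89, 97)

def rsa_apply(data, key):
    # Apply an RSA key byte by byte (a toy simplification)
    k, n = key
    return [pow(b, k, n) for b in data]

message = b"wire 100 euros"
digest = hashlib.sha256(message).digest()

# Bob: encrypt the hash value with his private key (the signature),
# and the plaintext with Alice's public key
signature = rsa_apply(digest, bob_priv)
ciphertext = rsa_apply(message, alice_pub)

# Alice: decrypt with her own private key, then decode the hash value with
# Bob's public key and compare it against one she computes locally
received = bytes(rsa_apply(ciphertext, alice_priv))
ok = bytes(rsa_apply(signature, bob_pub)) == hashlib.sha256(received).digest()
```

If either the ciphertext or the signature were altered in transit, the locally computed hash would no longer match the decoded one, and the check would fail.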

The widely used hash function MD5, developed by Rivest in 1991, hashes a file of arbitrary length into a 128-bit value. Another common hash function is SHA (short for Secure Hash Algorithm), published by the U.S. government in 1995, which hashes a file into a longer, 160-bit value.
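Both functions ship with Python’s standard library, which makes the digest sizes easy to confirm. (Note that both MD5 and SHA-1 have since been shown vulnerable to collision attacks and are deprecated for new designs; they appear here only to illustrate the text.)

```python
import hashlib

digest = hashlib.md5(b"The quick brown fox").hexdigest()
# 128-bit value, printed as 32 hexadecimal characters
sha = hashlib.sha1(b"The quick brown fox").hexdigest()
# 160-bit value, printed as 40 hexadecimal characters

# Changing a single character yields a completely different hash value
altered = hashlib.md5(b"The quick brown fax").hexdigest()
```

The last line illustrates the property the text relies on: it is extremely difficult to alter the plaintext without altering the hash value.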

Public key encryption has been a part of every Web browser for the last few years. It is used, for example, when sending credit-card information to an on-line vendor or when sending e-mail using the standard S/MIME protocol and a security certificate, which can either be obtained from on-line commercial vendors or created locally using special software.

One drawback of public key encryption is that it is more computationally intensive than symmetric encryption. To cut back on the computing, almost all implementations call on the symmetric approach to encrypt the plaintext and then use public key encryption to encode the local key. The differently encrypted plaintext and key are then both sent to the recipient.
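That hybrid arrangement can be sketched with toy primitives. Here the symmetric cipher is a counter-mode SHA-256 keystream and the public key pair is textbook RSA with tiny numbers; both are illustrations of the structure, not real cryptography, and every name is hypothetical:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy symmetric stream cipher: SHA-256 in counter mode, standing in
    # for DES, IDEA, or another vetted symmetric algorithm
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def sym_crypt(key: bytes, data: bytes) -> bytes:
    # XORing with the keystream both encrypts and decrypts
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

# Toy RSA public/private pair for the recipient (tiny textbook numbers;
# real public keys run to 1024 bits and more)
n, e, d = 3233, 17, 2753

# Sender: encrypt the bulk plaintext with a fresh symmetric session key,
# then encrypt only that short key with the recipient's public key
session_key = secrets.token_bytes(16)
ciphertext = sym_crypt(session_key, b"meet at noon")
encrypted_key = [pow(b, e, n) for b in session_key]   # one RSA step per byte (toy)

# Recipient: recover the session key with the private key, then the message
recovered_key = bytes(pow(c, d, n) for c in encrypted_key)
plaintext = sym_crypt(recovered_key, ciphertext)
```

Only the short session key pays the price of the expensive public key operations; the bulk of the message is handled by the fast symmetric cipher.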

In terms of resistance to brute force cryptanalysis (the exhaustive search of all possible decryption keys), a good 128-bit symmetric encryption algorithm is about as strong as a 2304-bit public key algorithm. Realistically, though, the public key should be even longer than that, because the same public and private key pair is used to protect all messages to the same recipient. In other words, although a broken symmetric key typically compromises only a single message, a broken public key pair compromises all messages to a given recipient. To be sure, cracking an encryption key is just one way to get at sensitive data (see sidebar, “Human and Hardware Frailties”).

start sidebar
Human And Hardware Frailties

The encryption of material to withstand a brute force attack still leaves many avenues open to invasion. Often, the real weaknesses in security lie in the human tendency to cut corners. It is all too tempting to use easy-to-remember passwords, or to keep unencrypted copies of sensitive documents on one’s computer, intentionally or otherwise. Windows-based computers and many software products, in their quest to be user-friendly, often leave extensive electronic trails across the hard drive. These trails include not only copies of unencrypted files that the user deleted but also passwords and keys that were typed.

Furthermore, unless each file is encrypted using a different key and/or a different encryption method, an attacker who can somehow read one encrypted file from or to a given person can probably also read many other encrypted files from or to that person.

Cryptanalysts have also been known to exploit the hardware on which the encryption algorithm is used. In 1995, the so-called timing attack became popular. It allowed someone with access to the hardware to draw useful inferences from the precise time it took to encrypt a document using a particular type of algorithm. Public key encryption algorithms such as RSA and Diffie-Hellman are open to such attacks. Other exploitable hardware phenomena include power consumption and RF radiation. It is also possible to assess the electronic paper trail left behind when the hardware is made to fail in the course of an encryption or decryption.

Most of today’s commercial e-mail programs, Web browsers, and other Internet applications include some encryption functions. Unfortunately, these schemes are often implemented as an afterthought, by engineers who may be very competent in their respective fields, but have minimal experience in cryptography.

Just like a decent forgery, bad encryption can look like good encryption on the surface. In general, however, “proprietary,” “secret,” or “revolutionary” schemes that have not withstood the scrutiny of cryptanalysts over time are to be avoided.

One easy test is to attempt to decrypt a file with a different key from the one used for encryption. If the software proudly informs the user that this is the wrong key, that encryption method should be discarded. It means that the encryption key has been stored in some form along with the encrypted file. The cryptanalyst would merely have to keep trying different keys until the software identified the correct one. This is only one of many weaknesses. Given the preceding, the odds favor the person attacking an encrypted file, unless the person being attacked is very knowledgeable in the ways of information security.[v]

end sidebar

Public key encryption is also a victim of the uncertainty besetting any cryptographic scheme when the two communicating parties lack a secure channel by which to confirm the other’s identity (Figure 19.1). There is, as yet, no technical fix to this problem.

One of the most commonly used public key algorithms is the 24-year-old RSA, named for its creators, Ronald Rivest, Adi Shamir, and Leonard Adleman of the Massachusetts Institute of Technology, Cambridge. Its security derives from the difficulty of factoring large integers that are the product of two large primes. At present, a key length of at least 1024 bits is generally held to be secure enough. However, RSA may be somewhat vulnerable to known-plaintext attacks, namely, attacks in which the cryptanalyst already possesses a plaintext file and the corresponding RSA-encrypted ciphertext.
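The link between RSA’s security and factoring is visible even in a textbook-scale example (tiny primes for illustration; real keys use primes hundreds of digits long):

```python
# Toy RSA with tiny primes
p, q = 61, 53
n = p * q                   # public modulus: 3233; security rests on the
                            # difficulty of recovering p and q by factoring n
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, chosen coprime to phi
d = pow(e, -1, phi)         # private exponent: the modular inverse (Python 3.8+)

message = 65                # any number smaller than n
cipher = pow(message, e, n)     # encrypt with the public pair (e, n)
plain = pow(cipher, d, n)       # decrypt with the private pair (d, n)
# An attacker who factors n into 61 * 53 can recompute phi and hence d
```

At this scale anyone can factor 3233 by trial division; with primes of hundreds of digits, no known method finishes in useful time, and that gap is the whole of RSA’s strength.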

The Diffie-Hellman public key algorithm is used mostly for exchanging keys. Its security rests on the difficulty of computing discrete logarithms in a finite field generated by a large prime number, which is regarded as even harder than factoring large numbers into their prime-number components. The algorithm is generally viewed as secure if long enough keys and proper key generators are used.
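The exchange itself is short enough to sketch. In this toy version (the prime is far too small for real use, where 2048-bit primes or larger are the norm), both parties arrive at the same shared secret without ever transmitting it:

```python
import secrets

# Toy Diffie-Hellman exchange; p is the Mersenne prime 2**127 - 1
p = 2**127 - 1
g = 5

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent, never transmitted
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent, never transmitted

A = pow(g, a, p)    # Alice sends A over the open channel
B = pow(g, b, p)    # Bob sends B over the open channel

# Each side combines its own secret with the other's public value;
# both arrive at g**(a*b) mod p, while an eavesdropper who saw only
# A and B faces the discrete-logarithm problem
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)
```

The shared value can then serve as a symmetric session key, which is exactly the key-exchange role the text describes.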

By far the most popular public key encryption scheme is PGP, which stands for “pretty good privacy.” PGP was created in 1991 by a programmer and activist named Philip Zimmermann as a means of protecting e-mail. After one of his colleagues posted PGP on the Internet, the Department of Justice launched an investigation of Zimmermann, for possible violation of U.S. laws governing export of encryption products. The case against him was eventually dropped in 1996, after which Zimmermann started a company to market PGP. It has since become a mainstream commercial product, sold by Network Associates Inc., of Santa Clara, Calif., although freeware versions continue to be available from the Internet.

Crackdown on Cryptography

What happened to Zimmermann is just one small skirmish in the much wider campaign waged by governments worldwide against cryptography. At issue is whether, and to what extent, persons and organizations should have the ability to encrypt information that the state cannot decipher itself.

Private citizens have legitimate reasons to preserve confidentiality: to protect trade secrets; to prevent legal or medical records from falling into strangers’ hands; and to voice dissenting political or religious opinions without retribution. The international group Human Rights Watch, for example, regularly encrypts eye-witness reports of serious abuse, gathered in parts of the globe where the victims may be subject to further reprisals.

From a government’s perspective, however, encryption is a double-edged sword: It has honorable purposes, true, but it can also be used to conceal out-and-out criminality. In an effort to keep encryption from gaining ground, many countries have passed laws criminalizing its import, export, and/or use.

Note 

To be sure, exactly what constitutes a crime is not always clear; governments have been known to capitalize on the aura of the term “criminal” and apply it to conduct they dislike or consider threatening.

The proliferation of encryption has coincided with the explosive growth of the Internet. Nowadays, the man in the street can reach an instant global audience of millions, bypassing the chain of command that rules almost any institution, be that the military, a religious group, or a corporation. In essence, the simultaneous spread of encryption and the Internet has amounted to a transfer of power to the individual.

This turn of events has been viewed differently by different states. An interesting case is the People’s Republic of China. There, the outlawed religious sect Falun Dafa has used the Web to great effect to spread its ideology and recruit new members; repeated attempts by the authorities to shut down the group have largely failed. Recently, the government began requiring any company doing business in China to disclose the types of Internet encryption software it uses, as well as the names of employees who use it. It further banned the sale of foreign-designed encryption products. Overseeing the regulations is a newly established body, the State Encryption Management Commission, which is believed to be staffed by China’s secret police.

China’s unwavering opposition to encryption suggests a more fundamental reason why a government (any government) would want to control the technology: to preserve the ability to exercise censorship. Even enlightened and democratic regimes have topics that are taboo. And when any and all information being exchanged by private citizens can be monitored, it has a chilling effect on dissenting opinions. Conversely, when citizens can communicate freely and privately using encryption, censorship becomes unenforceable. Few sovereign states can accept this loss of control. It’s like having two rude guests at one’s dinner table who keep whispering in each other’s ears.

Encryption, though, is good for business, and that factor is largely responsible for the gradual loosening in the U.S. government’s stance on encryption. Until 1996, strong encryption technology was listed as a munition, and until just recently, it fell under the same export restrictions as advanced weaponry. Under concerted pressure from the U.S. business community, which claimed that such controls were reducing sales and choking the growth of e-commerce, the government came out with a revised policy recently that lifts many of the bureaucratic burdens from companies wanting to export encryption. Even so, every encryption product must still undergo a one-time review by the U.S. Commerce Department’s Bureau of Export Administration before it can be exported; sales to the so-called terrorist seven (Cuba, Iran, Iraq, Libya, North Korea, Sudan, and Syria) are still excluded. The new stipulation has some cynics wondering if only products with an identifiable weakness will receive an export license.

What’s more (although encryption proponents have largely welcomed the relaxation of export rules), another concern has been raised: The same legislation would grant law enforcement new powers, such as the right to present a plaintext in court without disclosing how it was obtained from a suspect’s encrypted files. Here the potential for abuse is obvious.

Other Legal Responses

The United States is not alone in backing away from strict encryption bans. What started as a global campaign to limit encryption has splintered into various approaches, with some governments now even encouraging encryption among their respective citizens, as a precaution against snooping by other governments.

Generally speaking, laws pertaining to encryption are quite convoluted and rife with exceptions and qualifications. In Sweden, for instance, encryption importation and use are allowed, and so is its export, except to certain countries; authorities may search someone’s premises for a decryption key, but may not compel the person to assist in the investigation by, say, handing over the key to the authorities.

The first international attempt to control encryption was made by the 17-country Coordinating Committee for Multilateral Strategic Export Controls (COCOM), which came together in 1991 to restrict the export of items and data deemed “dangerous” if acquired by particular countries. COCOM members, with the notable exception of the United States, permitted the export of mass-market and public domain cryptography, and restricted export of strong encryption to select countries only. One such item was Global System for Mobile Communications (GSM) cellular telephony,[vi] which has two grades of encryption. Under COCOM, only the lower-grade version could be sent to the restricted countries.

Note 

Both grades of encryption have since been broken.

In March 1994, COCOM was dissolved, to be replaced the following year by the multilateral Wassenaar Arrangement, which has now been joined by (at last count) 33 countries. Under the nonbinding agreement, countries agreed to restrict the export of mass-market software with keys longer than 64 bits.

Note 

The arrangement, administered through a small office in Vienna, Austria, is not a treaty, and so not subject to mandatory review by any country’s legislature.

But, do such encryption bans work? In a word, no. For one thing, the penalty for using encryption is likely to be far less than the damage caused by disclosing whatever was deemed sensitive enough to warrant encryption. What’s more, sophisticated techniques for hiding data, unencrypted or not, are now readily available and extremely hard to detect, so that prosecution of cryptography-ban violations is all but impossible. Who can prove that an innocuous-sounding e-mail message reporting “The temperature in the garage was 86 degrees” really means “Meet me behind Joe’s garage on August 6”? Or again, out of tens of millions of digitized images posted to a Usenet electronic bulletin board, who can detect the one image in particular, perhaps of an antique car, that contains a secret message intended for a specific person, who along with millions of unsuspecting others will download that image to his or her computer?

The very existence of the Internet has made it easy to circumvent bans. In most, though not all, countries, a sender can log on to any public computer connected to the Internet, such as those in public libraries or Internet cafes, and send encryption software anonymously to a recipient, who can also retrieve it anonymously. A would-be user of encryption software can anonymously download it from any of the thousands of Internet servers that openly provide a large collection of programs of this kind.

It may make sense for a country to ban the exportation of something that it alone possesses and that could be used against it. But, it makes no sense for a country to ban the export of what other nations already produce locally. A 2001 survey by the Cyberspace Policy Institute of George Washington University, in Washington, D.C., identified 1,490 encryption products (hardware and software) developed in 79 countries.

The study, published before the latest relaxing of U.S. export laws, states that “on average, the quality of foreign and U.S. products is comparable” and that “in the face of continuing U.S. export controls on encryption products, technology and services, some U.S. companies have financed the creation and growth of foreign cryptographic firms. With the expertise offshore, the relatively stringent U.S. export controls for cryptographic products can be avoided since products can be shipped from countries with less stringent controls.”

Nevertheless, in recent years, the war over encryption has moved beyond the mere control of the technology itself. Although encryption proponents may have won the first round, law enforcement and intelligence agencies have responded with a slew of powerful tools for getting at computerized data (encrypted or not). These efforts are in turn being met by ingenious new schemes for hiding and protecting information, including one’s identity.

Now, let’s look at advanced hacking as part of this chapter’s continuing theme of advanced computer forensics. In other words, hack yourself before somebody else does.

[ii]Michael A. Caloyannides, “Encryption Wars: Early Battles,” © 2000 IEEE, IEEE Spectrum, 445 Hoes Lane, Piscataway, New Jersey 08855, 2001. All rights reserved.

[iii]Ibid.

[iv]Ibid.

[v]Ibid.

[vi]John R. Vacca, i-mode Crash Course, McGraw-Hill, 2002.


