Long a popular classification in the industry, the principal objectives of information security fit within three overarching categories: confidentiality, integrity, and availability. We mentioned this briefly in Chapter 1, "Introduction to Network Protection," commenting on its lack of nuance in covering everything necessary to protect networks from attackers. There have been numerous attempts by concerned and well-meaning individuals to change these categories, to add to them, or to replace them entirely. Nevertheless, this taxonomy still adequately describes our objectives, and for any system the requirements for each will vary. For instance, financial transactions must have absolute integrity, while trade secrets require absolute confidentiality.
Let's examine each category in turn. For each category, we'll also consider an important, often-overlooked corollary that can help flesh out the taxonomy's nuance.
Confidentiality ensures that information is visible only to those authorized to view it. Essentially, confidentiality is about privacy: Alice wants to send some information to Bob and doesn't want anyone else to read it. Confidentiality can't necessarily prevent Eve from intercepting the information, but it can obfuscate or conceal the information so that only Bob can make sense of it.
Eve represents our hypothetical attacker. No biblical references intended.
Encryption is the mechanism that provides confidentiality. Using any of a variety of mathematical algorithms and a digital key, the information is transformed into an unintelligible collection of bits. Only the authorized parties (presumably) know the key and are therefore able to use this key to decrypt the collection into its original form.
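To make the mechanism concrete, here is a deliberately simplified sketch in Python. The XOR stream cipher below is a toy, not a real algorithm (production systems use vetted ciphers such as AES); it only illustrates the shape of the process: a shared key transforms information into unintelligible bits, and the same key recovers the original.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- illustrative only, NOT secure.
    The same operation both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)                  # secret shared by Alice and Bob
ciphertext = xor_cipher(b"meet at noon", key)  # unintelligible bits Eve might intercept
plaintext = xor_cipher(ciphertext, key)        # Bob, holding the key, recovers the original
```

Without the key, Eve sees only the ciphertext; the scheme's entire protection rests on keeping that key out of her hands, which is exactly why the breaches listed next all revolve around the key.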
Breaches of confidentiality can occur when:
An unauthorized third party somehow obtains the key.
The encryption software has a vulnerability that, when exploited, reveals either the key or the encrypted information.
An authorized party violates the trust of other authorized parties by disclosing the information to unauthorized parties.
Confidentiality requires strong authentication and authorization systems. Without a highly trusted way both to authenticate a party (knowing who the party is) and to authorize the actions of that party (permitting some things and denying others), confidentiality is impossible. Before Alice sends confidential information to Bob, she needs to know both that Bob is who he claims to be and that Bob is allowed to receive such confidential information.
A centralized directory that allows Alice to check that Bob is a legitimate user (in other words, that the system knows who Bob is because he can authenticate to it) and that Bob is permitted to receive certain kinds of information (in other words, that the system allows Bob to do something because he is authorized) allows Alice to trust that the private information she sends to Bob is really going to Bob. Noncentralized systems in which Bob asserts his own access permissions can't offer the same level of trust.
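A minimal sketch of such a directory follows. The names, password scheme, and permission labels are all hypothetical assumptions for illustration; a real directory would use salted, slow password hashing and far richer policy.

```python
import hashlib

# Hypothetical centralized directory: who each user is (authentication data)
# and what each user may receive (authorization data).
DIRECTORY = {
    "bob": {
        "password_sha256": hashlib.sha256(b"bobs-secret").hexdigest(),
        "may_receive": {"confidential"},
    },
}

def authenticate(user: str, password: bytes) -> bool:
    # The system knows who Bob is because he can prove it.
    entry = DIRECTORY.get(user)
    return (entry is not None and
            hashlib.sha256(password).hexdigest() == entry["password_sha256"])

def authorize(user: str, label: str) -> bool:
    # The system allows Bob to do something because he is permitted to.
    entry = DIRECTORY.get(user)
    return entry is not None and label in entry["may_receive"]

# Alice sends only if the directory vouches for Bob on both counts.
ok_to_send = authenticate("bob", b"bobs-secret") and authorize("bob", "confidential")
```

The point of the sketch is the separation: the directory, not Bob, asserts both his identity and his permissions, which is what lets Alice trust the answer.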
Earlier we stated that confidentiality can't prevent an attacker from intercepting confidential information. The concern, then, is how to fix this. How do we prevent unauthorized users from possessing unauthorized information? Software can be duplicated without the manufacturer's permission; passwords can be accidentally revealed (or intentionally shared). Various forms of access control can help here; see Chapter 17, "Data-Protection Mechanisms."
Integrity ensures that information is in a state the owner intends: that the information is authentic, complete, and sufficiently accurate for its intended purpose. When Alice sends some information to Bob, we need to guarantee that what Bob receives is the same information that Alice sends. Like confidentiality, integrity can't prevent Eve from intercepting the information, but it does let Bob know whether Eve modified that information after Alice sent it.
Hashes and digital signatures are the primary mechanisms that provide integrity. Similar to encryption, hashes and signatures rely on mathematical operations and digital keys to create a series of bits that represent the information; these bits are attached to the information. If the information is altered during delivery, the receiving side's computed signature or hash won't match the attached one, and the receiver then knows the information is compromised. A fundamental property of integrity functions is that they won't operate in reverse: there's no way to deduce some information if all you possess is the signature or hash. An attacker can't alter the signature or hash to match changes in the information.
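A short Python sketch using a keyed hash (HMAC) shows the mechanism; the key and message here are illustrative assumptions. The receiver recomputes the tag over what actually arrived and compares it with the attached one; any change to the information produces a mismatch.

```python
import hashlib
import hmac

key = b"signing-key-shared-by-alice-and-bob"  # illustrative shared key
message = b"transfer $100 to Bob"

# Alice computes the tag and attaches it to the message.
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, received: bytes, received_tag: bytes) -> bool:
    # Bob recomputes the tag over what he received and compares
    # in constant time to avoid timing side channels.
    expected = hmac.new(key, received, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

intact = verify(key, message, tag)                    # unmodified: tags match
tampered = verify(key, b"transfer $100 to Eve", tag)  # altered in transit: mismatch
```

Because the hash function won't run in reverse, Eve can't work backward from the tag to a different message that would still match it.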
Breaches of integrity can occur when:
An unauthorized third party obtains the key; now it's possible to alter the information and recompute the signature or hash.
The signing software or hash usage contains a vulnerability that, when exploited, reveals the key or allows a single hash to represent multiple information sets.
An authorized party removes the signature or hash, maliciously changes the information, and then computes a new signature or hash.
Integrity requires strong identity systems. The strength of the integrity (that is, how much trust a receiver is able to put in a signature or checksum) is directly proportional to the strength of the sender's identity. Systems in which a mutually trusted third party vouches for the identity of the sender permit much stronger integrity than systems where only the sender itself maintains and asserts its identity. Although there are occasions, generally personal, where a web of trust is sufficient for providing identity, in the world of business communications it is essential that trust flows from a hierarchy rooted in a system that provides assertions of strong identity, backed by verifiable policies, with constraints on purpose and name space.
Note that encryption doesn't guarantee any kind of integrity. Encryption conceals communications between Alice and Bob but by itself can't let Bob know whether Eve altered the information. Of course, altering encrypted information isn't an especially useful attack, and when Bob decrypts the information it will most likely turn out to be junk, but there's no mechanism in the encryption process that alerts Bob to the information's loss of integrity. So if Alice and Bob want both confidentiality and integrity, they need to use tools that provide both functions: for example, encrypting a piece of mail and then digitally signing it.
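Here is a sketch of combining the two properties, reusing the toy XOR cipher for encryption and an HMAC standing in for a true digital signature (an assumption made for brevity; real systems use authenticated encryption or separate encrypt and sign steps with distinct keys).

```python
import hashlib
import hmac
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only -- NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

enc_key = secrets.token_bytes(16)  # confidentiality key
mac_key = secrets.token_bytes(16)  # integrity key, kept separate

# Alice: encrypt first, then compute the integrity tag over the ciphertext.
ciphertext = xor_cipher(b"the merger closes Friday", enc_key)
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

# Bob: verify integrity BEFORE decrypting; reject the message on a mismatch.
if hmac.compare_digest(tag, hmac.new(mac_key, ciphertext, hashlib.sha256).digest()):
    recovered = xor_cipher(ciphertext, enc_key)
else:
    recovered = None  # tampered in transit -- discard
```

Checking the tag before decrypting means Bob never acts on information whose integrity is in doubt, which is precisely the gap encryption alone leaves open.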
The strong identity we discussed earlier provides authenticity. It lets us know who we are communicating with. Authenticity also provides nonrepudiation: if we know who the receiver is, that party can't later claim not to have received what we sent.
Availability is the assurance that a system responsible for processing, storing, or delivering information is accessible when users need it. The February 2000 attack against several prominent Web sites was the first of a growing (and arguably now the most popular) class of attack: denial of service. Confidentiality and integrity measures have become quite strong and reliable; attacks against them require a lot of time and increasing sophistication on the part of the attacker. Alas, it's still quite easy to render a system useless by overwhelming its ability to process incoming data and generate replies. The attacker neither steals nor modifies data but still enjoys the publicity of "bringing someone down."
A Canadian script kiddie known as "mafiaboy" brought down 11 sites (including Yahoo!, Buy.com, eBay, CNN, Amazon.com, ZDNet, E*Trade, and Excite) using 75 computers in 52 networks to send 10,700 messages in 10 seconds. In April 2000 he was arrested and charged; police discovered him through his boasting in chat rooms. In January 2001 he pled guilty to 56 charges and was sentenced in April 2001 to two years in a juvenile detention center.
Simple denial-of-service (DoS) attacks usually involve an attacker sending unexpected or malformed traffic to a computer. Often this traffic exploits a known (but unpatched) vulnerability in the computer's software; depending on the characteristics of the vulnerability, the affected service might shut down or the computer's operating system might crash completely.
A distributed denial-of-service (DDoS) attack relies on the insecurity of one network to attack a computer on another. An attacker commandeers several insecure machines (perhaps hundreds or thousands) and secretly installs "zombie" software on them. The owners of these machines rarely know this has happened. The attacker has configured each zombie to listen for a "wake up" command; upon receipt, each zombie directs some traffic to the attacker's target. The target, inundated by the traffic, usually stops operating, sometimes catastrophically. (Data loss is a side effect of some DoS attacks.)
Many DDoS "constellations" have second-level zombies, allowing quite elegant attacks, really. For example, 100 first-level zombies might simultaneously send just one "ping" packet to 1,000 second-level zombies, but with a forged source address: that of the intended target. The second-level zombies reply to that forged address. If each first-level zombie has a different set of 1,000 second-level zombies, the hapless target suffers the onslaught of 100 × 1,000 = 100,000 ping replies, literally pounding it off the network.
Denial-of-service attacks do have one seriously devastating effect: the potential loss of business. An e-commerce system responsible for $20,000 per hour of transactions would lose its owner almost $1,000,000 if a DoS attack knocked it offline for two days. Reputation damage is also common, and it might be even more costly to recover from.
Defending against DoS attacks is very straightforward yet often ignored. See the section on network defenses later in this chapter.
Regardless of whether information is available, it must have some usefulness to be valuable. For instance, encrypted information, although available to anyone who can access it, has very little utility for anyone who doesn't possess the key to decrypt it. Don't think that just encrypting everything will make you useless to an attacker, however: if you have a network connection, you're still interesting, because an attacker might simply want to cause you harm with a DoS attack or use your network to launch an attack on someone else.
Conversely, because utility immediately decreases to zero whenever attacks against availability succeed, even the most highly available system can become entirely useless if not properly protected.