Security Technologies

This chapter breaks down the security technologies into specific categories. Each category, and each element within it, is detailed through the rest of the chapter. The categories are as follows:

  • Identity technologies: Reusable passwords, Remote Authentication Dial-In User Service (RADIUS)/Terminal Access Controller Access Control System (TACACS+), one-time passwords (OTP), Public Key Infrastructure (PKI), smart cards, biometrics
  • Host and application security: File system integrity checking, host firewalls, host intrusion detection systems (HIDS), host antivirus
  • Network firewalls: Router with access control list (ACL), stateful firewall
  • Content filtering: Proxy servers, web filtering, e-mail filtering
  • Network intrusion detection systems (NIDS): Signature-based NIDS, anomaly-based NIDS
  • Cryptography: Layer 2 (L2) crypto, network layer crypto, L5 to L7 crypto, file system crypto

Identity Technologies

Identity technologies are primarily concerned with verifying who the user (or the user's computer) is on the network. It is the first A in AAA: authentication, authorization, and accounting. As stated earlier, your network identity can be associated with several different elements (IP address, username, and so on); this section focuses on user identity. The IP address of a computer identifies that computer on the network. The following technologies verify that you are the actual user sitting at that computer:

  • Reusable passwords
  • RADIUS and TACACS+
  • OTP
  • PKI
  • Smart cards
  • Biometrics

Reusable Passwords

Table 4-2 shows the summary information for reusable passwords.

Table 4-2. Reusable Passwords

Name: Reusable passwords
Common example: UNIX username/password
Attack elements detected: Identity spoofing
Attack elements prevented: Direct access
Difficulty in attacker bypass: 2
Ease of network implementation: 5
User impact: 4
Application transparency: 5
Maturity of technology: 5
Ease of management: 4
Performance: 5
Scalability: 3
Financial affordability: 5
Overall value of technology: 59

Reusable passwords are presented here only as a benchmark for comparison purposes. They are only as strong as the password policy to which users adhere. There are numerous password-auditing tools that can check the strength of passwords after they are created (as is the case with password cracking) and before they are selected in the first place (often called proactive password checking). An interesting paper titled "A Note on Proactive Password Checking" is available at the following URL: http://www.cl.cam.ac.uk/~jy212/proactive2.pdf.

You will no doubt have many places in your network where reusable passwords are required. In these environments, consider proactive password-checking tools and user education as valuable methods to promote good password selection. Also, take advantage of secondary authentication controls available in most server systems. The easiest example is a generally available control that locks a user account out of logging in for a period of time after a set number of incorrect login attempts. This mitigates online brute-force login attempts (as opposed to offline dictionary attacks, in which the attacker already has a file containing encrypted passwords).
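To make proactive checking concrete, the following Python sketch shows the kinds of tests such a tool can apply before a password is accepted. It is an illustration only: the wordlist file name and the thresholds are assumptions, and real checkers use much larger dictionaries and rule sets.

import re

def load_dictionary(path="common-passwords.txt"):
    # Hypothetical wordlist file, one candidate password per line.
    with open(path) as f:
        return {line.strip().lower() for line in f}

def is_acceptable(password, dictionary, min_length=8):
    # Reject passwords that are short, guessable, or low in variety.
    if len(password) < min_length:
        return False, "too short"
    if password.lower() in dictionary:
        return False, "appears in a common-password dictionary"
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    if sum(bool(re.search(c, password)) for c in classes) < 3:
        return False, "needs at least three character classes"
    return True, "ok"

A check like this runs at password-change time, which is what distinguishes proactive checking from after-the-fact password cracking.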

RADIUS and TACACS+

Table 4-3 shows the summary information for RADIUS and TACACS+.

Table 4-3. RADIUS and TACACS+

Name: RADIUS and TACACS+
Common example: Cisco Secure ACS
Attack elements detected: Identity spoofing
Attack elements prevented: Direct access
Difficulty in attacker bypass: 3
Ease of network implementation: 4
User impact: 4
Application transparency: 4
Maturity of technology: 5
Ease of management: 5
Performance: 4
Scalability: 5
Financial affordability: 4
Overall value of technology: 61

RADIUS and TACACS+ are protocols that offer centralized authentication services for a network. Both operate on the premise that a centralized server contains a database of usernames, passwords, and access rights. When a user authenticates to a device that uses RADIUS or TACACS+, the device sends the login information to the central server, and a response from the server determines whether the user is granted access. RADIUS and TACACS+ servers are commonly called AAA servers because they perform authentication, authorization, and accounting.
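The device-side logic behind this exchange is simple, which is much of the appeal. The sketch below illustrates the relay-and-verdict model in Python; the query_aaa_server() helper is a hypothetical stand-in for a real RADIUS (UDP port 1812) or TACACS+ (TCP port 49) exchange.

def authenticate_user(username, password, query_aaa_server):
    # The network device stores no credentials locally; it relays the
    # login to the central server and acts on the server's verdict.
    reply = query_aaa_server(username, password)
    if reply.get("status") == "ACCEPT":
        return True, reply.get("privileges", [])
    return False, []

Because every device defers to the same server, adding, disabling, or auditing a user happens in one place rather than on every router, switch, and remote access gateway.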

AAA servers are used throughout the design sections of this book as a way to centralize the management of usernames and passwords for networked systems. Deployment scenarios include administrator authentication for network devices (routers, switches, and firewalls) as well as user authentication for remote access services (dial-in and virtual private networking).

Because the usernames and passwords are centralized, it is easier to audit password selection and to control user access. AAA servers can also be configured to access other user data stores, as discussed in Chapter 9, "Identity Design Considerations."

Choosing between RADIUS and TACACS+ is fairly easy. RADIUS is an open standard and is widely supported on networking devices of many vendors. TACACS+ was developed by Cisco Systems and runs only on Cisco devices. Even on Cisco devices, RADIUS is becoming more widely available than TACACS+. In devices that support both protocols, TACACS+ is often used for management access rights, while RADIUS is used for user authentication through the device. TACACS+ does offer some advantages:

  • TACACS+ uses TCP, while RADIUS uses UDP.
  • TACACS+ encrypts the entire packet payload, while RADIUS encrypts only the password.

TACACS+ is also very useful in controlling router access. It can be set up, for example, so that only certain administrators can execute the show ip route command. Think of this as offering further differentiation in access beyond what the Telnet and Enable modes provide.

AAA servers can be combined with OTPs, which are discussed in the next section, to provide even greater security and manageability.

OTPs

Table 4-4 shows the summary information for OTPs.

Table 4-4. OTPs

Name

One-time passwords (OTPs)

Common example

RSA SecurID

Attack elements detected

 

Attack elements prevented

Identity spoofing

Direct access

Difficulty in attacker bypass

5

Ease of network implementation

4

User impact

2

Application transparency

3

Maturity of technology

5

Ease of management

2

Performance

4

Scalability

4

Financial affordability

3

Overall value of technology

63

OTPs attempt to counter several problems with user password selection. Most OTPs operate on the principle of two-factor authentication: to authenticate to a system, you need something you have (your token card or software token) and something you know (your personal identification number [PIN]). The method of generating and synchronizing passwords varies with the OTP system. In one popular method, the token generates passcodes on a time-interval basis (generally, every 60 seconds). This random-looking string of characters is actually produced by a mathematical algorithm that runs on both the OTP server and the tokens. A passcode from a token might look like the following: 4F40D974. The PIN is either fed into the algorithm to create the passcode (which then becomes the OTP), or it is combined with the passcode. For example, the PIN 3957 could be combined with the passcode generated by the token to create the OTP 39574F40D974.
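As an illustration only, the following sketch shows one way a time-interval passcode can be generated, assuming an HMAC-based algorithm over a shared secret and the current time slice. It is not a description of any vendor's algorithm; products such as RSA SecurID use their own schemes.

import hashlib, hmac, struct, time

def passcode(shared_secret: bytes, interval=60) -> str:
    # The token and the OTP server compute the same counter value
    # because their clocks are (roughly) synchronized.
    counter = int(time.time() // interval)
    digest = hmac.new(shared_secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    return digest[:4].hex().upper()        # a 4F40D974-style string

def one_time_password(pin: str, shared_secret: bytes) -> str:
    # PIN prepended to the passcode, as in the 3957 + 4F40D974 example.
    return pin + passcode(shared_secret)

Because the passcode changes every interval, capturing one OTP on the wire gives an attacker nothing useful a minute later.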

NOTE

Some OTP systems are single-factor-based systems that utilize prepopulated lists of passwords at both the server and the client. This book assumes a two-factor system when referring to OTP.

Systems in which the passcode is created from the algorithm and the PIN prevent an attacker from learning the user's PIN by repeatedly sniffing the network. OTP improves on passwords in the following ways:

  • Users are no longer able to choose weak passwords.
  • Users need only remember a PIN as opposed to what is traditionally considered a strong password. This makes users less likely to write passwords down on sticky notes.
  • Passwords sniffed on the wire are useless as soon as they are acquired.

All this being said, there is a reason OTP hasn't replaced reusable passwords everywhere. OTP has the following negative characteristics:

  • Users need to have the token card with them to authenticate.
  • OTP requires an additional server to receive requests relayed from the authenticating server.
  • Entering a password using OTP takes more time than entering a password the user has memorized.
  • OTP can be expensive in large networks.

When looking at the overall ratings for OTP, it is clearly a valuable technology, just not used everywhere. Most organizations choose to use OTP for critical systems in their security policy or for locations where password-cracking attempts are high. For the typical organization, this can mean financial and human resource (HR) systems, as well as remote access systems such as dial-up or virtual private networks (VPNs).

Basic PKI

Table 4-5 shows the summary information for basic PKI.

Table 4-5. Basic PKI

Name: Basic PKI
Common example: Entrust Authority PKI
Attack elements detected: (none)
Attack elements prevented: Identity spoofing, direct access
Difficulty in attacker bypass: 3
Ease of network implementation: 2
User impact: 2
Application transparency: 3
Maturity of technology: 3
Ease of management: 1
Performance: 4
Scalability: 3
Financial affordability: 3
Overall value of technology: 54

PKI is designed as a mechanism to distribute digital certificates that verify the identity of users. Digital certificates are public keys signed by a certificate authority (CA). Certificate authorities validate that a particular digital certificate belongs to a certain individual or organization. At a high level, all a PKI attempts to accomplish is to validate that when Alice and Bob talk to one another, Alice can verify that she is actually talking to Bob and vice versa. It sounds simple, yet it is anything but.
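Conceptually, certificate validation is a walk up a chain of signatures until a trusted root is reached. The following sketch shows that walk in the abstract; the certificate fields and the verify_signature() primitive are placeholders for what X.509 formats and asymmetric cryptography provide in a real PKI, and a non-circular chain is assumed.

def validate_chain(cert, issuers_by_name, trusted_roots, verify_signature):
    # Follow issuer names upward, checking each signature along the way.
    while True:
        if cert["issuer"] in trusted_roots:
            root = trusted_roots[cert["issuer"]]
            return verify_signature(root["public_key"], cert)
        issuer = issuers_by_name.get(cert["issuer"])
        if issuer is None or not verify_signature(issuer["public_key"], cert):
            return False        # unknown issuer or bad signature: no trust
        cert = issuer

Everything in a PKI deployment (revocation, naming, key storage) exists to make that final verify_signature() answer meaningful, which is where most of the practical difficulty lies.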

PKI is a technology that has come under fire in recent years. Never in recent memory has a security technology arrived with such fanfare and failed to deliver on its promise on so many counts. It's been available in one form or another for a number of years, but when I ask audiences at speaking engagements, "Raise your hand if you have a PKI system in place for user identity," I still see fewer than 5 percent of hands go up. Much has been written about problems in PKI systems. The best place to start is a paper titled "Ten Risks of PKI: What You're Not Being Told About Public Key Infrastructure" by B. Schneier and C. Ellison. It is available at the following URL: http://www.schneier.com/paper-pki.html.

Some of the risks are listed in Table 4-5. PKI is hard to manage, expensive, and, without an accompanying technology such as smart cards or biometrics, difficult for users to use securely, in part because of the difficulty in securely storing a private key on a general-purpose PC.

There are two types of PKI systems, open and closed. An open PKI system is one in which a hierarchy of trust exists to allow multiple organizations to participate. The Secure Sockets Layer (SSL) certificates that your browser validates when making a "secure" online purchase are the best example. CAs certify one another and then certify that particular merchants are who they say they are when you elect to make a purchase. It is these open systems that are under fire from security researchers. The biggest barrier to the successful operation of an open PKI is that you must completely trust the organization asserting an identity; otherwise, you aren't provided any security. Organizations that can command implicit trust globally do not exist on the Internet. Some PKI providers, for example, disclaim all liability for the certificates they provide, certainly not an action that fills users with confidence.

A closed PKI is one in which the CA is contained within a single organization and certifies only the identities of a limited group. Closed PKIs have had the most utility. By avoiding the sticky issues of whom to trust and how to manage identity when there are thousands of "John Smiths" in the world, a closed PKI can provide a passable solution in these limited applications. This book primarily talks about closed PKIs in the context of managing identity for VPN connections since Internet Protocol Security (IPsec) is best implemented with public key cryptography in large site-to-site deployments. PKI considerations are discussed further in Chapter 9 and Chapter 10, "IPsec VPN Design Considerations."

Smart Cards

Table 4-6 shows the summary information for smart cards.

Table 4-6. Smart Cards

Name: Smart cards
Common example: Gemplus
Attack elements detected: (none)
Attack elements prevented: Identity spoofing, direct access
Difficulty in attacker bypass: 4
Ease of network implementation: 2
User impact: 2
Application transparency: 2
Maturity of technology: 2
Ease of management: 2
Performance: 4
Scalability: 4
Financial affordability: 2
Overall value of technology: 55

Smart cards are self-contained computers. They have their own memory, microprocessor, and serial interface to the smart card reader. All of this is contained on a credit card-sized object or smaller (as is the case with subscriber identity module [SIM] cards in Global System for Mobile Communications [GSM] phones).

From a security perspective, smart cards offer the ability to store identity information in the card, from which it can be read by a smart card reader. Smart card readers can be attached to a PC to authenticate users to a VPN connection or to another networked system. Smart cards are a safer place to store private keys than the PC itself: if your PC is stolen, for instance, your private key doesn't go with it.

NOTE

Smart cards have uses far beyond the scope of this book. They are used throughout Europe in financial transactions, and certain satellite TV systems use smart cards for authentication.

Since smart cards are a lot like a regular computer in principle, they are subject to most of the same attacks. The specific attacks against smart cards are beyond the scope of this book but can be found in Ross Anderson's excellent book Security Engineering: A Guide to Building Dependable Distributed Systems (Wiley, 2001).

Because readers are not built into most PCs, the cost incurred in deploying smart cards throughout an organization can be considerable, not just in capital expenses but in long-term management expenses.

Biometrics

Table 4-7 shows the summary information for biometrics.

Table 4-7. Biometrics

Name: Biometrics
Common example: Market is too new
Attack elements detected: (none)
Attack elements prevented: Identity spoofing, direct access
Difficulty in attacker bypass: 3
Ease of network implementation: 1
User impact: 4
Application transparency: 1
Maturity of technology: 1
Ease of management: 3
Performance: 3
Scalability: 3
Financial affordability: 1
Overall value of technology: 52

Biometrics incorporates the idea of using "something you are" as a factor in authentication. It can be combined with something you know or something you have. Biometrics can include voice recognition, fingerprints, facial recognition, and iris scans. In terms of enterprise security, fingerprint recognition systems are the most economical biometric technology. The main benefit of biometrics is that users don't need to remember passwords; they just stick their thumb on a scanner and are granted access to a building, PC, or VPN connection if properly authorized.

Biometrics should not be deployed in this fashion, however. The technology isn't mature enough, and even if it were, relying on a single factor for authentication leaves you open to holes. One option is to consider biometrics as a replacement for a smart card or an OTP. The user still must combine the biometric authentication with a PIN of some sort.

A significant barrier to biometrics is that it assumes a perfect system. That is, one of the foundations of public key cryptography is that a certificate can be revoked if it is found to be compromised. How, though, do you revoke your thumb? Biometrics also assumes strong security from the reader to the authenticating system. If this is not the case, the biometric information is in danger of compromise as it transits the network. Once this information is compromised, attackers can potentially launch an identity spoofing attack claiming a false identity. (This is one of the main reasons including a second factor in the authentication process is desirable.)

The final problem with biometrics, although it could also be considered a strength from an ease-of-use standpoint, arises when the same biometric data is used in disparate systems. If my government, employer, and bank all use fingerprints as a form of identification, a compromise of one of those systems could allow all of them to be compromised. After all, your biometric data is only as secure as the least-secure location where it is stored. For all these reasons, look carefully at the circumstances around any potential biometric solution to an identity problem.

Identity Technologies Summary

Table 4-8 shows the summary scores for the various identity technologies.

Table 4-8. Identity Technology Summary

Attack Element                  Reusable Passwords  RADIUS and TACACS+  OTP  PKI  Smart Cards  Biometrics
Detection                       14                  14                  0    0    0            0
Prevention                      39                  39                  81   81   81           81
Bypass                          2                   3                   5    3    4            3
Ease of network implementation  5                   4                   4    2    2            1
User impact                     4                   4                   2    2    2            4
Application transparency        5                   4                   3    3    2            1
Maturity                        5                   5                   5    3    2            1
Ease of management              4                   5                   2    1    2            3
Performance                     5                   4                   4    4    4            3
Scalability                     3                   5                   4    3    4            3
Affordability                   5                   4                   3    3    2            1
Overall                         59                  61                  63   54   55           52

From this chart, based on the weightings and rankings, OTP seems to provide the most overall security while picking up the fewest detrimental ratings among the technologies. When designing a system with strong authentication requirements, RADIUS and TACACS+ combined with OTP could be a good solution. Although the PKI variants scored lower, their use in specific applications is unavoidable. IPsec gateways in a site-to-site VPN, for example, can't take advantage of OTP because there is no one there to enter the passcode when they authenticate with another peer.

Likewise, reusable passwords with TACACS+ are necessary for large-scale router management. Automated scripts can't easily enter dynamically generated passcodes. More details on identity can be found in Chapter 9.

Host and Application Security

Host and application security relates to the technologies running on the end system to protect the operating system, file system, and applications. Several security technologies are discussed here, none of which are part of the traditional domain of network security. It is important, though, to understand their functions because, to deploy a security system that satisfies the requirement of "defense-in-depth," application security cannot be ignored. This section covers the following technologies:

  • File system integrity checkers
  • Host firewalls
  • HIDS
  • Host antivirus

File System Integrity Checking

Table 4-9 shows the summary information for file system integrity checking.

Table 4-9. File System Integrity Checking

Name: File system integrity checking
Common example: Tripwire
Attack elements detected: Application manipulation, rootkit, virus/worm/Trojan
Attack elements prevented: (none)
Difficulty in attacker bypass: 4
Ease of network implementation: 5
User impact: 4
Application transparency: 5
Maturity of technology: 5
Ease of management: 3
Performance: 5
Scalability: 3
Financial affordability: 5
Overall value of technology: 61

File system integrity checking is a fundamental security technology for critical hosts and servers. File system checkers work by storing a hash value of each critical file within a file system. In this way, if a rootkit, virus, or other attack modifies a critical system file, the change is discovered the next time the hash values are computed. Although file system checkers do not prevent the attack in the first place, a failed integrity check can indicate a problem that requires immediate attention.
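The mechanism is easy to sketch: hash each critical file, store the results as a baseline, and later report any file whose hash no longer matches. The Python fragment below is a minimal illustration (the baseline file name is an assumption); real tools such as Tripwire also sign or otherwise protect the baseline so an attacker can't simply rewrite it.

import hashlib, json, os

def hash_file(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths, baseline="baseline.json"):
    # Run once on a known-good system.
    with open(baseline, "w") as f:
        json.dump({p: hash_file(p) for p in paths}, f)

def check_baseline(baseline="baseline.json"):
    # Return the list of files that are missing or have changed.
    with open(baseline) as f:
        expected = json.load(f)
    return [p for p, digest in expected.items()
            if not os.path.exists(p) or hash_file(p) != digest]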

Although there has been some discussion of attacks to bypass these tools (see http://www.phrack.org issue 51 for an example), file system checkers are a good layer of defense for critical systems.

Host-Based Firewalls

Table 4-10 shows the summary information for host-based firewalls.

Table 4-10. Host-Based Firewalls

Name: Host-based firewalls
Common example: IPFilter
Attack elements detected: Probe/scan
Attack elements prevented: Direct access, remote control software
Difficulty in attacker bypass: 2
Ease of network implementation: 2
User impact: 2
Application transparency: 1
Maturity of technology: 2
Ease of management: 1
Performance: 4
Scalability: 2
Financial affordability: 3
Overall value of technology: 51

Host-based firewalls are also commonly called personal firewalls when run on a client PC. They are exactly what their name describes: a firewall running on a host configured to protect only the host. Many host-based firewalls offer IDS in the form of rudimentary application checks for the system they are configured to protect.

Trying to maintain these firewalls on all client PCs, or even just on critical systems, is operationally burdensome. Organizations have enough trouble managing firewalls when they exist only at security perimeters. Like network firewalls, host firewalls are only as good as their configuration. Some firewalls have wizards to aid in configuration; others ask the user to select a security posture such as default, cautious, or paranoid.

As the configuration of a host firewall increases in security, the impact on the host increases as well. This is particularly true with user PCs, which often have lots of applications running. One popular firewall prompts the user when an unknown application attempts to access the network. Often, though, the application is referenced using an obscure system file rather than the name of the application. This can confuse the user, who might make an incorrect choice, causing failures on your system and an increase in calls to your support center.

Host firewalls are getting better over time and will become more viable as their manageability increases. Many modern OSs ship with firewall software built-in; often it is basic in its function, but that also reduces the chances of user error. With today's technology, it is best to stick with basic firewall configuration for user PCs (for instance, allowing any outbound traffic but not allowing any inbound traffic). Deploy host firewalls on server systems only when it is significantly adding to the security of that host.

For example, if you already have a network firewall in front of some servers, adding a host firewall might improve security slightly, but it will increase the management requirements significantly. If, on the other hand, a system must be out in the open, a host firewall could be a good option. In the design sections of this book, you will see certain cases in which a host firewall makes sense to deploy. Timely patching and host hardening do more to secure a host than a firewall does, so never consider a firewall as an alternative to good system administration practices. Host firewalls should augment basic host security practices, not replace them.

HIDS

Table 4-11 shows the summary information for HIDS.

Table 4-11. HIDS

Name: Host intrusion detection systems (HIDS)
Common example: Entercept
Attack elements detected: Probe/scan, direct access, application manipulation, TCP SYN flood, transport redirection, remote control software
Attack elements prevented: See the following description
Difficulty in attacker bypass: 4
Ease of network implementation: 5
User impact: 4
Application transparency: 2
Maturity of technology: 2
Ease of management: 2
Performance: 4
Scalability: 3
Financial affordability: 3
Overall value of technology: 61

Host intrusion detection is one of the broadest categories in this chapter. It comprises post-event-log-analysis tools, host audit and hardening tools, and inline host tools designed to stop rather than detect attacks. The ratings in Table 4-11 are an attempt to average their function. When considering HIDS in your own network, carefully consider the features you need. When you understand the capabilities of the system you are deploying, you should rebuild the preceding list of attacks and ratings based on the tool you select. HIDS are designed to detect or prevent attacks on a host. Their methods of operation are as varied as the companies that offer solutions in this category.

HIDS tools have the same tuning requirements as network IDS (NIDS) to reduce false positives and ensure the appropriate attacks are flagged. Tuning can be simplified with HIDS because it is specific to a host as opposed to the network as a whole. IDS tuning, focusing on NIDS, is discussed in Chapter 7, "Network Security Platform Options and Best Deployment Practices." Tuning can be complicated, however, if it is deployed on many different systems each with different functions because lessons learned tuning one host with specific applications do not translate to another host.

NOTE

The table for HIDS assumes that a HIDS was deployed in detect-only mode if it has prevention capabilities. If you are able to deploy a HIDS in protection mode by tuning the system properly and performing adequate testing, the overall score for the HIDS will increase significantly.

HIDS, like host firewalls, suffer from manageability issues. As such, it is inappropriate (as well as financially prohibitive) to deploy HIDS on all hosts. On critical systems, the management burden can be contained with careful tuning and adequate staffing.

HIDS will only get better as innovation continues. In principle, attacks are most easily prevented the closer you get to the victim host, because the closer you get to the host, the greater the likelihood that the attack is not a false positive. The seriousness of the attack increases as well because, by the time it gets near the host, it has probably passed unfettered through a firewall and other network controls. HIDS sits on the victim host, so it is tough to get much closer. Attack prevention at the victim host is also easier because only at the host is there total awareness of application state and configuration.

Host Antivirus

Table 4-12 shows the summary information for host antivirus (AV).

Table 4-12. Host AV

Name: Host antivirus
Common example: McAfee VirusScan
Attack elements detected: (none)
Attack elements prevented: Virus/worm/Trojan horse, remote control software
Difficulty in attacker bypass: 3
Ease of network implementation: 5
User impact: 3
Application transparency: 4
Maturity of technology: 5
Ease of management: 4
Performance: 4
Scalability: 4
Financial affordability: 4
Overall value of technology: 66

Host AV is a foundation technology in computer security. It is probably the most widely deployed security add-on technology in the world. It is worth deploying on almost every server in your network and on all Microsoft Windows-based user hosts (since they are the source of the majority of all viruses). AV systems work by building a virus signature database. The software then monitors the system for signature matches, which indicate virus infection. All good AV systems have the ability to clean a virus out of an infected file or, when they can't, quarantine the file so it doesn't do any further damage.
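At its core, signature matching is a search for known byte patterns. The sketch below illustrates the idea with placeholder signatures; production AV engines add file-format awareness, unpacking, heuristics, and memory scanning on top of this.

import os, shutil

# Placeholder patterns, not real virus signatures.
SIGNATURES = {
    "example-virus-a": bytes.fromhex("deadbeef"),
    "example-virus-b": b"MALICIOUS-MARKER",
}

def scan_file(path):
    # Return the names of any signatures found in the file.
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def quarantine(path, quarantine_dir="quarantine"):
    # Move rather than delete so the file can be cleaned or examined later.
    os.makedirs(quarantine_dir, exist_ok=True)
    shutil.move(path, os.path.join(quarantine_dir, os.path.basename(path)))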

This method of signature checking has one primary weakness: zero-day viruses. A zero-day virus is a virus that no one knows about, so no signature is available. These viruses cause the most damage because they pass by AV software.

A second weakness is that AV software is only as good as its last signature update. Many organizations tackle this problem with software updates that occur when the user logs on to the network. Unfortunately, with the advent of portable computers that go into standby mode as opposed to being shut down, the actual act of logging on to a network is decreasing in frequency. As an alternative, most AV vendors now advocate web-based updates that work on a time basis (checking weekly or daily for updates). This moves the AV management burden onto the vendor of the AV and can keep your systems up-to-date. Such time-based checks can also be hosted at your organization if desired. These local systems can also employ a push model of signature updates in the event of a crisis (when a critical zero-day virus finally has a signature available).

Even with all the good that AV systems do, infections are common because of the zero-day and update problems. The 2002 Computer Security Institute (CSI) computer security survey had an interesting statistic on this. Of the 503 survey responses the institute received, 90 percent said their organizations use AV software, but 85 percent said they suffered a virus outbreak over the course of the year. The CSI survey is filled with all sorts of interesting information and can be downloaded at the following URL: http://www.gocsi.com.

The ability of AV software to detect remote control software, worms, and Trojan horses varies quite a bit. Common software such as Back Orifice 2000 (BO2K) has signatures in AV software, but the application can be modified by the attacker, making it difficult to detect. If you are attacked by Trojan horse software that your AV system does not detect, this is just like a zero-day virus, which is why user education is so important. Teaching users the proper way to deal with attachments to e-mail messages can significantly reduce the spread of some zero-day exploits.

WARNING

As a word of caution, don't go crazy turning on every bell and whistle in your AV package; this can make systems take longer to boot and degrade their performance.

For instance, one year, I sat down at a family member's computer over the holidays to check something on the Net. I was amazed at how slow the system was; every action seemed to take forever (on or off the Internet). After asking about it, I was told that it was because I wasn't used to the dial-up Internet connection, having been spoiled by broadband at home. However, I came to find out that the computer was running antivirus software with every feature turned on. The system was constantly checking executables before they were run, scanning memory for viruses, and generally doing a good job of turning an Intel Pentium II machine into a 386.

 

Host and Application Security Summary

Table 4-13 shows the summary scores for the various host-based technologies.

Table 4-13. Host and Application Security Summary

Attack Element                  File System Checkers  Host Firewalls  HIDS  Host AV
Detection                       52                    12.33           83    0
Prevention                      0                     76              0     79
Bypass                          4                     2               4     3
Ease of network implementation  5                     2               5     5
User impact                     4                     2               4     3
Application transparency        5                     1               2     4
Maturity                        5                     2               2     5
Ease of management              3                     1               2     4
Performance                     5                     4               4     4
Scalability                     3                     2               3     4
Affordability                   5                     3               3     4
Overall                         61                    51              61    66

As you might expect, host AV scores the best of the bunch. File system checking and HIDS also score well, with host firewalls scoring lower. You really can't go wrong with any of these technologies. Plenty of organizations use all of them in different parts of their networks: host AV almost everywhere, file system checking on all servers, HIDS on key servers, and host firewalls for traveling workers. Not all OSs support all of these technologies, however, so be aware of your own system options when planning your design. Unlike identity technologies, for which you wouldn't implement both OTP and PKI for the same application, host security options can be stacked together to achieve stronger host security.

Network Firewalls

Often the focal point of a secure network perimeter, network firewalls (hereafter called firewalls) allow ACLs to be applied to control access to hosts and applications running on those hosts. Firewalls certainly enhance security, though they can cause application problems and can impact network performance. This section outlines two common types of firewalls in use today:

  • Routers with Layer 3/4 stateless ACLs
  • Stateful firewalls

Routers with Layer 3/4 Stateless ACLs

Table 4-14 shows the summary information for routers with Layer 3/4 stateless ACLs.

Table 4-14. Routers with Layer 3/4 Stateless ACLs

Name: Router with Layer 3/4 stateless ACLs
Common example: Cisco IOS Router
Attack elements detected: Network flooding
Attack elements prevented: Direct access, network manipulation, IP spoofing, IP redirect
Difficulty in attacker bypass: 2
Ease of network implementation: 2
User impact: 3
Application transparency: 2
Maturity of technology: 5
Ease of management: 3
Performance: 3
Scalability: 3
Financial affordability: 5
Overall value of technology: 80

Routers with basic stateless ACLs are workhorses in network security. They deserve the name firewall just as much as a stateful appliance firewall does, even though they might lack certain features.

Basic ACLs, shown throughout this book, allow an administrator to control the flow of traffic at L3, L4, or both. An ACL to permit only network 10.1.1.0/24 to reach host 10.2.3.4 over Secure Shell (SSH) (TCP port 22) looks like this:

access-list 101 permit tcp 10.1.1.0 0.0.0.255 host 10.2.3.4 eq 22

Because the ACL is stateless, the following ACL is needed in the opposite direction to be as restrictive as possible:

access-list 102 permit tcp host 10.2.3.4 eq 22 10.1.1.0 0.0.0.255 established

Because the ACL is stateless, the router has no idea whether a persistent SSH session is in place. This leads to the principal limitation of basic ACLs: all a stateless ACL can do is match each packet against the rules applied to an interface. For example, even if there were no SSH session to 10.2.3.4 from network 10.1.1.0/24, host 10.2.3.4 could send traffic to the 10.1.1.0/24 network provided the source port is 22. The established keyword on the ACL adds only the additional requirement that the acknowledgment (ACK) or reset (RST) bit is set in the TCP header.
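The following Python sketch, an illustration rather than anything a router runs, shows why the established keyword is a weak check: any crafted packet claiming source port 22 with the ACK or RST bit set matches access list 102, whether or not an SSH session actually exists.

from ipaddress import ip_address, ip_network

def matches_acl_102(packet):
    # Stateless evaluation: only the fields in this one packet are examined.
    return (packet["proto"] == "tcp"
            and ip_address(packet["src_ip"]) == ip_address("10.2.3.4")
            and packet["src_port"] == 22
            and ip_address(packet["dst_ip"]) in ip_network("10.1.1.0/24")
            and bool(packet["flags"] & 0x14))     # ACK (0x10) or RST (0x04) set

crafted = {"proto": "tcp", "src_ip": "10.2.3.4", "src_port": 22,
           "dst_ip": "10.1.1.25", "dst_port": 31337, "flags": 0x10}
print(matches_acl_102(crafted))   # True, even though no SSH session is in place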

The ACLs on a router aren't the only security-related features available. A network manipulation attack is prevented by hardening the router (for example, using no ip source-route). IP redirection can be prevented with the proper authentication of your routing traffic.

NOTE

Because the router is so versatile from a security perspective, there are more attacks that could be added to the prevent list. TCP SYN floods can be stopped with TCP Intercept; smurf attacks, in part, with no ip directed-broadcast; and so on. As you get into the design sections of this book, the considerations around these different features are discussed more fully.

 

Stateful Firewalls

Table 4-15 shows the summary information for stateful firewalls.

Table 4-15. Stateful Firewalls

Name: Stateful firewalls
Common example: Cisco PIX Firewall, Cisco IOS Firewall
Attack elements detected: Network flooding
Attack elements prevented: Direct access, network manipulation, IP spoofing, TCP SYN flood
Difficulty in attacker bypass: 4
Ease of network implementation: 3
User impact: 4
Application transparency: 3
Maturity of technology: 5
Ease of management: 4
Performance: 4
Scalability: 4
Financial affordability: 4
Overall value of technology: 89

A stateful firewall has many of the same capabilities as a router with ACLs, except the stateful firewall tracks connection state. In the SSH example in the preceding section, the second line allowing the return traffic is not necessary. The firewall knows that a host on network 10.1.1.0/24 initiated the SSH session, so allowing the return traffic from the server is automatic. As a result, the stateful firewall provides increased security (part of the reason the bypass score is higher) because the SSH server would be unable to initiate any communications to the 10.1.1.0/24 network without an established session. Merely setting the ACK or RST bit in the TCP header is not enough to get past the firewall. Although they vary in implementation, most stateful firewalls track the following primary values in their connection tables:

  • Source port
  • Destination port
  • Source IP
  • Destination IP
  • Sequence numbers

It is the last entry, sequence numbers, that provides the most differentiation from a basic ACL. Without guessing the proper sequence number, an attacker is unable to inject traffic into an established session, even if the attacker can successfully spoof the other four fields.
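A minimal sketch of that connection-table logic follows. It is conceptual only: real stateful firewalls track TCP flags, state transitions, timeouts, and exact windows, but even this skeleton shows why return traffic is rejected unless a matching session was initiated from the inside.

connections = {}   # connection table keyed by the values listed above

def note_outbound(src_ip, src_port, dst_ip, dst_port, isn):
    # Record the client-initiated session and its initial sequence number.
    connections[(src_ip, src_port, dst_ip, dst_port)] = {"client_isn": isn}

def allow_return(src_ip, src_port, dst_ip, dst_port, ack, window=65535):
    # Return traffic must reverse the addresses and ports of a stored entry.
    entry = connections.get((dst_ip, dst_port, src_ip, src_port))
    if entry is None:
        return False    # nothing was initiated from inside, so drop it
    # A spoofed packet must also carry a plausible acknowledgment number,
    # which is the hurdle a basic ACL never imposes.
    return entry["client_isn"] < ack <= entry["client_isn"] + window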

In addition to stateful connection tracking, stateful firewalls often have built-in TCP SYN flood protection. Such protection causes the firewall to broker TCP connections on behalf of the server when a certain half-open connection limit is reached. Only when the connection request is assured as being legitimate is the server involved. From a management perspective, stateful firewalls generally offer more security-oriented functions than routers with ACLs. Their configuration in a security role is easier, and the messages generated in response to attacks tend to be more descriptive.

Beyond L4, stateful firewalls diverge greatly in functionality. Expect any decent firewall to have basic L7 coverage just to ensure that applications work. For example, without some knowledge of the port command in File Transfer Protocol (FTP), active-mode FTP will never work no matter how much state you keep on the connection. Beyond these basic L7 functions, some firewalls offer restricted implementations of Simple Mail Transfer Protocol (SMTP) (limiting the user to "benign" commands) or more advanced Hypertext Transfer Protocol (HTTP) controls (GET versus PUT/POST).

Beyond the basic L7 functionality, anything else is a bonus to the network designer, provided it doesn't impact performance or network utility. The designs in this book assume minimal L7 functionality on the firewall and attempt to design networks in consideration of this. If you are able to use a firewall with more advanced functions without the performance impact, so much the better.

Stateful firewalls are also available in a number of form factors. The most common are router integrated, general-purpose PC, standalone appliance, and switch integrated. These options are discussed at great length in Chapter 7.

Network Firewalls Summary

Table 4-16 shows the summary scores for the two network firewall options.

Table 4-16. Network Firewall Summary

Attack Element                  Router with ACL  Stateful Firewall
Detection                       19.67            19.67
Prevention                      123              125
Bypass                          2                4
Ease of network implementation  2                3
User impact                     3                4
Application transparency        2                3
Maturity                        5                5
Ease of management              3                4
Performance                     3                4
Scalability                     3                4
Affordability                   5                4
Overall                         80               89

You probably expected stateful firewalls to do better in this comparison, and they did, although I was surprised to see how small the margin is. This is primarily because routers (because they control routing) can stop IP redirection attacks, which offsets their lack of the TCP SYN flood protection that the stateful firewall includes. In actual deployments, though, routers can do SYN protection with TCP Intercept, and some firewalls can route, allowing them to take part in IP redirection prevention.

Most designs in this book utilize stateful firewalls when available primarily because of the increased security of not opening high ports and the enhanced security management available.

Content Filtering

This section discusses proxy servers, web filtering, and e-mail filtering. These technologies are best deployed in addition to a network firewall deployment and act as another layer of protection in your security system.

Proxy Servers

Table 4-17 shows the summary information for proxy servers.

Table 4-17. Proxy Servers

Name: Proxy server
Common example: SOCKS Proxy
Attack elements detected: (none)
Attack elements prevented: Direct access
Difficulty in attacker bypass: 3
Ease of network implementation: 4
User impact: 2
Application transparency: 1
Maturity of technology: 4
Ease of management: 3
Performance: 2
Scalability: 3
Financial affordability: 4
Overall value of technology: 43

Proxy servers (also called application gateways) terminate all sessions destined for a server and reinitiate them on behalf of the client. In the mid-1990s, a fierce debate took place over whether application gateways were more secure than firewalls. Today, it is generally accepted that this is not the case. The strength of any security device should be measured against the types of attacks it mitigates as opposed to the method by which it mitigates those attacks.

Proxy servers are slower than firewalls by design because they must reestablish sessions for each connection. That said, if deployed as a caching and authentication solution for your users, the perceived speed might be greater because of the caching. However, proxy servers are certainly not the only location in which to do content caching.

Proxy servers also have some difficulty with application support. If you plan to proxy an application through a proxy server, the server must understand enough about the protocol to allow the traffic to pass. The SOCKS protocol works around this by tunneling all desired protocols over a single connection to the proxy. Then it is up to the proxy to handle the connection.

If you assume that a stateful firewall of some kind is deployed in your security system (and this is the case in almost all designs proposed in this book), the deployment of a proxy server becomes primarily a choice of user control. Some organizations choose to have their proxy server sitting behind a firewall controlling all user traffic outbound to the Internet. At this point, user authentication, URL filtering, caching, and other content control all become possible for the user community behind the proxy server. This allows very tight access rights to be defined for a user community, while allowing other users (when appropriate based on policy) unrestricted access to the Internet. This also leaves the firewall free to control traffic at the security perimeter without being concerned with user rights. Proxy server placement is discussed in detail in Chapter 7.

Web Filtering

Table 4-18 shows the summary information for web filtering.

Table 4-18. Web Filtering

Name: Web filtering
Common example: Websense
Attack elements detected: (none)
Attack elements prevented: Direct access, virus/worm/Trojan
Difficulty in attacker bypass: 3
Ease of network implementation: 4
User impact: 1
Application transparency: 4
Maturity of technology: 3
Ease of management: 4
Performance: 1
Scalability: 1
Financial affordability: 3
Overall value of technology: 53

Web filtering refers to the class of tools designed to restrict access from your organization out to the Internet at large. The two primary technologies to do this are URL filtering and mobile code filtering. URL filtering works by sending users' web requests to the URL-filtering server, which checks each request against a database of allowed sites. A permitted request allows the user to go directly to the site, and a denied request either sends an access denied message to the user or redirects the user to another website. The typical use of URL filtering is to ensure that users are visiting only appropriate websites.
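The lookup itself is simple, as the sketch below shows; the category database is a placeholder for the vendor-maintained database a product such as Websense supplies, and the denied-page URL is an assumption.

from urllib.parse import urlparse

BLOCKED_CATEGORIES = {"gambling", "malware"}
SITE_CATEGORIES = {"casino.example.com": "gambling", "news.example.com": "news"}

def filter_request(url, denied_page="http://intranet.example.com/denied.html"):
    host = urlparse(url).hostname or ""
    category = SITE_CATEGORIES.get(host, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        return ("deny", denied_page)     # access-denied message or redirect
    return ("allow", url)                # the user goes directly to the site

The hard part is not the lookup but keeping the category database accurate, which is why false positives against legitimate sites are a recurring complaint.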

Mobile code filtering is the process of "scrubbing" user web traffic for malicious code. Here, some web-based attacks that use mobile code are stopped at the gateway. Different vendors have different methods of doing this; most have all the traffic proxied through the mobile code scanner so that inappropriate code can be stripped before it is sent to the user.

The main problem with web-filtering tools is that they tend to perform poorly and scale even worse. Attempting to implement these tools in a large enterprise is very difficult and can have severe performance impacts. If you have a smaller network, the impact might be reduced. In those small networks, though, the benefits of centralized scanning become less significant because it is easier to police systems individually.

NOTE

Depending on the firewall and web-filtering vendor, there might be some rudimentary integration between the two systems. For example, certain firewalls can be configured to route web requests to the URL-filtering system directly before sending them on their way. This prevents the users from having to configure a proxy server. Unfortunately, this integration doesn't increase performance much; it is done more to ease user configuration.

The other issue is the Big Brother aspect of these technologies (particularly the URL filtering). In certain environments they clearly make sense (in elementary schools and similar environments), but most organizations should carefully weigh the benefits that web filtering provides in relation to the pains it will cause users. False-positive matches are not uncommon with these systems (which often rely on a team of web-surfing individuals at the vendor to classify traffic into various categories). Installing technology like this tends to imply you don't trust your users (which may well be the case). You also should be prepared for increased support issues as users try to get work done on the network and are impeded by the web-filtering applications.

Finally, although these tools have the ability to stop certain types of viruses, worms, and Trojan horses, the protection isn't comprehensive and shouldn't be relied on by itself. (This is true for most of the technologies in this chapter.)

E-Mail Filtering

Table 4-19 shows the summary information for e-mail filtering.

Table 4-19. E-Mail Filtering

Name: E-mail filtering
Common example: MIMEsweeper
Attack elements detected: (none)
Attack elements prevented: Virus/worm/Trojan, remote control software
Difficulty in attacker bypass: 4
Ease of network implementation: 5
User impact: 5
Application transparency: 4
Maturity of technology: 4
Ease of management: 4
Performance: 4
Scalability: 4
Financial affordability: 3
Overall value of technology: 69

E-mail filtering performs the same basic function as web filtering. The mail-filtering gateway inspects incoming and outgoing mail messages for malicious content. For most organizations, this means scanning incoming and outgoing e-mail for viruses. This can also mean scanning outbound e-mail for confidential information, although this can be more problematic because there is no centralized signature database to go to.

E-mail virus filtering augments host-based scanning by providing a centralized point at which e-mail attachments can be scanned for viruses. An infected file can be either cleaned or deleted, and then the e-mail message can be sent to the user with the problem file flagged. When a virus outbreak occurs, the e-mail-filtering gateway can be updated with the appropriate signatures much faster than the rest of the network.

Although e-mail filtering has many of the same performance and scaling considerations as web filtering, users generally do not notice because e-mail isn't a real-time communications medium. That said, be sure to scale your e-mail-filtering implementation to the message load on your e-mail system. Because of the ease with which e-mail can become a conduit for viruses, e-mail-filtering systems should have a place in most security systems.

Content-Filtering Summary

Table 4-20 shows the summary scores for the content-filtering options.

Table 4-20. Content-Filtering Summary

Attack Element                  Proxy Server  Web Filtering  E-Mail Filtering
Detection                       0             0              0
Prevention                      39            81             79
Bypass                          3             3              4
Ease of network implementation  4             4              5
User impact                     2             1              5
Application transparency        1             4              4
Maturity                        4             3              4
Ease of management              3             4              4
Performance                     2             1              4
Scalability                     3             1              4
Affordability                   4             3              3
Overall                         43            53             69

Because the ratings in this chapter are skewed toward threat prevention, the overall ratings for the content-filtering technologies are lower than those in other sections. E-mail filtering has a clear security benefit, as do portions of web filtering (mobile code filtering). Proxy servers serve more of a user control function than a security role, so the rating results are expected.

In your own environment, the deployment of these technologies is primarily about policy enforcement. If your security policy dictates that user access to the World Wide Web should be authenticated and controlled and likewise that all e-mail messages should be scanned for viruses, you probably must deploy all three technologies, regardless of their ratings. If, however, user authentication and control is not required, you might deploy only e-mail filtering and leave the caching to network-based cache systems, which don't require user configuration to access.

Network Intrusion Detection Systems

NIDS can act as another layer of detection in your security system. This section discusses the two primary NIDS options: signature based and anomaly based.

Signature-Based NIDS

Table 4-21 shows the summary information for signature-based NIDS.

Table 4-21. Signature-Based NIDS

Name: Signature-based NIDS
Common example: Cisco IDS
Attack elements detected: Probe/scan, network manipulation, application manipulation, IP spoofing, network flooding, TCP SYN flood, ARP redirection, virus/worm/Trojan, remote control software
Attack elements prevented: (none)
Difficulty in attacker bypass: 3
Ease of network implementation: 3
User impact: 4
Application transparency: 2
Maturity of technology: 2
Ease of management: 1
Performance: 3
Scalability: 3
Financial affordability: 2
Overall value of technology: 68

The vast majority of NIDS operate much like sniffers. The device sits on a network with its interfaces in promiscuous mode, watching for suspect traffic. When a packet, or set of packets, matches a configured signature, an alarm is generated on the management console.

Considering the data in Table 4-21, it is easy to see why NIDS technology gathered so much momentum as a security technology. It has the most comprehensive set of detected attacks in this entire chapter. Unfortunately, the implementation, tuning, and manageability concerns have made it a difficult technology from which to realize significant value. This is why the numeric ratings tend to be lower, and the overall score of the technology suffers as a result.

NIDS have the ability to actively stop an attack, though most deployments do not enable these features. (See the IDS deployment discussion in Chapter 7 for more details.) Should the highlighted issues with IDS be resolved, using NIDS to stop attacks will become a more viable solution and will result in much greater usefulness.

All this being said, NIDS technology has a clear function in today's networks, provided your organization is staffed to deploy it properly. With its network-wide visibility, it can indicate problems more quickly than host auditing alone. The keys to a successful IDS deployment are placement, tuning, and proper management.

Anomaly-Based NIDS

Table 4-22 shows the summary information for anomaly-based NIDS.

Table 4-22. Anomaly-Based NIDS

Name: Anomaly-based NIDS
Common example: Arbor Networks Peakflow DoS
Attack elements detected: Network flooding, TCP SYN flood, virus/worm/Trojan
Attack elements prevented: (none)
Difficulty in attacker bypass: 4
Ease of network implementation: 4
User impact: 5
Application transparency: 5
Maturity of technology: 1
Ease of management: 3
Performance: 4
Scalability: 4
Financial affordability: 2
Overall value of technology: 51

The entire anomaly-based NIDS market has fallen victim to the marketing campaigns of companies trying to position their products. This has happened so much that the term "anomaly" is more of a buzzword than something with real teeth. Anomaly-based NIDS refers to a NIDS that learns normal behaviors and then logs exceptions. In the long run, this could apply to any type of attack, assuming the method of establishing a baseline is advanced enough.

Today, though, "anomaly based" generally refers to broader traffic metrics rather than finding the lone buffer overflow attack among an OC-3 of traffic.

These broader traffic metrics are not without merit, however. Their principal use today is in the detection of denial of service (DoS) conditions of both the malicious and the "flash crowd" variety. If the NIDS knows, for example, that an Internet link usually carries 200 kilobits per second (kbps) of Internet Control Message Protocol (ICMP) traffic, an alarm can be generated when that link spikes to 20 megabits per second (Mbps) of ICMP. The administrator can configure the tolerance levels.

The benefit this kind of system provides is that it generates a single alert to inform the administrator of the DoS condition. A traditional signature-based NIDS tool would see each ICMP flood attack as a discrete event that would warrant an alarm. By associating the alarms with the actual traffic load on the network, the number of false positives can be reduced. The signature-based NIDS also has no visibility into what is normal on the network; it only generates alarms when conditions match the thresholds specified by the administrator. For example, if that same 20 Mbps ICMP flood were launched against an OC-12 interface, you might want to know about it, but not with the same urgency as when it is launched against an E-1 link.
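Stripped to its essentials, the rate-based check can be sketched in a few lines; the baseline and tolerance values here are assumptions an administrator would supply, not defaults from any particular product.

def check_rate(protocol, current_kbps, baseline_kbps, tolerance=10.0):
    # Alarm only when traffic exceeds the learned baseline by the
    # configured multiple, such as 200 kbps of ICMP spiking to 20 Mbps.
    if current_kbps > baseline_kbps * tolerance:
        return (f"ALARM: {protocol} at {current_kbps:.0f} kbps "
                f"(baseline {baseline_kbps:.0f} kbps)")
    return None

print(check_rate("icmp", 20_000, 200))   # 20 Mbps against a 200 kbps baseline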

NOTE

Over time, I think the anomaly-based and signature-based NIDS markets will merge into a single product capable of providing either functionality, depending on what a specific attack type warrants.

 

NIDS Summary

Table 4-23 shows the summary scores for the two NIDS options.

Table 4-23. NIDS Summary

Attack Element                  Signature-Based NIDS  Anomaly-Based NIDS
Detection                       123                   43.67
Prevention                      0                     0
Bypass                          3                     4
Ease of network implementation  3                     4
User impact                     4                     5
Application transparency        2                     5
Maturity                        2                     1
Ease of management              1                     3
Performance                     3                     4
Scalability                     3                     4
Affordability                   2                     2
Overall                         68                    51

Even though anomaly-based NIDS tend to be easier to manage and harder to bypass for the attacks they detect, signature-based NIDS get the better score because of the increased number of attacks they detect. As these two technologies merge over the next several years, NIDS as a broad technology will benefit immensely. Today you can use anomaly NIDS to identify broad trends on your network and signature NIDS to focus on specific issues.

Cryptography

Properly implemented cryptography is designed to protect communication between two parties. It generally has three main properties:

  • The original message cannot be read by anyone but the intended party. (This is commonly called encryption.)
  • Both parties in the communication can validate the identity of the other party. (This is commonly called authentication.)
  • The message cannot be modified in transit without the receiving party knowing it has been invalidated. (This is commonly called integrity.)

Most security books spend 30 to 40 pages or more on basic cryptography concepts. This book assumes you have that foundation knowledge or at least have access to it. In the context of a security system, cryptography is a tool, just like anything else discussed in this chapter. The most common cryptographic methods are discussed in the next few sections:

  • Layer 2 cryptography
  • Network cryptography
  • L5 to L7 cryptography
  • File system cryptography

WARNING

This might seem obvious, but the attacks listed as "prevented" in this section refer only to systems protected by the cryptographic technique discussed. For example, if you have a VPN link between two sites, IP spoofing is prevented for the IP addresses that are taking part in IPsec. The rest of the communications are vulnerable as usual.

 

L2 Cryptography

Table 4-24 shows the summary information for L2 cryptography.

Table 4-24. L2 Cryptography

Name: L2 cryptography
Common example: WEP
Attack elements detected: (none)
Attack elements prevented: Identity spoofing, direct access, sniffer, MAC spoofing, man-in-the-middle
Difficulty in attacker bypass: 3
Ease of network implementation: 2
User impact: 5
Application transparency: 3
Maturity of technology: 3
Ease of management: 2
Performance: 3
Scalability: 3
Financial affordability: 3
Overall value of technology: 91

L2 cryptography is simply the process of performing cryptographic functions at Layer 2 of the OSI model. The most well-known, though provably insecure, L2 crypto is Wired Equivalent Privacy (WEP), which is used as part of the 802.11b standard for wireless LANs. L2 crypto was, and to a certain extent still is, used by financial institutions as link encryption for their WAN links. These so-called link encryptors sit after a router on a WAN, while an identical device sits on the other end of the link. Link encryptors are sometimes being replaced by network layer encryption devices. This is primarily because of lower costs, interoperability, and better manageability.

Network Layer Cryptography

Table 4-25 shows the summary information for network layer cryptography.

Table 4-25. Network Layer Cryptography

Name                             Network layer cryptography
Common example                   IPsec
Attack elements detected         None
Attack elements prevented        Identity spoofing, direct access, sniffer, IP spoofing, man-in-the-middle
Difficulty in attacker bypass    5
Ease of network implementation   2
User impact                      5
Application transparency         3
Maturity of technology           3
Ease of management               3
Performance                      3
Scalability                      3
Financial affordability          3
Overall value of technology      96

IPsec is such a de facto standard in network layer cryptography that I considered naming the category IPsec. IPsec is defined in RFCs 2401 through 2410 by the IETF. It is designed to be a flexible and interoperable method of providing L3 cryptography. It can operate in a number of modes, from simply authenticating messages to providing full encryption, authentication, and integrity. Like L2 encryption, the main benefit IPsec offers is the ability to provide encryption to multiple protocols with a single security negotiation. Session layer cryptography, discussed in the next section, is usually specific to a particular protocol, such as TCP in the case of SSH.

IPsec is used throughout this book whenever L3 cryptography is called for. IPsec is certainly not without its problems, but it is the best thing going right now. Much of the criticism of IPsec centers around the complexity of its operation. IPsec is flexible almost to its detriment. It is the standard development-by-committee problem: to please all parties involved, IPsec grew into a very complex beast. In the design chapters, this is evident in the complexity of the configurations.

NOTE

The IETF recognizes some of the difficulties with IPsec and is actively working to remedy them with new implementations of portions of the protocol. Internet Key Exchange (IKE) version 2 is a refinement of the original IKE and is currently under development.

As you learned in Chapter 1, confidentiality is not all there is to security. That said, IPsec can stop a lot of attacks. The deployment method assumed in this book is from IPsec gateway to IPsec gateway or from client PC to IPsec gateway. Although client PC to client PC is possible, the manageability considerations for this are enormous. IPsec design considerations are described in detail in Chapter 10.

TIP

As you will learn in Chapters 7 and 10, IPsec gateways can be standalone IPsec devices, routers or switches with IPsec capability, or firewalls with IPsec capability. All of these are viable deployment options depending on the requirements.

 

L5 to L7 Cryptography

Table 4-26 shows the summary information for L5 to L7 cryptography.

Table 4-26. L5 to L7 Cryptography

Name                             L5 to L7 cryptography
Common example                   SSL
Attack elements detected         None
Attack elements prevented        Identity spoofing, direct access, sniffer, man-in-the-middle
Difficulty in attacker bypass    5
Ease of network implementation   5
User impact                      3
Application transparency         2
Maturity of technology           4
Ease of management               4
Performance                      3
Scalability                      3
Financial affordability          4
Overall value of technology      88

For the purposes of secure networking design, L5 to L7 crypto (SSH, SSL, Pretty Good Privacy [PGP], and so on) should be viewed as an alternative to IPsec in application-specific situations. For example, it would be administratively impossible to use IPsec instead of SSL for encrypted web communications: every server would need to establish an L3 relationship with each client before encrypted communications could be sent. Likewise, using IPsec with Telnet as an alternative to SSH offers no advantage. SSH and SSL allow for reasonably secure communications by using reusable passwords and public keys on the server side.
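From the client's perspective this application-specific approach is very lightweight, which is the point of the example above. The following minimal Python sketch uses the standard library's ssl module (the host name is a placeholder, and current versions of the module negotiate TLS, the successor to SSL): the server is authenticated by its certificate and an encrypted channel is established on the fly, with no prearranged L3 relationship between the two hosts.

```python
import socket
import ssl

HOST = "www.example.com"  # placeholder host

# Build a TLS context that validates the server certificate against the
# system's trusted CAs, then wrap an ordinary TCP connection with it.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated:", tls_sock.version(), tls_sock.cipher())
        tls_sock.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() +
                         b"\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200).decode(errors="replace"))
```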

Where SSH and SSL have difficulty is in providing robust application support for all enterprise needs. Today's networks have a huge variety of applications that they must support. IPsec becomes a superior alternative to SSH/SSL when trying to support all of these applications in as consistent a manner as possible.

It all comes down to choosing the right security tool for the requirements. SSH and SSL are used in this book primarily for management communications and application-specific security requirements (such as e-commerce).

File System Cryptography

Table 4-27 shows the summary information for file system cryptography.

Table 4-27. File System Cryptography

Name                             File system cryptography
Common example                   Microsoft's Encrypting File System (EFS)
Attack elements detected         None
Attack elements prevented        Identity spoofing, direct access, rootkit, remote control software
Difficulty in attacker bypass    4
Ease of network implementation   5
User impact                      3
Application transparency         4
Maturity of technology           3
Ease of management               4
Performance                      4
Scalability                      4
Financial affordability          4
Overall value of technology      91

Although not an integrated component of network security, file system cryptography is overlooked enough that it should be addressed here. The idea is simple: file system cryptography encrypts either the entire file system of a host or sensitive directories in that file system. The big rush toward network security has been predicated on the assumption that servers have all the juicy information.

Although it is certainly true that servers are critical resources in need of protection, stealing a portable computer can provide an attacker with equally sensitive information for a lot less effort. File system cryptography should be deployed in most situations where it is viable. This generally means that mobile user systems are the top priority. Servers can benefit as well, but the performance cost must be weighed against the fact that servers usually sit in a physically secure location.
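File system cryptography such as EFS is enabled through the operating system rather than written by hand, but the underlying idea can be sketched in a few lines: encrypt file contents under a key tied to the user, so a stolen laptop or disk yields only ciphertext. The Python sketch below uses the third-party cryptography package's Fernet construction purely as an illustration; it is not how EFS works internally, and the file name and contents are invented.

```python
from pathlib import Path
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Create a throwaway "sensitive" file for the illustration.
plaintext_path = Path("quarterly-forecast.txt")        # invented name
encrypted_path = Path("quarterly-forecast.txt.enc")
plaintext_path.write_bytes(b"acquisition targets, salary data, ...")  # invented contents

# In a real deployment the key is derived from or protected by the user's
# credentials; here it is simply generated for the sketch.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt: whoever steals the disk sees only the .enc ciphertext.
encrypted_path.write_bytes(cipher.encrypt(plaintext_path.read_bytes()))
plaintext_path.unlink()  # remove the cleartext copy

# Decrypt: only a holder of the key recovers the original contents.
recovered = cipher.decrypt(encrypted_path.read_bytes())
print(recovered)
```

The hard part in production, as with EFS, is key management: the key must be recoverable by the legitimate user (and often by a corporate recovery agent) yet remain inaccessible to whoever walks off with the hardware.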

Cryptography Summary

Table 4-28 shows the summary scores for the cryptography options.

Table 4-28. Cryptography Summary

Attack Element                   L2 Crypto   Network Crypto   L5 to L7 Crypto   File System Crypto
Detection                        0           0                0                 0
Prevention                       176         178              148               154
Bypass                           3           5                5                 4
Ease of network implementation   2           2                5                 5
User impact                      5           5                3                 3
Application transparency         3           3                2                 4
Maturity                         3           3                4                 3
Ease of management               2           3                4                 4
Performance                      3           3                3                 4
Scalability                      3           3                3                 4
Affordability                    3           3                4                 4
Overall                          91          96               88                91

The various cryptographic options all scored well, but network crypto earns the overall high score primarily because of its flexibility. As with the host security options, most larger organizations use all of them in different parts of the network: L2 crypto for wireless, network crypto for VPNs, session crypto for key applications and management channels, and file system crypto (hopefully) for most mobile PCs.
