Network Security Design Elements That Impact Performance


Almost every decision you make concerning a network's security infrastructure impacts its performance. From the choice of which firewall to field, to the architecture of the network, many factors combine (sometimes in complex ways) to affect the network's overall performance. When designing your network, you need to understand what the individual performance impact of each design element is so that you can predict what the cumulative impact will be for the resulting network.

The Performance Impacts of Network Filters

Because perimeter defense relies so heavily on packet filters and firewalls to protect networks, it makes sense to start our performance discussion with them. Network filters perform the important job of determining which packets should be allowed to enter our networks. The process they go through to make this decision takes time, and the amount of time taken directly impacts the latency of the packet. Assuming similarly performing hardware, the more complex the decision that needs to be made, the longer it will take to reach the decision. From a performance point of view, we should prefer algorithms that can make simple, quick decisions. However, substantial security advantages result from performing more complex analysis of incoming packets. The following four filtering techniques demonstrate this security/performance tradeoff:

  • Packet filters

  • Stateful firewalls

  • Proxy firewalls

  • Content filters

Packet Filters

As we discussed back in Chapter 2, "Packet Filtering," packet filters are one of the most basic forms of network filters. All decisions are made based on the contents of the packet header with no reference back to previous packets that have been received. The most common fields on which to filter are source IP address, source port, destination IP address, destination port, and status flags. A typical filter rule might be to allow connections to TCP port 80 on a web server. Assuming the web server is at IP address 192.168.1.5, the Cisco ACL would look like this:

 access-list 110 permit tcp any host 192.168.1.5 eq 80 

To make the filtering decision, the packet filter need only examine the packet's header for the destination IP address (192.168.1.5) and destination port (80). No other information is required. This is a simple operation that takes little processing time to complete, resulting in little added latency to the packet. Be careful when adding access lists to routers that already have high CPU utilization. The added load, minor as it might be, could cause the router to start dropping packets.
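To illustrate how simple this decision is, the following Python sketch models a stateless packet filter against a hypothetical rule list. It is illustrative only, not how any particular product is implemented; the rule and packet fields are assumptions for the example.

```python
# Illustrative sketch of a stateless packet filter: every decision is made
# from header fields alone, with no memory of previous packets.
from typing import NamedTuple, Optional

class Packet(NamedTuple):
    proto: str
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

class Rule(NamedTuple):
    action: str              # "permit" or "deny"
    proto: str
    dst_ip: str              # "any" matches everything
    dst_port: Optional[int]  # None matches any port

def filter_packet(rules: list[Rule], pkt: Packet) -> str:
    """Return the action of the first matching rule; deny by default."""
    for rule in rules:
        if rule.proto != pkt.proto:
            continue
        if rule.dst_ip != "any" and rule.dst_ip != pkt.dst_ip:
            continue
        if rule.dst_port is not None and rule.dst_port != pkt.dst_port:
            continue
        return rule.action
    return "deny"  # implicit deny, as on a Cisco ACL

# The ACL above, expressed as a one-rule list:
acl_110 = [Rule("permit", "tcp", "192.168.1.5", 80)]
print(filter_packet(acl_110, Packet("tcp", "10.0.0.1", 40000, "192.168.1.5", 80)))  # permit
print(filter_packet(acl_110, Packet("tcp", "10.0.0.1", 40000, "192.168.1.5", 22)))  # deny
```

Note that the work per packet is a handful of field comparisons, which is why packet filters add so little latency.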

Tip

Apply access lists on incoming interfaces to minimize the performance impact. When they are applied on an incoming interface, the router does not need to make redundant routing decisions on packets that are being rejected.


When security decisions can be made based on small amounts of information, performance impacts are small. Unfortunately, as we explained in Chapter 2, several problems exist with packet filters, including problems protecting against spoofed packets and difficulties allowing return traffic back into the network without opening up unintended holes in the filter. To address these deficiencies, the filtering device must perform additional work.

Stateful Firewalls

Stateful firewalls (covered in Chapter 3, "Stateful Firewalls") address some of packet filtering's shortcomings by introducing a memory to the packet filter in the form of an ongoing connections table. Performance is affected by two additional tasks that the firewall must perform on this table. First, when a new packet is allowed through the firewall, the firewall determines whether the packet is the start of a new network conversation. If it is, an entry is added to the ongoing connections table. Second, when the firewall receives the return packet, a lookup must be performed to find the corresponding entry in the ongoing connections table before access can be granted. As you can imagine, on a busy firewall, this table can grow large. As the table grows, the time necessary to locate entries also grows, increasing the time necessary to make the access decision. Stateful firewall vendors employ hash algorithms to reduce this overhead, but it cannot be completely eliminated.
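The two table operations can be sketched as follows. The 5-tuple key and the Python dict here stand in for the vendor-specific hash tables mentioned above; this is a simplified model, not any product's actual implementation.

```python
# Simplified sketch of a stateful firewall's connection table.
conn_table = {}  # key: (proto, src, sport, dst, dport) -> state

def outbound(proto, src, sport, dst, dport):
    """Task 1: an allowed outbound packet creates a state entry."""
    conn_table[(proto, src, sport, dst, dport)] = "ESTABLISHED"

def inbound_allowed(proto, src, sport, dst, dport):
    """Task 2: a return packet is allowed only if it matches an entry.
    Note the reversed key: the reply's source is the original destination."""
    return (proto, dst, dport, src, sport) in conn_table

outbound("tcp", "192.168.1.10", 40000, "203.0.113.5", 80)
print(inbound_allowed("tcp", "203.0.113.5", 80, "192.168.1.10", 40000))  # True
print(inbound_allowed("tcp", "203.0.113.9", 80, "192.168.1.10", 40000))  # False
```

A dict lookup is fast on average, but a real firewall's table holds many thousands of entries, and hash collisions and table maintenance keep the overhead from ever reaching zero.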

Tip

Most stateful firewall vendors allow you to tune the firewall by increasing the size of the connection table and corresponding hash table. Increasing these settings can noticeably increase performance and is recommended if your firewall can support the increased memory requirements.


If performance is still unacceptable, some products allow you to disable the memory feature on a per-rule basis. Using this feature increases performance, but at the cost of reduced security.

Proxy Firewalls

Proxy firewalls have the potential to make the best security decisions, but they are the worst performing of the three types of network filters we have discussed so far. Why use them? As we described in Chapter 4, "Proxy Firewalls," proxy firewalls place themselves in the middle of network conversations. Clients connect first to the proxy firewall, and the proxy firewall makes the request to the server on the client's behalf. To accomplish this, the proxy firewall must maintain two network connections for every ongoing network conversation. Every ongoing connection requires its own data record, which records the source and destination of the connection and the current protocol state. When a packet is received, the proxy must examine the data portion of the packet and determine whether it is making a valid request based on that protocol state. This enables the proxy to deny requests that are invalid or out of sequence for the current protocol. For instance, according to RFC 821 (the RFC for Internet mail), during an SMTP session, a mail from: command should precede a rcpt to: command. By understanding the normal states that a protocol should support, the proxy can reject requests that would break the state model of the protocol; in this case, by rejecting any rcpt to: commands that are issued before a mail from: command has been received. Although this approach has a substantial security benefit, it comes at the expense of high memory and CPU requirements. Careful consideration should be given to the choice of the firewall's hardware platform, especially when filtering high-bandwidth traffic.
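The SMTP ordering check described above can be sketched as a small state machine. The states and command set here are a deliberately minimal assumption for illustration, not a complete SMTP implementation:

```python
# Hedged sketch: how a proxy might enforce SMTP command ordering
# (MAIL FROM must precede RCPT TO). Minimal state model, not full SMTP.
VALID_NEXT = {
    "START":   {"HELO"},
    "GREETED": {"MAIL FROM"},
    "MAIL":    {"RCPT TO"},
    "RCPT":    {"RCPT TO", "DATA"},
}
TRANSITION = {"HELO": "GREETED", "MAIL FROM": "MAIL",
              "RCPT TO": "RCPT", "DATA": "DATA"}

def proxy_session(commands):
    """Return the list of (command, verdict) decisions for a session."""
    state, results = "START", []
    for cmd in commands:
        if cmd in VALID_NEXT.get(state, set()):
            state = TRANSITION[cmd]
            results.append((cmd, "forwarded"))
        else:
            results.append((cmd, "rejected"))  # out-of-sequence command
    return results

# An RCPT TO issued before MAIL FROM is rejected; the same command
# issued later, in proper sequence, is forwarded:
print(proxy_session(["HELO", "RCPT TO", "MAIL FROM", "RCPT TO"]))
```

Even this toy version shows where the cost comes from: the proxy must parse application-layer data and track per-connection state, work that a packet filter never performs.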

Content Filters

Content filters protect a network by attempting to detect malicious activity in the content of a packet. Email filters are the most common type of content filters, but they also exist for other types of network traffic. As described in Chapter 10, "Host Defense Components," most of these tools are signature based. They contain a large database of malicious activity signatures. When a new message is received, the content filter must search the message to determine whether it contains malicious signatures. The amount of time this takes is dependent on the size of the signature database, the size of the message, and the speed of the computer performing the filtering.

Signature databases are guaranteed to grow over time. Removing signatures from them would provide little benefit; the result would simply be known attacks that the filter could no longer detect. As the size of the signature database increases, the performance of the content filter decreases. It is important to remember this when determining the appropriate hardware on which to host your content filter.
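A naive sketch makes the scaling visible: scan time grows with both message size and signature count. The signature strings below are hypothetical placeholders, and real products use optimized multi-pattern matching (Aho-Corasick and similar) to soften, though not eliminate, this growth.

```python
# Naive signature-based content filter: check every signature against
# every message. Cost is roughly O(signatures x message length).
signatures = ["X5O!P%@AP", "evil-macro", "cmd.exe /c"]  # hypothetical entries

def scan(message: str) -> list[str]:
    """Return every signature found in the message."""
    return [sig for sig in signatures if sig in message]

print(scan("attached invoice runs cmd.exe /c del ..."))  # ['cmd.exe /c']
print(scan("quarterly report, nothing unusual"))         # []
```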

Another important factor to consider is the percentage of malicious traffic to normal traffic. When a malicious message is detected, most content filters need to perform additional work, such as sending out an alert or writing a log entry. This extra work is not an issue when most messages are normal. However, when the number of malicious messages becomes large, this extra work can undermine the ability of the content filter to function. It is possible for an attacker to exploit this problem by sending a large volume of malicious messages to induce a denial of service.

Network Architecture

Because a network that starts out slow will only become slower if we add security devices, it is useful to discuss ways that network design affects network performance. The following four network design issues, which can directly impact the overall performance of a network, are discussed in depth in the subsequent sections:

  • Broadcast domains

  • WAN links

  • TCP/IP tuning

  • Routing protocols

Broadcast Domains

Broadcast messages are the most expensive type of packet in terms of performance that can be transmitted on a network. Two reasons explain this. First, when a normal packet is sent out on a network, most hosts can ignore the packet if it is not being sent to them. However, every host in the same broadcast domain must process a broadcast packet, even if the packet is not relevant to it. As discussed in Chapter 13, "Separating Resources," each host in the same broadcast group must take a small performance hit for every broadcast packet received. Second, broadcast messages consume some of the network bandwidth from every host in the same broadcast domain, even when the network is implemented using network switches. Normally, network switches are able to intelligently direct packets to the intended recipient without consuming network bandwidth from other connected hosts. However, this is not possible with broadcast packets because they are supposed to be sent to all hosts.

Tip

To minimize the performance impact of broadcast messages, keep the number of hosts in a broadcast group as small as practical and try to eliminate or replace protocols that generate many broadcast messages.


WAN Links

WAN links allow geographically distributed networks to be connected. They are normally implemented using circuits that are provided by common carriers such as telephone companies, which charge a recurring fee to provide the WAN service. WAN services are sold in a wide variety of capacities, and the fee for service can grow large for high-bandwidth connections. Table 17.1 lists some of the most common circuit types.

Table 17.1. Bandwidths for Common Circuit Types

  Circuit Type                     Bandwidth
  -----------------------------    -------------------
  Dial-up modem                    9.6-56Kbps
  Switched 56                      56Kbps
  ISDN BRI                         128Kbps
  Cable modem                      Approximately 1Mbps
  Digital Subscriber Line (DSL)    256-768Kbps
  T1                               1.544Mbps
  T3                               45Mbps
  OC3                              155Mbps
  OC12                             622Mbps
  OC48                             2.488Gbps


When establishing a WAN connection, it is essential that you carefully analyze the bandwidth requirements for the connection. If you order too small a circuit, the network performs unacceptably. If you order too large a circuit, you waste your company's money. Finding an appropriate balance can be tricky, especially when large price increases exist between levels of service. If circuit prices are reasonable in your area, always opt for more bandwidth than you think you will need. You will always find a use for it later. If larger circuits are too expensive, though, you will need to make a careful price-versus-performance analysis that will require a detailed understanding of your WAN performance requirements.

After the appropriate circuit has been set up, it is important not to waste bandwidth over the connection. Following are some tips to help you avoid unnecessary WAN usage:

  • Do not bridge the networks because this forces all broadcast traffic on both networks to flow over the WAN link. Route between them instead.

  • Do not make one network reliant on the other for basic network services. For example, if you are running a Windows network, place a domain controller on each side of the circuit.

  • Use network filters to restrict WAN traffic to essential connections.

  • Cache frequently referenced materials locally. Just as Internet providers use web caches to improve performance, any frequently referenced materials should be cached locally to prevent redundant WAN usage to retrieve the same information.

  • Try to schedule batch jobs that send large amounts of data over the WAN link for periods of low activity.

TCP/IP Tuning

Many TCP/IP stacks are not set up to perform optimally by default. To get the most performance possible, examine some of the specific controls available for your servers to optimize TCP/IP performance for your environment. Following are some specific issues for which you should look:

  • Maximum transmission units (MTUs) If you transmit a packet with too large an MTU, it might have to be fragmented to reach its destination. This adds substantial latency to the packet, and in some cases, it might prevent delivery. To address this, many manufacturers set the MTU to the smaller of 576 and the MTU of the outbound interface. This is normally small enough to avoid fragmentation, but it can significantly reduce transfer efficiency. RFC 1191 describes a method to dynamically discover the MTU. It works by sending out packets with the Do Not Fragment bit set. When a compliant router receives one of these packets but cannot forward the packet because the next link has a smaller MTU, it sends back an Internet Control Message Protocol (ICMP) error message that includes a new recommended MTU size. This allows the transmitting host to reduce the MTU to the largest value that will still permit the packet to reach its destination without being fragmented. Most manufacturers support RFC 1191, but it might not be turned on by default.

    Note

    Some networks block all ICMP messages at the network's border. This can break several TCP/IP protocols, including the mechanism on which RFC 1191 relies to function. For an RFC 1191-compliant operating system to determine a correct MTU, it must be able to receive ICMP type 3 (Destination Unreachable), code 4 (Fragmentation Needed and Don't Fragment Was Set) packets. If all ICMP messages are filtered, the sending host assumes that the current MTU is supported, which might cause unnecessary packet fragmentation. In some implementations, it might disable communications entirely if the Do Not Fragment bit is set on all outgoing packets. For this reason, it is important to carefully consider which types of ICMP messages should be allowed into and out of your network instead of just creating a generic "deny all" ICMP rule.


  • Window size The TCP window value determines how much TCP data a host can transmit prior to receiving an acknowledgement. This is part of TCP's error-correction mechanism. When the value is small, errors in transmission are quickly detected. This is good if the circuit is unreliable. For reliable circuits, a larger TCP window size is more appropriate. TCP is designed to dynamically vary the window size to adjust for circuit quality. This mechanism works well for reasonably high-performance circuits. When circuit speeds are extremely high (greater than 800Mbps), the maximum window size is exceeded and performance suffers. To address this, RFC 1323 was proposed. RFC 1323 adds extensions to TCP to support extremely high-performance networks, including an increase in the maximum window size.

    Tip

    When working with extremely high-performance networks, you should use operating environments that support the RFC 1323 extension. Examples of operating systems that support RFC 1323 include AIX 4.1, HP-UX, Linux (kernels 2.1.90 or later), Microsoft Windows (2000 and above), and Sun Solaris (versions 2.6 and above).


  • Socket buffer size The send and receive socket buffers hold data during transmission until an acknowledgment for the data has reached the transmitting host. At this point, the acknowledged data can be flushed from the buffer, allowing new data to be transmitted. When these buffers are too small, performance suffers because the connection between the two hosts cannot be filled up completely with data. The amount of data that will fit between two hosts is directly related to the bandwidth and latency between the hosts. It can be calculated by the following formula:

    amount of data = bandwidth * roundtrip time delay

    The resulting value is the optimum size for the transmit and receive buffers for communication between the two hosts. Of course, different values would be obtained for conversations between different hosts. Currently, TCP does not support calculating this value dynamically, so a reasonable maximum should be chosen. Some applications allow the users to specify the buffer sizes, but this is uncommon. For most applications, the only way to increase the buffer sizes is to increase the system-level defaults. This should be done with care because it causes all network applications to use additional system memory that they might not require. Keep in mind that you will only gain a performance increase if both hosts are using sufficiently sized buffers.
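The formula above can be put to work with some assumed example numbers (a T1 circuit with a 70ms round-trip time), along with the converse calculation of how much throughput a fixed window permits:

```python
# Buffer sizing arithmetic. Example numbers (T1 at 70 ms RTT) are
# assumptions for illustration.

def optimal_buffer_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """bandwidth * round-trip time, converted from bits to bytes."""
    return bandwidth_bps * rtt_seconds / 8

def window_limited_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """With a fixed window, at most one window of data flows per RTT."""
    return window_bytes * 8 / rtt_seconds

buf = optimal_buffer_bytes(1_544_000, 0.070)
print(f"optimal buffer: {buf:.0f} bytes")  # ~13,510 bytes

# A 64KB window is ample at 70 ms, but at a 1-second RTT it caps the
# connection well below T1 speed:
print(window_limited_throughput_bps(65_535, 1.0))  # ~524,280 bps
```

The second function is worth keeping in mind for the satellite case study later in this chapter: when the round-trip time is large, the window, not the circuit, becomes the bottleneck.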

Routing Protocols: RIP Versus OSPF

Routing is the process of deciding how to deliver a packet from one network to another. To deliver a packet, each router between the source and destination devices must know the correct next-hop router to which to send the packet. Routers maintain routing tables to make these decisions. These tables can be configured in one of two ways: by manually entering routing information into each router or by using a routing protocol. Routing protocols are designed to automatically determine the correct routes through a network by exchanging routing information with neighboring routers.

Two of the most common routing protocols in use on LANs are Routing Information Protocol (RIP) and Open Shortest Path First (OSPF). RIP is included as a standard routing protocol on most routers and was the first major routing protocol for TCP/IP. RIP is easily implemented and normally works just by turning it on. For this reason, many manufacturers use it as the default routing protocol. This is unfortunate because RIP suffers from many deficiencies, including substantial performance problems.

RIP has two major performance problems. First, it cannot make routing decisions based on bandwidth. RIP uses hop-count as its metric to determine the shortest path between two networks. Paths with lower hop-counts are preferred over paths with higher hop-counts, even if the bandwidth of the lower hop-count path is much lower. For an extreme example, see Figure 17.1. Router A has a 100Mbps connection to Router B, which has a 100Mbps connection to Router C. In addition, Router A has a 128Kbps connection to Router C. If Host 1 attempts to send a packet to Host 2, the preferred path from a performance standpoint would be A-B-C, but RIP would choose A-C, forcing the packet to travel across the extremely slow link.

Figure 17.1. RIP networks occasionally make poor routing choices.


RIP's second problem is that it has an inefficient method of sharing routing information across a network. Every 30 seconds, routers that are running RIP must broadcast their entire routing table to each neighboring router. On large networks, these tables are big and can consume a substantial amount of network bandwidth. If the network includes slow network links, this route information can bring performance across the link to a crawl. In addition, RIP has other deficiencies that can make it a poor choice as your routing protocol. The most significant of these is that it takes a relatively long time for RIP routes to propagate throughout the network. In some cases, this can mean that RIP might not stabilize when routes are changing rapidly.

OSPF was created to provide a nonproprietary routing protocol that addressed the deficiencies of earlier routing protocols, such as RIP. OSPF includes the ability to represent a cost for each interface, which allows it to make decisions based on the bandwidth differences between paths. OSPF is significantly more efficient when sharing routing information across the network. It transmits routing changes only when they occur instead of broadcasting them at fixed intervals. The improvements that OSPF brings do come at the cost of implementation complexity. OSPF networks are more difficult to implement than RIP networks.
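The difference between the two metrics can be demonstrated with a shortest-path calculation over the Figure 17.1 topology. The OSPF-style costs below follow the common convention of reference bandwidth divided by link bandwidth (here 100Mbps / 128Kbps, roughly 781); the exact cost values are an assumption for illustration.

```python
# Why the routing metric matters, using the Figure 17.1 topology:
# A-B and B-C are 100Mbps links; A-C is a 128Kbps link.
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm; graph maps node -> {neighbor: metric}."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric in graph[node].items():
            if nbr not in seen:
                heapq.heappush(heap, (cost + metric, nbr, path + [nbr]))
    return None

# RIP metric: every link costs 1 hop.
hops = {"A": {"B": 1, "C": 1}, "B": {"A": 1, "C": 1}, "C": {"A": 1, "B": 1}}
# OSPF-style metric: reference bandwidth (100Mbps) / link bandwidth.
cost = {"A": {"B": 1, "C": 781}, "B": {"A": 1, "C": 1}, "C": {"A": 781, "B": 1}}

print(shortest_path(hops, "A", "C"))  # (1, ['A', 'C'])      - the 128Kbps link
print(shortest_path(cost, "A", "C"))  # (2, ['A', 'B', 'C']) - the fast path
```

Hop count sends the traffic over the slow link because it is "closer"; a bandwidth-aware cost prefers the two-hop, high-speed path.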

Case Studies to Illustrate the Performance Impact of Network Security Design Elements

Following are two case studies to illustrate some of the concepts covered so far. In the first, we examine performance issues that might occur when connecting a field office to corporate headquarters. In the second, we examine performance problems that can crop up when packet latency becomes high.

Case Study 1: Two Networks Connected Using 128K ISDN

The network presented in Figure 17.2 is a classic example of a field office connection to corporate headquarters. This office is responsible for processing sales orders for its local region, and its network has been connected to corporate headquarters over an ISDN BRI WAN circuit. The network's main resource is a file server. It contains product information, customer lists, and other materials that the field office uses to sell the company's products to its clients. Most of the sales transactions can be performed locally; however, when a new sale is processed, real-time inventory quantities must be queried from a server at corporate headquarters. Every hour, collected sales order information is uploaded to the corporate database.

The field office network is modest and is organized as a single network subnet, which contains a dozen PCs, three network printers, and the file server. The network used to be a collection of different operating systems, including Windows 95, Windows NT, NetWare 4, and Mac OS 7. This changed when all systems, including the file server, were swapped out for Windows 2003 systems.

The corporate headquarters network is large, with dozens of subnets. It is mainly a TCP/IP network, but IPX is also in use due to some infrequently used Novell file servers. In addition, the marketing department has a few Mac OS X systems configured to run AppleTalk. This was the simplest method the marketing staff could find to enable continued access to the printers when it upgraded from earlier Macintosh systems.

Figure 17.2. Case study 1 discusses the performance issues in low-bandwidth WAN connections.


The field office has been complaining for months about sporadic performance problems on its network. The workers are concerned with slow response time when checking inventory levels because it makes it difficult for them to respond appropriately to their customers. This problem is intermittent, but it is frequent enough to affect business. The company has asked you to determine what is causing the performance problem.

Given this information, what type of questions should you ask? You should focus your attention on several clues. The field office has a small-bandwidth connection back to corporate headquarters. It would not take much unnecessary traffic to affect performance. What could some of the sources of unnecessary traffic be? Look for unnecessary protocols. In addition to TCP/IP, the corporate network is running IPX and AppleTalk. Because the field office used to have NetWare and Mac OS systems, it is possible that IPX and AppleTalk were once in use. It is reasonable to assume that the routers might be configured to pass this traffic. Now that the field office has standardized on Windows 2003, these extraneous protocols are no longer needed. Reconfiguration of systems relying on legacy protocols can significantly reduce network traffic and improve performance.

Another potential source of traffic is the routing protocol. The example did not mention which was being used, but it is highly possible that the office is using RIP. Because corporate headquarters has a large number of networks, RIP routing announcement packets will be large. Should you have the field office switch to something more efficient, such as OSPF? In this case, that might be overkill. Because the field office router has only two possible places to send packets, it might be better to use a static route.

Why is the problem sporadic? Could it be the hourly transmission of sales information to corporate headquarters? If this information does not need to be available immediately, perhaps it would be better to hold off transmission of the sales information until after business hours.

The last issue you might look at is the size of the circuit. Perhaps the office has outgrown its current bandwidth. In this case, the best course of action would be to purchase additional circuits or to upgrade to a faster type of circuit.

Case Study 2: Satellite-Based Network

The network shown in Figure 17.3 collects weather data at several remote sites across the continental United States. These data-collection sites have been in place for many years. Each uses an old Solaris 2.5 workstation connected serially to a weather data collection unit, as well as a router connected to a satellite radio. The workstations send data on weather conditions whenever a change from the last transmitted value is detected. When conditions are stable, the rate of transmission is low. However, when conditions are variable, the data rate can be high. The equivalent of T1-sized circuits has been purchased to address this problem. However, even with the large satellite circuits, the scientist working on the project still experiences packet loss during rapidly changing weather conditions. You have been asked to determine what the problem is.

Where do you begin? One of the major attributes of a satellite-based network circuit is its high latency. It takes a long time for a signal to travel up to geosynchronous orbit (approximately 36,000 kilometers above the Earth) and back down again. Even when bandwidth is high, this added time can cause performance problems if the network protocol is not set up to handle the added delay. If TCP is the protocol being used to transmit the data, it is possible that the packet buffers are too small or the maximum window size has been reached. Either or both of these issues could cause reduced performance.

Figure 17.3. Case study 2 discusses performance issues with large latency applications.


To determine this, we must calculate the amount of data needed to fill our satellite pipe. Based on the given circuit bandwidth of 1.544Mbps and the roundtrip time of 1 second for a satellite circuit, our bandwidth-delay product is 1,544Kb, or close to 200KB. It turns out that Solaris 2.5's default packet buffer size is 256KB. This is large enough to prevent delays; however, Solaris 2.5 is not RFC 1323 compliant. The maximum window size for non-RFC-1323 systems is 64KB. This is more than likely the problem because it would prevent us from making maximum use of our circuit's bandwidth. To address the problem, we would need to upgrade our Solaris system to a version that supports RFC 1323. Solaris versions from 2.6 on are RFC 1323 compliant.
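The arithmetic behind this conclusion can be checked directly:

```python
# Case study arithmetic: bandwidth-delay product for a T1-speed satellite
# circuit with a 1-second round-trip time, versus the 64KB window ceiling
# of a non-RFC-1323 TCP stack.
bandwidth_bps = 1_544_000   # T1
rtt = 1.0                   # seconds, geostationary round trip

bdp_bytes = bandwidth_bps * rtt / 8
print(f"pipe holds {bdp_bytes / 1024:.0f} KB")  # ~188 KB

max_window = 64 * 1024      # 65,536 bytes without window scaling
achievable = max_window * 8 / rtt
print(f"window-limited rate: {achievable / 1_000_000:.2f} Mbps")  # ~0.52 Mbps
```

The pipe holds nearly 200KB, but a 64KB window lets only about a third of that be in flight at once, capping the connection at roughly 0.5Mbps regardless of how much satellite bandwidth was purchased.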



    Inside Network Perimeter Security (2nd Edition)
    ISBN: 0672327376
    Year: 2005
    Pages: 230