Performance and security are not necessarily competing design goals. Although many of the steps required to secure a network do carry performance costs, it is your job to identify the design elements that add the required security while still allowing the network to meet its performance goals.
When your users complain of poor network performance, they could be referring to several distinct problems. For example, they could be experiencing long download times from their favorite FTP site, or slow processing of their commands on a remote server. Each of these problems can stem from a different type of performance issue, so we begin our discussion of performance with a few definitions.
Network Bandwidth and Latency
Network bandwidth is a measure of how fast information can flow across a network segment. It is typically measured in bits per second (bps). A network that can transfer 500KB of information in 16 seconds has a bandwidth of 256Kbps (500 * 1024 * 8 / 16 = 256,000 bps). This is a measure of how much information "fits" in the network in a given second. Bandwidth is shared among the devices attached to a network segment. Wide area network (WAN) links are normally limited to two devices, which means that all the bandwidth is available to transfer data between them; LANs, however, can have hundreds of hosts competing for the network's bandwidth. On a LAN, the available bandwidth between two hosts might be much lower than the total network bandwidth if many other hosts are trying to transfer data at the same time. Fortunately, LAN bandwidth is cheap, and common LAN technologies have large bandwidth capacities. 100BASE-T Ethernet networks are extremely common and are rated for 100Mbps. This is in direct contrast to WAN links: a T1 circuit is relatively slow (1.544Mbps) compared to 100BASE-T Ethernet, and it can cost thousands of dollars per month.
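The arithmetic behind the 256Kbps figure can be checked with a short calculation, using the numbers from the example:

```python
# Bandwidth worked example: 500 KB transferred in 16 seconds.
KILOBYTE = 1024                               # 1 KB treated as 1024 bytes

bytes_transferred = 500 * KILOBYTE            # 512,000 bytes
bits_transferred = bytes_transferred * 8      # 4,096,000 bits
seconds = 16

bandwidth_bps = bits_transferred / seconds    # 256,000 bits per second
bandwidth_kbps = bandwidth_bps / 1000         # data rates use decimal prefixes

print(f"{bandwidth_kbps:.0f} Kbps")           # → 256 Kbps
```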
Because bandwidth is shared between the devices that need to communicate across a network, when you're determining bandwidth requirements, it is important to factor in the number of simultaneous network conversations that will be occurring. The easiest way to control this is by limiting the total number of computers attached to the network.
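To make the sharing concrete, here is a rough sketch of the average bandwidth each conversation sees on a busy segment. The host counts are illustrative assumptions, and the model ignores protocol overhead and uneven usage:

```python
def per_conversation_bandwidth(segment_bps: float, conversations: int) -> float:
    """Average share of a shared segment, assuming all conversations
    are equally active and ignoring protocol overhead."""
    return segment_bps / conversations

segment = 100_000_000  # 100BASE-T Ethernet: 100 Mbps
for n in (1, 10, 50):
    share = per_conversation_bandwidth(segment, n)
    print(f"{n:3d} simultaneous conversations -> {share / 1_000_000:.0f} Mbps each")
```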
Network latency is a measure of how long it takes a packet to travel from one point in a network to another. It is frequently measured by sending a test packet to a host, which the host then returns to the sender; the roundtrip time is then calculated to determine the latency. Several factors can add latency to a packet: the propagation delay of the physical medium, the transmission delay of placing the packet on the wire, queuing delays inside routers and switches along the path, and the processing delay at each hop.
The ping command is frequently used to measure network performance, but it is important to note that it is a measure of latency and not bandwidth. By default, ping transmits a small packet and then waits to receive it back from the destination device. This correctly determines the roundtrip time between the two devices, but it does not tell you anything about the bandwidth between the two devices. Correctly measuring bandwidth requires that a much larger amount of data be transmitted between the two devices to attempt to completely use up all available bandwidth between them. Ping can be told to transmit larger packets, but this is still insufficient to reach the transmission rates that are normally necessary to saturate the network. Other available tools, such as ttcp, are specifically designed to test network bandwidth. Cisco IOS 11.2 and above implement a version of ttcp that is available as a privileged command. The tool ttcp is also available free on many UNIX distributions.
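A back-of-the-envelope model shows why even large serial pings cannot saturate a fast link: each packet must complete a round trip before the next one is sent, so latency, not bandwidth, caps the transfer rate. The packet size and RTT below are illustrative assumptions:

```python
packet_bytes = 1500   # roughly the largest a default Ethernet MTU allows
rtt_ms = 10           # assumed 10 ms round-trip time

# Only one packet is in flight at a time, and it crosses the link twice
# (out and back), so the achievable rate is capped by the RTT.
max_bps = packet_bytes * 8 * 2 * 1000 / rtt_ms
print(f"Serial ping ceiling: {max_bps / 1_000_000:.1f} Mbps")   # → 2.4 Mbps

link_bps = 100_000_000  # 100BASE-T Ethernet
print(f"Fraction of the link used: {max_bps / link_bps:.1%}")   # → 2.4%
```

This is why a bandwidth tester such as ttcp instead streams a large volume of data with many packets in flight at once.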
Response time is the amount of time it takes for a response to be received after a request is made. It is a function of the latency between the requester and the responder plus the processing time the responder needs to calculate the response. Interactive protocols that require frequent bidirectional exchanges, such as Telnet, are especially sensitive to response time. Response time is primarily what determines whether users perceive a service to be fast or slow. Because of this, when a request that will take a long time to fulfill is received, some services immediately return an intermediate response to indicate that they are working on the problem.
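The definition above can be sketched as a simple sum; the latency and processing figures here are hypothetical, chosen to show which term dominates in each case:

```python
def response_time(round_trip_latency_ms: float, processing_ms: float) -> float:
    """Response time as defined in the text: network latency plus the
    responder's processing time. The figures passed in are illustrative."""
    return round_trip_latency_ms + processing_ms

# A Telnet keystroke echo: almost no processing, so latency dominates.
print(response_time(round_trip_latency_ms=200, processing_ms=1))     # 201 ms

# A report query on a busy server: processing dominates instead.
print(response_time(round_trip_latency_ms=200, processing_ms=3000))  # 3200 ms
```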
Throughput is a measure of how much information can be reliably transmitted between two devices on a network. Throughput is principally a function of bandwidth, but it is also affected by protocol overhead. Applications that must transmit large amounts of data in a timely manner, such as file transfer protocols, video conferencing, and Voice over IP (VoIP), are the most affected by throughput restrictions.
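The effect of protocol overhead can be estimated for a full-sized TCP segment over Ethernet. The header and framing sizes below are the standard ones (assuming no TCP or IP options), and the result is a best-case ceiling, not a measurement:

```python
# Best-case throughput of TCP/IP over 100BASE-T Ethernet for
# full-sized segments, accounting only for protocol overhead.
MTU = 1500
IP_HEADER = 20
TCP_HEADER = 20
# Ethernet framing: preamble 8 + header 14 + FCS 4 + interframe gap 12
ETHERNET_OVERHEAD = 38

payload = MTU - IP_HEADER - TCP_HEADER   # 1460 bytes of user data
wire_bytes = MTU + ETHERNET_OVERHEAD     # 1538 bytes on the wire

efficiency = payload / wire_bytes
link_bps = 100_000_000                   # 100 Mbps link

print(f"Efficiency: {efficiency:.1%}")   # → 94.9%
print(f"Ceiling: {efficiency * link_bps / 1_000_000:.1f} Mbps")
```

Small-packet applications such as VoIP fare far worse, because the same fixed overhead is paid on every packet while the payload shrinks.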
Understanding the Importance of Performance in Security
It is worth spending some time discussing why we should care about performance when designing our security infrastructure. Remember, for a network to be secure, it must maintain confidentiality, integrity, and availability to its users. When performance is too low, the network fails to maintain availability. This can be true even when the service is still responding to requests. The performance of network services can directly impact the acceptability of those services. If we offer services securely, but at a service level below the users' tolerance, the services will not be used. Can we really consider the services secure when the users cannot or will not use them?
Of course, different applications have different performance requirements. An important part of the security-design process is identifying the acceptable performance levels for the services on the network. This requires that you establish metrics to measure performance and that you determine acceptable values for each of these metrics. Commonly used metrics include the ones defined earlier in this section: bandwidth, latency, response time, and throughput.
Keep in mind that acceptable levels for each of your metrics will vary depending on the type and context of the request. For example, it is a commonly held rule of e-business that a visitor typically waits no longer than eight seconds for a web page to download. However, when placing an order, visitors are frequently willing to wait much longer than eight seconds to receive an order confirmation.