Performance and Security

Performance and security are not necessarily directly competing design goals. Although many of the steps you must perform to secure a network do have performance costs, it is your job to identify the design elements that add the required security while allowing the network to meet its performance goals.

Defining Performance

When your users complain of poor network performance, they could be referring to several distinct problems. For example, they could be experiencing long download times from their favorite FTP site, or they might be experiencing slow processing of their commands on a remote server. Different types of performance issues can cause each of these problems. Therefore, we begin our discussion of performance with a few definitions.

Network Bandwidth and Latency

Network bandwidth is a measure of how fast information can flow across a network segment. It is typically measured in bits per second (bps). A network that can transfer 500KB of information in 16 seconds has a bandwidth of 256Kbps (500 * 1024 * 8 / 16). This is a measure of how much information "fits" in the network in a given second.

Bandwidth is shared among the different devices that are attached to a network segment. Wide area network (WAN) links are normally limited to two devices, which means that all the bandwidth is available to transfer data between them; however, LANs can have hundreds of hosts competing for the network's bandwidth. On a LAN, the available bandwidth between two hosts might be much lower than the total network bandwidth if many other hosts are trying to transfer data at the same time. Fortunately, LAN bandwidth is cheap, and common LAN technologies have large bandwidth capacities. 100BASE-T Ethernet networks are extremely common, and they are rated for 100Mbps. This is in direct contrast to WAN links: a T1 circuit is relatively slow (1.544Mbps) compared to 100BASE-T Ethernet, and it can cost thousands of dollars per month.
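The unit conversion behind the 256Kbps figure is easy to get wrong, so here is the arithmetic from the example above as a small sketch (using 1KB = 1024 bytes and 1Kbps = 1000 bits per second, the usual networking conventions):

```python
def bandwidth_kbps(kilobytes: float, seconds: float) -> float:
    """Convert an observed transfer (KB moved in a number of seconds)
    into kilobits per second."""
    bits = kilobytes * 1024 * 8   # 1 KB = 1024 bytes, 8 bits per byte
    return bits / seconds / 1000  # 1 Kbps = 1000 bits per second

# The example from the text: 500KB transferred in 16 seconds
print(bandwidth_kbps(500, 16))  # 256.0
```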


Because bandwidth is shared between the devices that need to communicate across a network, when you're determining bandwidth requirements, it is important to factor in the number of simultaneous network conversations that will be occurring. The easiest way to control this is by limiting the total number of computers attached to the network.

Network latency is a measure of how long it takes for a packet to travel from one point in a network to another. It is frequently measured by sending a test packet to a host, which the host then returns to the sender. The roundtrip time is then calculated to determine the latency. Several contributing factors can add latency to a packet:

  • Propagation: This is the time it takes for the packet to travel from the start to the end of a particular transmission medium, and it is largely a function of distance. For example, neglecting other factors, a packet traveling from New York to California is going to have a larger propagation delay than a packet traveling from Manhattan to Queens.

  • Gateway processing: This is the time taken by each device between the transmitter and the receiver that must process the packet. As a packet travels between network segments, it might have to pass through routers, switches, firewalls, network address translation (NAT) devices, VPN gateways, and other types of network devices. Each of these devices takes time to process the packet.

  • Available bandwidth: A packet might have to travel across many network segments to reach its destination. The time it takes for the packet to travel across each segment is directly affected by that segment's available bandwidth.

  • Packet size: Larger packets take longer to transmit than smaller packets. This difference becomes more pronounced when available bandwidth is low or when a gateway device must examine the entire packet.


The ping command is frequently used to measure network performance, but it is important to note that it measures latency, not bandwidth. By default, ping transmits a small packet and then waits to receive it back from the destination device. This correctly determines the roundtrip time between the two devices, but it does not tell you anything about the bandwidth between them. Correctly measuring bandwidth requires that a much larger amount of data be transmitted between the two devices in an attempt to completely use up all available bandwidth between them. Ping can be told to transmit larger packets, but this is still insufficient to reach the transmission rates that are normally necessary to saturate the network. Other tools, such as ttcp, are specifically designed to test network bandwidth. Cisco IOS 11.2 and above implement a version of ttcp that is available as a privileged command, and ttcp is also freely available on many UNIX distributions.
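A quick back-of-the-envelope calculation shows why even large ping packets cannot saturate a link, and how much data a tool such as ttcp must push to do so. The figures below are illustrative, not from the text:

```python
def transmit_time_ms(payload_bytes: int, bandwidth_bps: float) -> float:
    """Time the payload alone occupies the wire, ignoring headers and latency."""
    return payload_bytes * 8 / bandwidth_bps * 1000

# One large (1400-byte) ping packet barely touches a 100BASE-T link...
print(round(transmit_time_ms(1400, 100_000_000), 3))  # 0.112 (ms)

# ...while keeping that link busy for a 10-second test takes ~125MB of data.
print(100_000_000 / 8 * 10 / 1_000_000)  # 125.0 (MB)
```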

Response Time

Response time is the amount of time it takes for a response to be received after a request is made. Response time is a function of the latency between the requester and the responder plus the processing time needed by the responder to calculate the response. Interactive protocols that require frequent bidirectional conversations, such as Telnet, are strongly affected by response time. Response time is primarily what determines whether users perceive a service to be fast or slow. Because of this, when a request is received that will take a long time to calculate, some services immediately return an intermediate response to the user to indicate that they are working on the request.
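The definition above, and why chatty interactive protocols suffer from it, can be sketched as follows (the timings are hypothetical):

```python
def response_time_ms(round_trip_latency_ms: float, processing_ms: float) -> float:
    """Response time as defined in the text: network latency plus the
    responder's processing time."""
    return round_trip_latency_ms + processing_ms

def session_time_ms(exchanges: int, rtt_ms: float, processing_ms: float) -> float:
    """An interactive protocol such as Telnet pays the full response time
    on every request/response exchange."""
    return exchanges * response_time_ms(rtt_ms, processing_ms)

print(response_time_ms(80, 250))      # 330 ms for a single request
print(session_time_ms(100, 80, 5))    # 8500.0 ms of waiting over 100 exchanges
```

This is why the same 80 ms of latency that is invisible in a bulk file transfer can make an interactive session feel sluggish.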


Throughput

Throughput is the measure of how much information can be reliably transmitted between two devices on a network. Throughput is principally a function of bandwidth, but it is also affected by protocol overhead. Applications that must transmit large amounts of data in a timely manner, such as file transfer protocols, videoconferencing, and Voice over IP (VoIP), are the most affected by throughput restrictions.
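The effect of protocol overhead on throughput is easy to quantify: only the payload portion of each frame carries user data. A hypothetical sketch, using 40 bytes of TCP/IP headers per 1500-byte packet as an illustrative overhead figure:

```python
def effective_throughput_bps(bandwidth_bps: float,
                             payload_bytes: int,
                             header_bytes: int) -> float:
    """Throughput after protocol overhead: the link carries headers and
    payload, but only the payload counts as useful data."""
    frame_bytes = payload_bytes + header_bytes
    return bandwidth_bps * payload_bytes / frame_bytes

# A T1 carrying 1460-byte payloads with 40 bytes of headers per packet:
print(round(effective_throughput_bps(1_544_000, 1460, 40)))  # 1502827
```

Even this simple model ignores acknowledgments and retransmissions, which reduce usable throughput further.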

Understanding the Importance of Performance in Security

It is worth spending some time discussing why we should care about performance when designing our security infrastructure. Remember, for a network to be secure, it must maintain confidentiality, integrity, and availability to its users. When performance is too low, the network fails to maintain availability. This can be true even when the service is still responding to requests. The performance of network services can directly impact the acceptability of those services. If we offer services securely, but at a service level below the users' tolerance, the services will not be used. Can we really consider the services secure when the users cannot or will not use them?

Of course, different applications have different performance requirements. It is an important part of the security-design process to identify the acceptable performance levels for the services on the network. This requires that you establish metrics to use to measure performance and that you determine acceptable values for each of these metrics. The following are some commonly used metrics:

  • Response time

  • Throughput

  • Maximum simultaneous users

  • Minimum availability (for example, 24x7x365)

  • Maximum downtime

  • Mean time between failure (MTBF)

Keep in mind that acceptable levels for each of your metrics will vary depending on the type and context of the request. For example, it is a commonly held rule of e-business that a visitor typically waits no longer than eight seconds for a web page to download. However, when placing an order, visitors are frequently willing to wait much longer than eight seconds to receive an order confirmation.
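When setting the availability and downtime metrics listed above, it helps to translate an availability percentage into concrete downtime. A small sketch of that conversion:

```python
def max_downtime_minutes_per_year(availability_percent: float) -> float:
    """Translate an availability target into the downtime it permits
    over a 365-day year."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1 - availability_percent / 100) * minutes_per_year

# "Three nines" still allows nearly nine hours of downtime per year:
print(round(max_downtime_minutes_per_year(99.9)))   # 526
print(round(max_downtime_minutes_per_year(99.99)))  # 53
```

Numbers like these make it much easier to negotiate whether a given availability target is actually acceptable to the business.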

    Inside Network Perimeter Security (2nd Edition)
    ISBN: 0672327376
    Year: 2005
    Pages: 230
