INFRASTRUCTURE DESIGN: CONNECTING THE MODULES

Once the component module requirements are defined, specific connecting media can be specified and accurate bandwidth calculations are possible to correctly scale those media. The need for specialized bandwidth managers can also be assessed.

Media Selection

LAN Media

In the context of on-demand access, the LAN resides in two places: inside the data center and inside the remote office. The data center LAN is potentially very complex, while the remote office LAN will be relatively simple, containing little more than a workgroup media concentration point, client devices (PCs or thin clients), and LAN peripherals (printers, storage devices, and so on).

  • Ethernet As originally defined (10BaseT, 10Base2), Ethernet was a shared media technology using Carrier Sense Multiple Access/Collision Detection (CSMA/CD) technology and both Layer 1 repeaters and multiport repeaters (hubs). These residual shared Ethernet environments define a large collision domain and, as such, suffer from performance limitations imposed by CSMA/CD. More specifically, collisions are a normal and expected part of half-duplex shared Ethernet, but they effectively limit performance to 35 percent of rated capacity. This means a 10 Mbps Ethernet segment is saturated at 3.5 Mbps throughput. Additionally, half-duplex operations increase latency due to interframe delays built in to the Ethernet standard to minimize collisions. In a switched infrastructure, modern network cards can operate full-duplex, allowing sustained operation at near 90 percent of capacity, both for send and receive (approaching 20 Mbps throughput; see the throughput sketch after this list).

  • Fast Ethernet Fast Ethernet is also referred to as 100BaseT to indicate that it provides for a transmission standard of 100 Mbps across the LAN. The 100BaseT standard is backward compatible with 10 Mbps Ethernet. Many vendors tout "dual-speed hubs" capable of simultaneous support for both shared Ethernet and shared Fast Ethernet; however, in a mixed speed environment, the Ethernet bus must arbitrate to the rate of the station transmitting. Multispeed hub performance can be worse than a pure 10 Mbps environment due to excessive bus arbitration. Use of dual-speed hubs is strongly discouraged. On the other hand, dual-speed switched Ethernet does not incur this penalty. As of this writing, switched Fast Ethernet is the de facto standard for LAN technology with regard to client connectivity. Cabling must meet EIA/TIA Category 5 standards to guarantee reliable Fast Ethernet connectivity.

  • Gigabit Ethernet This transmission standard provides for sending one billion bits per second across the LAN. It has also been incorporated into the standard Ethernet specification (802.3z) and uses the same Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, same frame format, and same frame size as its predecessors. Gigabit Ethernet is always switched rather than shared and is normally full-duplex. Generically referred to as 1000Base-X, transmission media can be fiber optic (1000BaseSX [multimode] or 1000BaseLX [single mode]), unshielded twisted pair (1000BaseT over 4-pair Category 5 cable), or shielded twisted pair (1000BaseCX over 2-pair 150-ohm shielded cable). Gigabit Ethernet is rapidly emerging as the de facto standard for data center server connectivity to the data center core, and for connecting LAN access layer aggregation points to the core or distribution layer.

  • 10-Gigabit Ethernet The latest extension of the IEEE 802.3 hierarchy, the "10GBASE-" standards provide for sending ten billion bits per second across the LAN. 10GBASE Ethernet defines multiple transport media types: 10GBASE-SR and 10GBASE-SW using short wavelength (850 nm) multimode fiber (MMF) at 2 to 300 meters; 10GBASE-LR and 10GBASE-LW using long wavelength (1,310 nm) single-mode fiber (SMF) at 2 to 10,000 meters (10 km); 10GBASE-ER and 10GBASE-EW using extra long wavelength (1,550 nm) single-mode fiber (SMF) at 2 meters to 40 kilometers; and 10GBASE-LX4, which uses wavelength division multiplexing (WDM) to send four wavelengths of light carried in a single fiber pair. The 10GBASE-LX4 operates at 1,310 nm over MMF (2 to 300 meters) or SMF (2 meters to 10 km). Finally, the IEEE 802.3ae standard includes direct integration with SONET via the "WAN interface sublayer" (WIS), which allows 10GBASE equipment to operate at 9.58464 Gbps for direct connection to SONET OC-192 carriers in the STS-192c format.

  • Token Bus/Token Ring Token Bus is similar to Ethernet but uses a different method to avoid contention. Instead of listening to traffic and detecting collisions, it attempts to control the sequence of which nodes use the network at what time. The node holds a "token" and passes it on to the next node when it is finished transmitting. Any node can receive a message, but no node can transmit unless it holds a token. Token Bus networks are laid out in a serial bus fashion with many nodes daisy-chained together. Token Ring, on the other hand, is implemented in a ring topology. The main difference between the two is how the token is handled. In Token Ring, the token becomes part of the packet. With Token Bus, the packet is a separate message that is passed after a node has finished transmitting. Token Ring networks have many of the advantages of Ethernet and even started out with higher possible bandwidth (about 16 Mbps). However, Ethernet is now the unquestioned standard. Token Ring is usually part of a legacy network connecting mainframes, minicomputers, or other IBM equipment.

  • ATM for LAN Asynchronous Transfer Mode was once viewed as an alternative to Ethernet-based technologies in a LAN environment. Manufacturers originally touted "ATM to the desktop" as the future of high-bandwidth access. This never materialized in large-scale desktop deployments for many reasons, not the least of which is cost. ATM network cards and associated network devices are still far more expensive than their Ethernet-based counterparts, and less widely available. Two primary factors have contributed to the loss of interest in ATM in LANs: the rising speed and falling cost of Ethernet-based technologies (10 Gbps Ethernet is available now), and the extreme complexity of internetworking LAN segments using ATM LAN Emulation (LANE).

  • F/CDDI Fiber/Copper Distributed Data Interface is a 100 Mbps LAN topology designed to operate over optical cabling (FDDI) and standard copper cabling (CDDI). Both use a media access protocol similar to Token Ring (token passing) and employ a dual counter-rotating ring topology. As a metropolitan area network (MAN) backbone, FDDI is very attractive and performs exceptionally under high-load conditions. Maximum ring distances for FDDI are up to 200 km for a single-ring topology and 100 km for a dual-ring topology. CDDI rings are limited to 100 m.
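To make the shared-versus-switched arithmetic in the Ethernet discussion concrete, the following minimal sketch applies the chapter's rules of thumb (the 35 and 90 percent factors come from the text; the function name is ours):

    def effective_throughput_mbps(rated_mbps, mode):
        """Approximate usable Ethernet throughput using the chapter's rules
        of thumb: shared (half-duplex) CSMA/CD segments saturate near 35
        percent of rated capacity; switched full-duplex links sustain
        roughly 90 percent of capacity in each direction."""
        if mode == "shared":
            return rated_mbps * 0.35
        if mode == "switched":
            return rated_mbps * 0.90 * 2  # send + receive combined
        raise ValueError("mode must be 'shared' or 'switched'")

    print(effective_throughput_mbps(10, "shared"))    # 3.5 Mbps
    print(effective_throughput_mbps(10, "switched"))  # 18.0 Mbps, "approaching 20"

The same arithmetic explains why a single switched Fast Ethernet port comfortably carries what would saturate several shared 10 Mbps segments.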

WAN Media

The wide area network (WAN) is the vehicle for transporting data across the enterprise. In an on-demand access environment, the design of the WAN infrastructure is crucial to the IT enterprise. It is essential to create a WAN design that is robust, scalable, and highly reliable in order to protect the value of the data that must flow across the WAN. Interconnecting media types for WAN services include

  • Frame relay Frame relay service is available virtually worldwide. It employs virtual circuits (usually permanent virtual circuits [PVCs]) mapped at Layer 2 over T3, T1, FT1, or 56K connections. Multiple PVCs can be carried over a single physical (for example, T1) access facility and aggregate bandwidth of all PVCs can exceed the physical media bandwidth (oversubscription). For example, four 512 Kbps PVCs can be provisioned over a single T1 access line. Individual PVCs can be provisioned with a guaranteed transmission rate called the committed information rate (CIR) and the ability to burst above this rate, on demand, if frame relay network bandwidth is available. Burst traffic is not guaranteed and can be ruthlessly discarded. For WAN connectivity, the combination of all thin-client (RDP, ICA) traffic and packetized voice or video traffic must be less than the CIR to ensure reliable performance. Further, QoS restrictions cannot be applied to traffic rates above CIR. If physical media are significantly oversubscribed (greater than 1.5:1), thin-client performance may be degraded (a sizing sketch follows this list).

  • Point-to-point serial Point-to-point serial service is available in many formats, including 56 Kbps, fractional T1 service (FT1) in multiples of 64 Kbps from 64 to 1,472 Kbps, and full T1 (1.544 Mbps, 1.536 Mbps usable). Dedicated point-to-point circuits have been around for a long time and are often the most cost-effective means when short distances are involved. These circuits can either be leased from a service provider or local telephone company (Telco) or be completely private if the company owns the copper cable facilities (see Figure 6-12).

    Figure 6-12: Frame relay vs. T-1/E-1 point-to-point connections

  • ATM Asynchronous Transfer Mode combines the best features of the older packet-switched networks and the newer frame-switched networks. ATM's small (53-byte) cell-based protocol data unit and advanced inherent management features make ATM the most flexible and predictable technology currently available. Data rates for tariffed ATM services are based on T1, T3, or Synchronous Optical Network (SONET) physical media with ATM virtual circuit equivalents provisioned much like frame relay. SONET optical carrier levels range from OC-1 (51.840 Mbps) through OC-48 (2.48832 Gbps). ATM delivers variable bandwidth and allows direct integration with other WAN services such as frame relay and xDSL. ATM is also the defined multiplex layer standard for SONET and the basis for future technologies such as broadband ISDN (B-ISDN, see Figure 6-13).

    Figure 6-13: ATM data center network connected to frame relay

  • Integrated Services Digital Network (ISDN) ISDN was announced in the late 1970s as a way to provide simultaneous voice and data over the same line. ISDN uses the same basic copper wiring as plain old telephone service (POTS), but its basic rate interface (BRI) offers two 56 Kbps or 64 Kbps bearer channels and one 16 Kbps data channel (2B+D). B-channels carry the data payload (digital data or digitized voice), while the D-channel executes call control and management. For higher-demand environments, the ISDN primary rate interface (PRI) offers 23 standard B-channels and one 64 Kbps D-channel over a single T1 facility. B-channels can be used individually or bonded together. ISDN is a point-to-point technology and provides deterministic, but expensive, bandwidth. ISDN BRI is commonly used as a dial-on-demand backup for dedicated frame relay circuits (see Figure 6-14).

    Figure 6-14: ISDN BRI and PRI structure

  • Digital Subscriber Line (DSL) Various flavors of DSL are available in most areas, but without ATM to the data center or a value-added Internet service provider (ISP), the DSL circuits must terminate at a service provider for Internet access only. Telcos provide Asymmetric DSL (ADSL) at various rates based on physical loop distances from their central office (CO) to the customer premises. ADSL is low cost and, as the name implies, has asymmetric bandwidth, meaning less upstream capacity than downstream capacity. Symmetric DSL (SDSL) is normally provided by specialized service providers with their equipment co-located at the telco CO. SDSL can reach greater distances and often higher speeds than ADSL, but at three to six times the monthly cost. IDSL, a form of "unswitched" ISDN in which the connection is permanent between the customer premises and the CO, provides speeds equivalent to ISDN BRI, but without the high usage-based billing. The downside of IDSL is that you have an ISDN line that can't call anywhere. Providers charge for DSL, like frame relay, according to bandwidth, but they seldom provide guaranteed performance as is done for frame relay's CIR. More recent telco offerings (at a higher price) include ADSL with business-class service-level agreements for throughput and availability. DSL is very low cost compared to other options but is generally only usable as Internet access unless you have ATM to the data center or work with a value-added ISP. Value-added ISPs can terminate multiple circuit types (ADSL, IDSL, and so on) from remote offices and provide consistent bandwidth via any circuit type to the data center, in effect becoming an offsite WAN access and distribution layer.

  • Cable modem Cable modems connect to the existing cable TV (CATV) coaxial network to provide new services such as Internet access to subscribers. Speeds can reach a theoretical 36 Mbps, but end-node technology (such as a network interface card) does not yet exist to take advantage of this speed. Speeds of 2 Mbps to 10 Mbps are more common. The service is asymmetrical in its current implementation, with download speeds that are far faster than upload speeds, and raw bandwidth that's shared among a large number of users. Historically, cable providers did not guarantee service level or repair times, and cable modems were suitable only for very small offices or home offices, or where no cost-effective competing technology was available. Recent improvements in the quality of the cable infrastructure have allowed providers to offer "business class" offerings, including managed access devices, service level guarantees, and static IP addresses.

  • Internet/VPN Though not a "medium" in the same sense as the other technologies discussed here, the Internet does provide an alternative connectivity option. A virtual private network (VPN) uses the Internet as a valid network infrastructure option for connecting small or remote offices and telecommuters.

  • Multi-Protocol Label Switching (MPLS) Although not a "medium" per se, MPLS is a service provider offering that creates a virtual network, at either Layer 2 or Layer 3, within the service provider's backbone network. MPLS delivers highly scalable, differentiated, end-to-end IP services with simple configuration, management, and provisioning for providers and subscribers over virtually any access medium (frame relay, point-to-point, ATM). MPLS as a transport architecture allows enterprises to off-load provisioning and traffic engineering (for QoS and redundancy) to the service provider and provides isolation of the enterprise routing tables from the remainder of the service provider's network.
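As a back-of-the-envelope check of the frame relay guidance above, the following sketch encodes the two sizing rules from that bullet (the 1.5:1 threshold and the keep-guaranteed-traffic-under-CIR rule come from the text; the function names are ours):

    def pvc_fits(cir_kbps, voice_video_kbps, thin_client_kbps):
        """Per-PVC rule: voice/video plus thin-client (ICA/RDP) traffic must
        stay below the CIR, since frames sent above CIR are discard-eligible."""
        return voice_video_kbps + thin_client_kbps < cir_kbps

    def oversubscription_ok(pvc_cirs_kbps, access_line_kbps):
        """Port-level rule: aggregate CIR beyond roughly 1.5x the access line
        rate is the chapter's threshold for likely thin-client degradation."""
        return sum(pvc_cirs_kbps) / access_line_kbps <= 1.5

    # The text's example: four 512 Kbps PVCs over a single T1 access line
    print(oversubscription_ok([512] * 4, 1536))  # True: 2048/1536 is about 1.33:1
    print(pvc_fits(512, voice_video_kbps=80, thin_client_kbps=300))  # True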

Planning Network Bandwidth

Planning network bandwidth may seem like an obvious need, but it is often skipped because it is difficult to predict the normal bandwidth utilization of a given device or user on the network. However, by using modeling based on nominal predicted values, bandwidth requirements can be accurately projected. When planning network bandwidth, keep the following guidelines in mind:

  • Point-to-point WAN links are saturated when they reach 70 to 80 percent of rated capacity; in other words, do not plan to push more than 1.2 Mbps of traffic over a point-to-point T1.

  • Frame relay and ATM connections are saturated when they reach 90 percent of rated capacity per virtual circuit. Additionally, exceeding the CIR means transport of data packets is not guaranteed.

  • Allow 25 percent additional bandwidth for any VPN link.

  • Always calculate required voice or video bandwidth first, add thin-client session bandwidth, and then add bandwidth for all remaining services that must use the link (routing protocols, time service, Internet browsing, mail services, Windows domain traffic, printing, and so on). On links that are never primary access for Internet services (Web, FTP, mail, streaming media), 30 percent additional bandwidth above and beyond voice/video and thin client requirements is a good starting figure.

  • WAN bandwidth per thin-client user is nominally 30 Kbps, depending on application usages and graphics. Plan based on concurrent connections, not total user population.

  • Printing inside the thin-client session will add up to an additional 20 Kbps per concurrent printing connection. Concurrent printing usually equals less than 20 percent of all concurrent sessions.

  • On links that provide primary access to Internet services, all available bandwidth can be consumed by Internet service traffic. Plan baseline thin-client bandwidth as mentioned previously, adding at least 50 percent for Internet service access, and plan for bandwidth management to protect thin-client bandwidth allocations. (Use Table 6-1 as a reference; a short calculator sketch follows the table.)

    Table 6-1: WAN Bandwidth Calculation Worksheet

    Concurrent Users | Bandwidth per User | Base Citrix Bandwidth | ICA Printing | ICA Printing Bandwidth | Total Citrix Bandwidth | Primary Internet | Excess Bandwidth | Required Bandwidth | WAN Media Type | Load Factor | Service
    30 | 30 Kbps | 900 Kbps | Yes | 180 Kbps | 1080 Kbps | No  | 30% (324 Kbps) | 1424 Kbps | Pt-Pt       | 70% | E1 (2.048 Mbps)
    25 | 30 Kbps | 750 Kbps | No  | 0 Kbps   | 900 Kbps  | No  | 30% (300 Kbps) | 1050 Kbps | Pt-Pt       | 70% | T1 (1.544 Mbps)
    30 | 30 Kbps | 900 Kbps | Yes | 180 Kbps | 1080 Kbps | No  | 30% (324 Kbps) | 1424 Kbps | VPN         | 75% | 2 Mbps ATM VC
    30 | 30 Kbps | 900 Kbps | Yes | 180 Kbps | 1080 Kbps | No  | 30% (324 Kbps) | 1424 Kbps | Frame relay | 90% | 1.6 Mbps CIR
    30 | 30 Kbps | 900 Kbps | No  | 0 Kbps   | 900 Kbps  | No  | 30% (300 Kbps) | 1200 Kbps | Frame relay | 90% | 1,344 Kbps CIR
    30 | 30 Kbps | 900 Kbps | No  | 0 Kbps   | 900 Kbps  | Yes | 50% (450 Kbps) | 1350 Kbps | Frame relay | 90% | 1,536 Kbps CIR
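The worksheet arithmetic is easy to automate. The following minimal sketch applies the planning figures above (30 Kbps per user, 30 or 50 percent headroom, and the per-media load factors); the function names are ours, the worksheet appears to budget ICA printing at 20 percent of the base figure, and some printed rows round their totals up slightly:

    def required_wan_kbps(users, printing=False, primary_internet=False):
        """Reproduce one row of the Table 6-1 worksheet."""
        base = users * 30                         # 30 Kbps per concurrent user
        print_bw = base * 0.20 if printing else 0 # printing budget, per the table
        citrix_total = base + print_bw
        headroom = 0.50 if primary_internet else 0.30  # excess bandwidth
        return citrix_total * (1 + headroom)

    def media_capacity_kbps(required_kbps, load_factor):
        """Media saturate below rated capacity, so divide the requirement by
        the load factor (0.70 point-to-point, 0.90 frame relay, 0.75 VPN)."""
        return required_kbps / load_factor

    req = required_wan_kbps(30, printing=True)  # 900 + 180 + 324 = 1404 Kbps
    print(media_capacity_kbps(req, 0.70))       # ~2006 Kbps: fits an E1 (2,048)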

Bandwidth Management

In most thin-client WAN environments, "calculated" bandwidth should provide optimal performance, but it seldom does. Even strict corporate policies on acceptable use of bandwidth cannot protect thin-client bandwidth when the network administrator downloads a large file or a user finds a new way to access music and media-sharing sites. These unpredictable behaviors can degrade Citrix services to remote users by causing bandwidth starvation or excessive latency. Several technologies are available to more tightly control bandwidth utilization and assure responsive service environments: Layer 2 CoS and queuing, Layer 3 QoS and queuing, router-based bandwidth managers (Cisco's NBAR), and appliance-based bandwidth managers (Packeteer). Each of these has its respective strengths and weaknesses, but all share several characteristics. All must have a mechanism for differentiating more-important traffic from less-important traffic, a process called classification. Traffic may or may not be "marked" (tagged with its particular priority), but subsequent network devices must be able to recognize the classifications and apply policies or rules to prioritize or constrain specific traffic types.

When applying bandwidth management technologies to WAN traffic flows, the following general rules apply:

  • Do not prioritize any traffic above network management traffic. This usually is a factor only on Layer 2 CoS implementations where data (digitized voice and application frames) are incorrectly tagged as priority 7. Management and control information (STP, VLAN status messages) must not compete with user traffic.

  • Digitized voice and video have a very high priority. They must have instantaneous bandwidth and the lowest possible latency through Layer 2 and 3 devices (priority queuing).

  • Thin-client user access has a high priority. ICA/RDP traffic has the same priority as character-interactive terminal emulation traffic. Although such traffic is far more graphical than "green screen" applications, performance is perceived the same by users. If a user presses "enter" and doesn't get a timely application response (as fast as a local session), thin-client performance is deemed unacceptable. Not all components of the ICA data stream should be handled equally for traffic prioritization. ICA Priority Packet Tagging (over TCP/IP) applies tags to the various ICA virtual channels within the stream, and these must be correctly interpreted to prioritize ICA traffic flows properly. For example, the virtual channel for seamless windows screen update (CTXTWI) is tagged as a priority "0" and is critical to the user experience, while the printer-mapping virtual channels are tagged as a priority "3" and can be given less preference for network bandwidth access and transport latency.

  • Mission-critical applications such as ERP packages should receive a higher priority than personal productivity applications or Web applications such as browsers and FTP.

  • Average utilization of network resources should be high, thus saving money by avoiding unnecessary upgrades.

  • Rules for bandwidth utilization or bandwidth blocking should be by application, user, and group.

Tip 

More bandwidth, not less, is needed when migrating users to a new network. It is likely that the old and new networks will have to share some of the same network segments while users are moved from the old network to the new data center network. Tasks such as user data migration, interim file server reassignment, and "backhauling" user data to legacy systems not yet on the new network can all add up to an increased bandwidth need. Some of this is unavoidable, but some of the need can be mitigated with careful planning and staging of which systems will be migrated in which order. An organization that does not underestimate these bandwidth needs runs a much lower risk of unhappy users before the project even gets started.

Layer 2 CoS and Queuing Applying Layer 2 CoS prioritization to LAN traffic has several weaknesses: it is only locally significant (CoS tags are frame-based and not transported across Layer 3 boundaries); granular control, by application or service, is not widely supported; and most applications are incapable of originating traffic tagged with CoS values. Several vendors provide network interface cards capable of applying CoS and QoS tags to frames or packets, but this feature is simply on or off and cannot differentiate application layer traffic. Microsoft's Generic Quality of Service (GQoS) API allows software developers to access CoS and QoS features through the Windows Server 2003 operating system. However, the API is not widely supported, and only a limited number of Microsoft multimedia applications currently use the GQoS API. Most Layer 2 network devices have one or two input queues per port and up to four output queues. Out of the box, all traffic is routed through the default queue (low priority) on a first-in/first-out basis. CoS can be applied to frames at the source or upon entry to the switch to redirect the output to use a higher-priority queue. Higher-priority queues are always serviced (emptied) first, reducing latency. In an on-demand access paradigm, there is little to be gained from accelerating frames through the network at Layer 2.

Layer 3 QoS and Queuing Quality of Service at Layer 3 begins with classifying traffic by access list (standard or extended), protocol (such as URLs, stateful protocols, or Layer 4 protocol), input port, IP Precedence or DSCP, or Ethernet 802.1p Class of Service (CoS). Traffic classification using access lists is processor-intensive and should be avoided. Once traffic is classified, it must be marked with the appropriate value to ensure end-to-end QoS treatment is enforced. Marking methods are: three IP Precedence bits in the IP Type of Service (ToS) byte; six Differentiated Services Code Point (DSCP) bits in the IP ToS byte; three MPLS Experimental (EXP) bits; three Ethernet 802.1p CoS bits; and one ATM cell loss priority (CLP) bit. In most IP networks, marking is accomplished by IP Precedence or DSCP.

Finally, different queuing strategies are applied to each marked class. Fair queuing (FQ) assigns an equal share of network bandwidth to each application. An application is usually defined by a standard TCP service port (for example, port 80 is HTTP). Weighted fair queuing (WFQ) allows an administrator to prioritize specific traffic by setting the IP Precedence or DSCP value, but the Layer 3 device automatically assigns the corresponding queue. WFQ is the default for Cisco routers on links below 2 Mbps. Priority queuing (PQ) classifies traffic into one of the predefined queues: high-, medium-, normal-, and low-priority. The high-priority traffic is serviced first, then medium-priority traffic, followed by normal- and low-priority traffic. PQ can starve low-priority queues if high-priority traffic flows are always present. Class-based weighted fair queuing (CBWFQ) is similar to WFQ but with more advanced differentiation of output queues; no guaranteed priority queue is allowed. Finally, low-latency queuing (LLQ) is the preferred method for prioritizing thin-client traffic at Layer 3. LLQ can assign a strict priority queue with static guaranteed bandwidth to digitized voice or video, assign multiple resource queues with assured bandwidth and preferential treatment, and allow a default queue for "all other" traffic. Queuing works well in a network with only occasional and transitory congestion. If each and every aspect of a network is precisely designed and it never varies from the design baseline, queuing will provide all of the bandwidth management thin clients require. Absent a perfect network, queuing has the following characteristics and limitations (a small starvation simulation follows this list):

  • Queuing requires no special software on client devices.

  • Packets delayed beyond a time-out period in queues get dropped and require retransmission, causing more traffic and more queue depth.

  • Queuing manages only outbound traffic, assuming the inbound traffic has already come in over the congested inbound link. To be effective, queuing methods must match at both ends of the link. When dealing with Internet connections, queuing is generally ineffective.

  • Queuing has no flow-by-flow QoS mechanism.
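To illustrate the starvation risk noted above for strict priority queuing, here is a small self-contained simulation; the traffic classes, arrival rates, and one-packet-per-tick link are invented for illustration:

    from collections import deque

    voice, default = deque(), deque()
    sent = {"voice": 0, "default": 0}

    for tick in range(1000):
        voice.append("v")           # saturating high-priority arrivals
        if tick % 2 == 0:
            default.append("d")     # default traffic arrives every other tick
        # The link transmits one packet per tick, priority queue always first.
        if voice:
            voice.popleft()
            sent["voice"] += 1
        elif default:
            default.popleft()
            sent["default"] += 1

    print(sent)  # {'voice': 1000, 'default': 0}: the default class is starved

LLQ avoids exactly this failure mode by policing the strict priority queue to its configured bandwidth, which is why it is the preferred method for prioritizing thin-client traffic.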

Router-Based Bandwidth Management Cisco's Network-Based Application Recognition (NBAR) provides intelligent network classification coupled with automation of queuing processes. NBAR is a Cisco IOS classification engine that can recognize a wide variety of applications, including ICA, Web-based applications, and client/server applications. Additional features allow user-specified application definitions (by port and protocol). Current-generation NBAR packet descriptor language modules (PDLM) can be added to the default NBAR profile and allow NBAR to prioritize ICA traffic according to virtual channel priority packet tagging. Once the application is recognized, NBAR can invoke the full range of QoS classification, marking, and queuing features, as well as selectively drop packets from the network. Although it is "application aware," NBAR relies on cooperating devices to collectively implement QoS policies, and remains an "outbound" technology.

Appliance-Based Bandwidth Managers (TCP Rate Control) TCP rate control provides a method to manage both inbound and outbound traffic by manipulating the internal parameters of the TCP sliding window. TCP rate control evenly distributes packet transmissions by controlling TCP acknowledgments to the sender. This causes the sender to throttle back, avoiding packet discards when there is insufficient bandwidth. As packet bursts are eliminated in favor of a smoothed traffic flow, overall network utilization is driven up as high as 80 percent. In a network without rate control, typical average utilization is around 40 percent. TCP rate control operates at Layer 4, performing TCP packet and flow analysis, and above Layer 4, analyzing application-specific data (a sketch of the underlying window arithmetic follows the two lists below). TCP rate control has the following advantages:

  • Works whether applications are aware of it or not.

  • Reduces packet loss and retransmissions.

  • Drives network utilization up as high as 80 percent.

  • Provides bandwidth management to a specific rate (rate-based QoS).

  • Provides flow-by-flow QoS.

  • Provides both inbound and outbound control.

  • Prevents congestion before it occurs.

On the other hand, TCP rate control has the following limitations:

  • Not built into any routers yet.

  • Only works on TCP/IP; all other protocols get queued.

  • Currently available from only a few vendors.
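The mechanism rests on simple window arithmetic: a TCP sender can have at most one advertised window of unacknowledged data in flight per round trip, so throughput is bounded by window size divided by round-trip time. The sketch below shows the window an in-path appliance would advertise to cap a flow at a target rate (the function name and example values are ours):

    def advertised_window_bytes(target_rate_bps, rtt_seconds):
        """Throughput <= window / RTT, so rewriting the receiver's advertised
        window to target_rate * RTT caps the sender at target_rate without
        dropping packets."""
        return int(target_rate_bps / 8 * rtt_seconds)

    # Cap one ICA session at 20 Kbps over an 80 ms round trip:
    print(advertised_window_bytes(20_000, 0.080))  # 200 bytes in flight

Because the sender is throttled by acknowledgment pacing and window size rather than by queue overflow, flows are smoothed instead of clipped, which is how rate control pushes average utilization toward the 80 percent figure cited above.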

Packet prioritization using TCP rate control is a method of ensuring that general WAN traffic does not interfere with critical or preferred data. Using packet prioritization, thin-client traffic can be given guaranteed bandwidth, which results in low perceived latency and speedy application performance, and contributes to a high level of user satisfaction in the on-demand access environment.

Packeteer created the category of hardware-based TCP rate control appliances with its PacketShaper product. Other manufacturers, including Sitara and NetReality, offer competing technologies, but Packeteer products were selected for an in-depth discussion.

In a simple deployment, a PacketShaper (shown in Figure 6-15) is an access layer device that sits between the router or firewall and the LAN infrastructure, and proactively manages WAN traffic to ensure that critical applications receive the bandwidth they require. For Citrix environments, the bandwidth manager resides at remote sites with large enough bandwidth requirements to justify the expense. A PacketShaper is always placed inside the site router so that it can manage the traffic flow before routing. In a large network, there is also value in placing a PacketShaper at the data center to control Internet services bandwidth and protect Internet-based remote users from being degraded by main site users surfing the Web. In this configuration, individual traffic flows cannot be managed; however, good traffic (thin clients, IPsec) can be given somewhat preferential treatment, and less-critical traffic flows can be throttled to ensure bandwidth remains available for thin-client flows. Though it is not possible to manage individual sessions this way, it is possible to create partitions for particular types of traffic. The flow-by-flow management happens in the PacketShapers at the edge of the network. There are several PacketShaper models available, and they are priced by the amount of bandwidth they are capable of managing. Packeteer has recently added new features, including the ability to manage enterprise-wide devices from a central policy center.

Figure 6-15: Network with a Packeteer PacketShaper

  • Bandwidth per session With the PacketShaper it is possible to set a policy that will guarantee 20 Kbps of bandwidth for each ICA session. This has a few important effects. First, each session is protected from every other session. A user browsing animated Web pages over ICA still gets only 20 Kbps and perceives decreased performance, while another user accessing office applications or e-mail notices no difference, and that user's sessions remain responsive. Second, no user ever gets a session with less than 20 Kbps. When the network is near saturation and insufficient WAN bandwidth is available, the PacketShaper stops the session from being created rather than allowing it in a degraded environment (see Figure 6-16; an admission sketch follows Figure 6-18).

    Figure 6-16: Denied session request

  • Partitioning Partitions allow the administrator to logically "carve up" the available bandwidth and assign portions to each application or type of traffic, as shown in Figure 6-17. For example, in a frame relay circuit with a port speed of 1.544 Mbps, you might assign 80 percent of bandwidth to ICA traffic, 10 percent to HTTP, and 10 percent to LPR/LPD for printing. If any portion is not being fully utilized, the PacketShaper can allow the other partitions to share its available bandwidth.

    Figure 6-17: Bandwidth partitioning

  • Prioritization Prioritization is the simplest of the three options. Prioritization allows you to assign a number between 1 and 7 to a traffic flow, 1 being the highest. In this case, as utilization of the available bandwidth increases, the PacketShaper uses its own algorithms to make sure Priority 1 traffic gets more "slices" of bandwidth than Priority 3, as shown in Figure 6-18.

    Figure 6-18: Bandwidth prioritization
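A minimal sketch of the per-session and partition arithmetic described above (the 20 Kbps guarantee and 80 percent ICA partition come from the text; the function is ours, not Packeteer's API):

    def admit_ica_session(active_sessions, link_kbps,
                          per_session_kbps=20, ica_partition=0.80):
        """Admit a new ICA session only if its guaranteed 20 Kbps still fits
        inside the ICA partition; otherwise deny it outright rather than
        create it in a degraded state."""
        ica_budget_kbps = link_kbps * ica_partition
        return (active_sessions + 1) * per_session_kbps <= ica_budget_kbps

    # A 1,544 Kbps circuit with 80 percent partitioned to ICA (~1,235 Kbps)
    print(admit_ica_session(60, 1544))  # True: session 61 fits the partition
    print(admit_ica_session(61, 1544))  # False: session 62 would be denied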

Tip 

Of the three methods discussed in this section, session-based policies and partitions are recommended for ICA traffic. A session-based policy that guarantees 20 Kbps but allows bursts of up to 50 Kbps is ideal for ICA. However, such a policy can be implemented only when the PacketShaper can control the inbound and outbound traffic, which means it cannot be done over the Internet. In such a case, a partition policy can be used. Depending on the size of the network pipe, it could, for example, be guaranteed that 50 percent of the bandwidth is available to ICA. The remaining bandwidth could be left "unmanaged," or partitions could be defined for the most common, known protocols such as HTTP and Telnet. Priority-based packet shaping with ICA should be avoided simply because it makes it harder to predict the behavior of a PacketShaper. This is because a priority is not absolute and relies on some fairly complex algorithms to shape the traffic. Partitions and session policies are more rigid, and therefore more predictable and easier to administer.

A limitation of packet prioritization is that print traffic (and resulting print output speed) may be reduced because bandwidth is guaranteed to ICA traffic. Users may find this delay unsatisfactory. If so, one may choose to increase WAN bandwidth to allow more room for print traffic. Printing is a complex issue in this environment and is discussed in more detail in Chapter 18. Another potential problem with packet prioritization is that Internet browsing speed may be reduced because of the guaranteed bandwidth reserved for ICA traffic. Our experience has shown that Internet browsing that includes rapid screen refresh rates appears to substantially increase ICA bandwidth requirements, sometimes to as much as 50 Kbps, although Citrix has made great strides in fixing this through improved client caching and graphics compression techniques. Few companies consider Web browsing to be mission-critical (quite the opposite, it seems), so this might not be a problem.

Packeteer in Action Figure 6-19 shows a sample report output from a Packeteer unit configured to monitor a small business Internet link. The customer relies on Citrix to provide applications to remote branch offices via the Internet. The main site (data center) has a 1.5 Mbps SDSL circuit to a local Internet service provider (ISP). The first graph shows poor response time for the customer's ERP/financial application (NAVISION) deployed over Citrix. Although server response times are somewhat suspect, network latency drives the total response time well above the recommended threshold of 500 ms.

Figure 6-19: Packeteer analysis report

The second graph shows that "bursty" HTTP traffic is consuming virtually all of the available WAN bandwidth, and that the bursts coincide with delays in Citrix response times. Graph three shows total (link) bandwidth consumption, and the final chart shows that HTTP consumes 48 percent of all available bandwidth, with HTTP and WinMedia accounting for nearly two-thirds of all bandwidth. The Packeteer's TCP rate control can "partition" the Internet pipe to ensure HTTP cannot deny Citrix access to needed bandwidth. As an added benefit, the Packeteer analysis proved that the ISP was only providing 1 Mbps of available bandwidth, not the 1.5 Mbps circuit the customer paid for. The ISP agreed to rebate $2,500.00 in fees for substandard services.


