Quality of Service (QoS)

QoS relates to the use of design criteria, the selection of protocols, the determination of architectures, the identification of approaches, the choice of network restoration techniques, the design of node buffer management, and other network aspects. QoS ensures that end-to-end goals for congestion/availability, delay, jitter, throughput, and loss are reliably met over a specified time horizon and traffic load between any two chosen points in the network. These parameters are defined as follows (a brief computational sketch follows the list):[17]

  • Congestion  A network condition where traffic bottles up in queues to the point that it noticeably and negatively impacts the operation of the application.

  • Service availability  The reliability of users' connection through the network.

  • Delay  The time taken by a packet to travel through the network from one end to another.

  • Delay jitter  The variation in the delay encountered by similar packets following the same route through the network.

  • Throughput  The rate at which packets go through the network.

  • Packet loss rate  The rate at which packets are dropped, get lost, or become corrupted (some bits are changed in the packet) while going through the network.
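
The definitions above map directly onto simple measurements. The following is a minimal Python sketch, offered only as an illustration; the trace layout (sequence number, send/receive timestamps, packet size) and the particular jitter formula used are assumptions, not something specified in the text.

```
# Minimal sketch: computing the QoS parameters from a per-packet trace.
# The trace layout (seq, sent_time, recv_time in seconds, size_bytes) is an
# assumption for illustration; recv_time is None for packets that never arrived.

def summarize(trace):
    delivered = [p for p in trace if p["recv_time"] is not None]
    delays = [p["recv_time"] - p["sent_time"] for p in delivered]

    # Delay: one-way transit time per packet (assumes synchronized clocks).
    avg_delay = sum(delays) / len(delays)

    # Delay jitter: mean absolute variation between consecutive packets' delays
    # (one of several common definitions).
    jitter = sum(abs(a - b) for a, b in zip(delays[1:], delays)) / (len(delays) - 1)

    # Throughput: delivered bits over the observation window.
    window = max(p["recv_time"] for p in delivered) - min(p["sent_time"] for p in delivered)
    throughput_bps = 8 * sum(p["size_bytes"] for p in delivered) / window

    # Packet loss rate: fraction of packets that never made it through.
    loss_rate = 1 - len(delivered) / len(trace)

    return avg_delay, jitter, throughput_bps, loss_rate


trace = [
    {"seq": 1, "sent_time": 0.000, "recv_time": 0.031, "size_bytes": 200},
    {"seq": 2, "sent_time": 0.020, "recv_time": 0.055, "size_bytes": 200},
    {"seq": 3, "sent_time": 0.040, "recv_time": None,  "size_bytes": 200},  # lost
    {"seq": 4, "sent_time": 0.060, "recv_time": 0.093, "size_bytes": 200},
]
print(summarize(trace))
```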

The industry has been working on the QoS issue for a decade now, but relatively little deployment of QoS-enabled networks has been seen on extranets, intranets, carrier networks, or the Internet. There is no dearth of literature on the topic.[18] Obviously, the protocol cornucopia leaves something to be desired; otherwise, we would have seen a statistically significant penetration of these protocols in the tens of thousands of networks that are currently deployed. This material is based on various industry sources as well as a book on Internet technologies published by Minoli and Schmidt in 1999[19] that included an extensive treatment of QoS. Additional references that cover QoS include Schmidt and Minoli, Multiprotocol over ATM Building State of the Art ATM Intranets Utilizing RSVP, NHRP, LANE, Flow Switching, and WWW Technology.[20] A number of analytical design techniques for broadband networks were described in Minoli, Broadband Network Analysis and Design.[21]

No fewer than five approaches have evolved for QoS in recent years as follows:

  • Asynchronous Transfer Mode (ATM)-based QoS approaches

  • Overengineering the network, without using any special QoS discipline

  • Utilization of high-throughput gigarouters with advanced queue management, without using any special QoS discipline

  • Per-flow QoS technology, the IETF's Integrated Services (intserv) Working Group recommendations

  • Class-based QoS technology, the IETF's Differentiated Services (diffserv) Working Group recommendations

Some of these approaches reflect different philosophies regarding QoS. One school of thought believes in overprovisioning (assuming that the bandwidth exceeds demand); a second school of thought looks to traffic engineering (steering traffic away from congestion); a third school of thought looks to advanced queuing techniques where there is true contention for the resource (being considered scarce). Internet folks often take an approach of overprovisioning without much mathematically sophisticated analysis. Incumbent carriers often prefer robust (but complex) controls; however, they have focused more on Permanent Virtual Connection (PVC) networks (such as X.25 PVCs, Frame Relay PVCs, and ATM PVCs) rather than on switched/connectionless environments. In this chapter we will briefly look at the approach of advanced queue management and focus the discussion on intserv and diffserv.

QoS Basics

QoS is defined as those mechanisms that give network administrators the ability to manage traffic's bandwidth, delay, jitter, loss, and congestion throughout the network.[22] To realize true QoS, a QoS-endowed architecture must be applied end to end, not just at the edge of the network or at select network devices.[23] The solution must encompass a variety of technologies that can interoperate in such a way as to deliver scalable, feature-rich services throughout the network. The services must provide an efficient use of resources by facilitating the aggregation of large numbers of IP flows where needed while at the same time providing fine-tuned granularity to those premium services defined by service level agreements (SLAs) in general and real-time requirements in particular. The architecture must also provide the mechanisms and capabilities to monitor, analyze, and report detailed network status, since the need to continuously undertake traffic engineering, network tuning, and provisioning of new facilities is not going to go away given that the growth of the demand on the network will continue to be in the double-digit percentage points for years to come. Armed with this knowledge, network administrators or network monitoring software can react quickly to changing conditions, ensuring the enforcement of QoS policies. Finally, the architecture must also provide mechanisms to defend against the possibility of theft, to prevent denial of service, and to anticipate equipment failure.[24]

In general terms, QoS services in packet-based networks can be achieved in two possible ways:

  • Using out-of-band signaling mechanisms to secure allocations of shared network resources. This includes signaling for different classes of services in ATM and Resource Reservation Protocol (RSVP). It should be immediately noted, however, that RSVP only reserves, and does not provide, bandwidth. As such, it augments existing unicast/multicast routing protocols, IP in particular. In turn, IP will have to rely on Packet over SONET (POS), ATM (say via Classical IP Over ATM [CIOA]), or Generalized Multiprotocol Label Switching (GMPLS, optical switch control) to obtain bandwidth. This approach is used in the intserv model described later.

  • Using in-band signaling mechanisms where carriers and ISPs can provide a priority treatment to packets of a certain type. This could be done, for example, with the Type of Service (TOS) field in the IPv4 header, the Priority field in the IPv6 header, or the Priority field in the Virtual LAN (VLAN) IEEE 802.1Q/1p header. The MPLS label is another way to identify to the router/IP switch that special treatment is required. If routers, switches, and end systems all used or recognized the appropriate fields, if the queues in the routers or switches were effectively managed according to the priorities, and if adequate resources (buffers, links, backup routes, and so on) were provided in the network, then this method of providing QoS guarantees could be called the simplest. This is because no new protocols would be needed, the carrier's routers can be configured in advance to recognize labels of different types of information flows, and relatively little state needs to be kept in the network. This approach is used in the diffserv model described later (see the marking sketch following this list).
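
As an illustration of in-band marking from the sending host's side, the sketch below sets the IPv4 TOS/DS byte on a UDP socket using the standard IP_TOS socket option. The DSCP value chosen (Expedited Forwarding, 46) and the destination address/port are illustrative assumptions, and the marking only matters if the routers along the path are configured to honor it.

```
# Minimal sketch: in-band marking by setting the IPv4 TOS/DS byte on a socket.
# The DSCP value (EF = 46) and the destination address/port are illustrative assumptions.
import socket

EF_DSCP = 46                # Expedited Forwarding code point (RFC 3246)
tos_byte = EF_DSCP << 2     # the DSCP occupies the upper six bits of the former TOS field

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the stack to write this value into the TOS/DS byte of every outgoing datagram
# (honored on most platforms; routers must still be configured to act on it).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Destination address/port are illustrative (192.0.2.0/24 is a documentation prefix).
sock.sendto(b"voice payload", ("192.0.2.10", 4000))
sock.close()
```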

Specific tools available to the designer of an IP/MPLS network that is intended to support VoIP include the following:

  • intserv/RSVP  A bandwidth reservation mechanism targeted to enterprise networks (because of size considerations). RSVP is also being applied to MPLS label distribution and MPLS QoS.

  • diffserv  This associates a DSCP (diffserv code point) for every packet and defines per hop behaviors (PHBs).

  • MPLS  This defines label-switched paths (especially in the core of the network for aggregating traffic flows) that have different characteristics (link utilization, link capacity, the number of link hops, and so on). It utilizes the approach of mapping diffserv PHB in an access network to MPLS flows in a core network.

  • Traffic management mechanisms  This includes traffic shaping, marking, dropping, and queue handling. It also includes priority- and class-based queuing with disciplines such as Random Early Detection (RED) and other methods.

As noted earlier, two philosophical approaches satisfy the service requirements of applications:

  • Overprovisioning or overallocation of resources that meet or exceed peak load requirements.

  • Managing and controlling the allocation of network and computing resources.

Depending on the deployment, overprovisioning can be viable if it is a simple matter of upgrading to faster LAN switches and network interface cards (NICs), adding memory, adding a central processing unit (CPU) or disk, and so on. However, it may not be viable or cost-effective in many other cases, such as when dealing with relatively expensive long-haul WAN links. Overprovisioned resources remain underused and are utilized only during short peak periods. Better management consists of optimizing existing resources such as limited bandwidth, CPU cycles, and so on. VoIP stakeholders (carriers and intranet planners) have an economic incentive to deploy viable QoS capabilities so that an acceptable grade of service can be provided to the end users.[25,26]

QoS Approaches

Per-flow QoS  The IETF Integrated Services (intserv) Working Group has developed the mechanisms for link-level, per-flow QoS control, while RSVP is used for signaling. The intserv services are guaranteed service and controlled load service; these have been renamed by the International Telecommunication Union-Telecommunications (ITU-T) IP traffic control (Y.iptc) effort to 'delay sensitive statistical bandwidth capability' and 'delay insensitive statistical bandwidth capability,' respectively. (The ITU-T Y.iptc effort uses the intserv services and diffserv expedited forwarding.)

The intserv architecture[27] defines QoS services and reservation parameters to be used to obtain the required QoS for an Internet flow. RSVP[28] is the signaling protocol used to convey these parameters from one or multiple senders toward a unicast or multicast destination. RSVP assigns QoS with the granularity of a single application's flows.[29] The working group is now also looking at new RSVP extensions.

Signaling traffic is exchanged between routers belonging to a core area. After a reservation has been established, each router must classify each incoming IP packet to determine whether or not it belongs to a QoS flow and, if it does, assign the needed resources to the flow. The intserv classifier is based on MultiField (MF) classification: it checks five parameters in each IP packet, namely the source IP address, destination IP address, protocol ID, source transport port, and destination transport port. The classifier function generates a FLOWSPEC object.
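
The MultiField classification step can be pictured with a small sketch. The reservation-table layout and field names below are assumptions made for illustration; a real classifier also supports wildcarding and operates at line rate.

```
# Minimal sketch of MultiField (MF) classification: match each arriving packet's
# 5-tuple against the installed reservations. The table layout, field names, and
# values are assumptions for illustration (protocol 17 = UDP).

reservations = {
    # (src IP, dst IP, protocol ID, src port, dst port) -> resources granted to the flow
    ("10.0.0.5", "10.0.1.9", 17, 5004, 5004): {"rate_bps": 64_000, "queue": "guaranteed"},
}

def classify(pkt):
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["proto"], pkt["src_port"], pkt["dst_port"])
    # Packets of a reserved flow get that flow's resources; everything else is best effort.
    return reservations.get(key, {"rate_bps": None, "queue": "best_effort"})

pkt = {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "proto": 17,
       "src_port": 5004, "dst_port": 5004}
print(classify(pkt))   # -> the guaranteed-service entry
```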

intserv addresses the following categories of applications:

  • Elastic applications  No constraints for delivery are used as long as packets reach their destination. There is no specific demand on the delay bounds or bandwidth requirements. Examples are web browsing and e-mail.

  • Real-Time Tolerant (RTT) applications  These applications demand weak bounds on the maximum delay over the network. Occasional packet loss is acceptable. An example is Internet radio; such applications use buffering to hide packet losses from the application.

  • Real-Time Intolerant (RTI) applications  This class of applications demands tight bounds on latency and jitter. An example is a VoIP application; here excessive delay and jitter are hardly acceptable.

To service these classes, intserv, utilizing the various mechanisms at the routers, supports the following classes of service:

  • Guaranteed service  This service is meant for RTI applications. This service 'guarantees'

    • Bandwidth for the application traffic

    • Deterministic upper bound on delay

  • Controlled load service  This is intended to service the RTT traffic. The average delay is guaranteed, but the end-to-end delay experienced by an arbitrary packet cannot be bounded deterministically. An example is H.323 traffic.

RSVP can support an intserv view of QoS; it can also be used as a signaling protocol for MPLS for distributing labels (although a distinct label distribution protocol is also available to MPLS). In the mid-1990s, RSVP was developed to address network congestion by enabling routers to decide in advance whether they could meet the requirements of an application flow and then reserve the desired resources if they were available. RSVP was originally designed to install the forwarding state associated with resource reservations for individual traffic flows between hosts.[30] The physical path of the flow across a service provider's network was determined by conventional destination-based routing (that is, by an Interior Gateway Protocol [IGP] such as the Routing Information Protocol [RIP] or Open Shortest Path First [OSPF]). By the late 1990s, RSVP became a proposed standard and has since been implemented in a variety of IP networking equipment. However, RSVP has not been widely used in service provider/carrier networks because of operator concerns about its scalability and the overhead required to support potentially millions of host-to-host flows.
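
The "decide in advance whether they could meet the requirements" step amounts to an admission test against the resources of each outgoing link. The following is a minimal sketch assuming a purely rate-based test with an illustrative 90 percent reservable ceiling; real implementations also account for delay bounds and buffer space.

```
# Minimal sketch of rate-based admission control on one outgoing link: a new
# reservation is admitted only if the already-reserved rates plus the request
# stay within a configured share of the link. The 90 percent ceiling and the
# rates below are illustrative assumptions.

class Link:
    def __init__(self, capacity_bps, reservable_fraction=0.9):
        self.ceiling_bps = capacity_bps * reservable_fraction
        self.reserved_bps = 0

    def admit(self, requested_bps):
        if self.reserved_bps + requested_bps <= self.ceiling_bps:
            self.reserved_bps += requested_bps
            return True        # reservation installed
        return False           # rejected; the flow falls back to best effort

link = Link(capacity_bps=1_500_000)   # a T1-like link, for illustration
print(link.admit(64_000))             # True: a voice flow fits
print(link.admit(2_000_000))          # False: exceeds the reservable ceiling
```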

An informational IETF document[31] discusses issues related to the scalability posed by the signaling, classification, and scheduling mechanisms. An important consequence of this problem is that intserv-level QoS can be provided only within peripheral areas of a large network, preventing its extension inside core areas and the implementation of end-to-end QoS. IETF RSVP-related working groups have undertaken some work to overcome these problems. The RSVP Working Group has recently published RFC 2961, which describes a set of techniques to reduce the overhead of RSVP signaling; however, this RFC does not deal with the classification problem, which is still to be addressed. The Baker, Iturralde, Le Faucheur, and Davie[32] draft discusses the possibility of aggregating RSVP sessions into a larger one. The aggregated RSVP session uses a diffserv code point (DSCP) for its traffic.[29]

Class-based QoS  The IETF Differentiated Services (diffserv) Working Group has developed a class-based QoS. Packets are marked at the network 'edge.' Routers use markings to decide how to handle packets. There are four services:

  • Best efforts  Normal Internet traffic

  • Seven precedence levels  Prioritized classes of traffic

  • Expedited forwarding (EF)  Leased-line-like service

  • Assured forwarding (AF)  Four queues with three drop classes

This approach requires edge policing, but this technology is not yet defined.

In a diffserv domain (RFC-2475), all the IP packets crossing a link and requiring the same diffserv behavior are said to constitute a behavior aggregate (BA). At the ingress node of the diffserv domain, the packets are classified and marked with a diffserv code point (DSCP) that corresponds to their BA. At each transit node, the DSCP is used to select the per hop behavior (PHB) that determines the scheduling treatment and, in some cases, the drop probability for each packet.
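
A minimal sketch of this ingress marking and per hop lookup follows. The EF and AF1x code points are the standard values (RFC 3246 and RFC 2597); the queue names and drop-precedence labels are illustrative assumptions.

```
# Minimal sketch of diffserv handling: the ingress node marks each packet with the
# DSCP of its behavior aggregate; transit nodes map the DSCP to a PHB (a scheduling
# class plus, for AF, a drop precedence). EF and AF1x values are the standard code
# points (RFC 3246, RFC 2597); queue names are illustrative assumptions.

DSCP = {"EF": 46, "AF11": 10, "AF12": 12, "AF13": 14, "BE": 0}

PHB_TABLE = {
    46: ("priority_queue", None),    # EF: low-delay, low-loss treatment
    10: ("af1_queue", "low"),        # AF11
    12: ("af1_queue", "medium"),     # AF12
    14: ("af1_queue", "high"),       # AF13
    0:  ("best_effort_queue", None),
}

def mark_at_ingress(pkt, behavior_aggregate):
    pkt["dscp"] = DSCP[behavior_aggregate]
    return pkt

def phb_at_transit(pkt):
    # Unknown code points fall back to best-effort treatment.
    return PHB_TABLE.get(pkt["dscp"], ("best_effort_queue", None))

pkt = mark_at_ingress({"payload": b"voice"}, "EF")
print(phb_at_transit(pkt))   # -> ('priority_queue', None)
```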

At face value, diffserv appears to be able to scale more easily than intserv; it is also simpler. Packet purists (will probably) argue that diffserv is the best approach because there is very little if any state information kept along the route, while folks more in the carriers' camp (will probably) argue that intserv is a better approach because resource reservations and allocations can be better managed in the network in terms of being able to engineer networks and maintain SLAs. It is within reason to assume that if the design is properly supported by statistically valid and up-to-date demand information,[33] and resources are quickly added when needed, either approach would probably provide reasonable results.

One is not able to generalize as to which of these techniques is better for delay-sensitive traffic such as VoIP, because the decision will have to be based on the type of network architecture one chooses to implement and the size of the network both in terms of network elements (NEs) and lines supported. One cannot argue that a metric wrench is better than a regular wrench. If one is working on a European-made engine, then the metric wrench is obviously best; if one is working on a U.S.-built engine, then regular wrenches are the answer.

For example, in a small network where the end-to-end hop diameter is around three to seven hops, a reservation scheme (specifically intserv) would seem fine (the U.S. voice network kind of fits this range). A network with a large diameter where paths may be 8 to 15 hops may find a reservation scheme too burdensome and a node-by-node distributed approach (specifically, diffserv) may be better (the Internet kind of fits this range). The same kind of argument also applies when looking at the total number of nodes (separate and distinct from the network diameter). If the network in question is the national core network with 10 to 20 core nodes, the reservation/intserv may be fine; if the network in question covers all the tiers of a voice network with around 400 to 500 interacting nodes, the diffserv approach may be better. These are just general observations: the decision regarding the best method must be made based on careful network-specific analysis and product availability.

MPLS-based QoS  Prima facie, the use of MultiProtocol Label Switching (MPLS) affords a packet network the possibility of an improved level of QoS control compared with pure IP. MPLS developers have proposed both a diffserv-style and an intserv-style approach to QoS in MPLS. QoS controls are critical for multimedia applications in intranets, dedicated (WAN) IP networks, virtual private networks (VPNs), and a converged Internet. Services such as VoIPoMPLS, VoMPLS, MPLS VPNs, Layer 2 VPN (L2VPN), Differentiated Services Traffic Engineering (DS-TE), and draft-martini typically require service differentiation in particular and QoS support in general. It is important to realize, however, that MPLS per se is not a QoS solution: it still needs a distinct mechanism to support QoS. The issue of QoS in an MPLS network was treated at length in Minoli, Delivering Voice over MPLS Networks.[34]

In the diffserv-style case, the EXPerimental (EXP) bits of the header are used to trigger scheduling and/or drop behavior at each label-switching router (LSR). This solution, based on Francois Le Faucheur's 'MPLS Support of Differentiated Services,'[35] enables the MPLS network administrator to select how diffserv BAs are mapped onto label-switched paths (LSPs) so that he or she can best match the diffserv, traffic engineering, and protection objectives within his or her particular network. The proposed solution enables the network administrator to decide whether different sets of BAs are to be mapped onto the same LSP or onto separate LSPs. The MPLS solution relies on the combined use of two types of LSPs (a sketch of the shim-header encoding follows the list):

  • LSPs that can transport multiple ordered aggregates, so that the EXP field of the MPLS shim header conveys to the LSR the PHB to be applied to the packet (covering both information about the packet's scheduling treatment and its drop precedence).

  • LSPs that only transport a single ordered aggregate, so that the packet's scheduling treatment is inferred by the LSR exclusively from the packet's label value while the packet's drop precedence is conveyed in the EXP field of the MPLS shim header or in the encapsulating link-layer-specific selective drop mechanism (ATM, Frame Relay, or 802.1).
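
The shim-header encoding referred to above can be sketched as follows. The 32-bit layout (20-bit label, 3-bit EXP field, 1-bit bottom-of-stack flag, 8-bit TTL) is the one defined in RFC 3032; the label and EXP values used are illustrative assumptions.

```
# Minimal sketch of the 32-bit MPLS shim header: 20-bit label, 3-bit EXP field,
# 1-bit bottom-of-stack flag, 8-bit TTL (RFC 3032). The label and EXP values
# below are illustrative assumptions.
import struct

def encode_shim(label, exp, bottom_of_stack, ttl):
    word = (label << 12) | (exp << 9) | (int(bottom_of_stack) << 8) | ttl
    return struct.pack("!I", word)       # four bytes, network byte order

def exp_bits(shim):
    (word,) = struct.unpack("!I", shim)
    return (word >> 9) & 0x7             # on an E-LSP, these 3 bits select the PHB

shim = encode_shim(label=1001, exp=5, bottom_of_stack=True, ttl=64)
print(exp_bits(shim))                    # -> 5
```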

Some developers[1] have proposed a solution that efficiently combines the application-oriented intserv QoS with the power of MPLS label switching. The proposal is contained in Tommasi, Molendini, and Tricco's 'Integrated Services Across MPLS Domains Using CR-LDP Signaling.'[36] The cited document defines intserv-like QoS services in MPLS domains, targeting the following problems:

  • Providing a user-driven MPLS QoS path setup. An application uses the standard intserv reservation application programming interface (API) to allocate network resources. intserv reservations (signaled using RSVP) are then mapped at the Ingress LSR of the MPLS domain into proper constraint-based routed LSPs (CR-LSPs).

  • Reducing the constraint-based routing Label Distribution Protocol (CR-LDP) signaling overhead by providing caching and aggregation of CR-LSPs. Both manual configuration of the bandwidth/signaling trade-off and automatic load discovery mechanisms are allowed.

The key element of this solution is the MPLS Ingress LSR, which acts like an MPLS/intserv QoS gateway. The CR-LDP protocol enables the definition of an LSP with QoS constraints,[37] that is, it performs QoS classification using a single-valued label (not a MultiField one). The main limitation of this approach is that end hosts cannot use it because they cannot support CR-LDP signaling. On the other hand, intserv has been designed to enable applications to signal QoS requirements on their own (reservation APIs are available, and many operating systems enable applications to use them).

The basic idea of the 'Integrated Services Across MPLS Domains Using CR-LDP Signaling' Internet Draft is to combine the application-oriented intserv QoS with the power of MPLS label switching, that is, to define intserv-like QoS services in MPLS domains. Using these mechanisms, end-to-end QoS is reached without service disruptions between MPLS domains and intserv areas. Here the MPLS Ingress LSR acts like an MPLS/intserv QoS gateway. At the same time, the number and the effects of the changes to the current CR-LDP specifications are minimal. Most of the integration work is contained in the Ingress LSR at the sender side of the MPLS domain's border.

Traffic Management/Queue Management  As noted earlier, two approaches have been used over time to allocate resources. The first is the out-of-band reservation model (intserv/RSVP and ATM), requiring applications to signal their traffic requirements to the serving switch. This in turn sets up a path from the source to the destination with reserved resources such as bandwidth and buffer space that either guarantees the desired QoS service or assures with reasonable certainty that the desired service will be provided. The second approach is in-band precedence/priority marking. Here packets are marked or tagged according to priority, using, for example, the diffserv DSCP, IP Precedence/TOS, or IEEE 802.1Q/1p. A router takes aggregated traffic, segregates the traffic flows into classes, and provides preferential treatment of classes. Routers read these markings and treat the packets accordingly. Both of these approaches require advanced traffic and queue management, especially the in-band priority approach.

Typically, delays and QoS degradation are accumulated at points in the network where queues exist. Queues arise when the 'server' capacity (an outgoing link or a CPU undertaking a task such as a sort or table lookup) is less than the aggregated demand for service 'brought along' by the incoming 'jobs' (packets). Because of the way that internetworking technology has been developed in the past 15 years, queues are typically found at routing points rather than at switching points. Furthermore, the distribution of the delay (and, hence, jitter) increases as the number of queues that have to be traversed increases, as shown in Table 2-10.

Table 2-10  Increasing variance as the number of queues increases

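The effect summarized in Table 2-10 can also be illustrated with a small simulation: if each hop contributes an independent random queuing delay, the spread of the end-to-end delay grows with the number of queues traversed. The exponential per-hop delay model and the 5 ms mean used below are assumptions chosen purely for illustration.

```
# Minimal sketch: each hop adds an independent random queuing delay, so the spread
# (variance) of the end-to-end delay grows with the number of queues traversed.
# The exponential per-hop model and the 5 ms mean are assumptions for illustration.
import random
import statistics

def end_to_end_delays(hops, samples=10_000, mean_per_hop_ms=5.0):
    return [sum(random.expovariate(1.0 / mean_per_hop_ms) for _ in range(hops))
            for _ in range(samples)]

for hops in (1, 4, 8, 15):
    d = end_to_end_delays(hops)
    print(f"{hops:2d} hops: mean {statistics.mean(d):6.1f} ms, "
          f"std dev {statistics.stdev(d):5.1f} ms")
```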

To manage resources and support QoS, routers require sophisticated queue management. QoS mechanisms for controlling resources and achieving more predictable delays include the following:

  • Classification

  • Conditioning, specifically policing/shaping traffic (such as Token Bucket)

  • Queuing management (such as Random Early Detection [RED]; see the sketch after this list)

  • Queue/packet scheduling (such as Weighted Fair Queuing [WFQ])

  • Bandwidth reservation via signaling and path establishment (such as RSVP, H.225, MPLS CR-LDP)
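
A minimal sketch of the RED idea mentioned above: an exponentially weighted average of the queue length drives an early-drop probability so that TCP senders slow down before the queue overflows. The thresholds, weight, and maximum drop probability below are illustrative assumptions, and the full algorithm has additional details (for example, the drop-count adjustment) that are omitted here.

```
# Minimal sketch of (simplified) Random Early Detection: an exponentially weighted
# average of the queue length drives an early-drop probability. Thresholds, weight,
# and maximum drop probability are illustrative assumptions.
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2, limit=30):
        self.q = []
        self.avg = 0.0
        self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
        self.weight, self.limit = weight, limit

    def enqueue(self, pkt):
        # Track a moving average of the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.q)
        if len(self.q) >= self.limit or self.avg > self.max_th:
            return False                        # forced or unconditional early drop
        if self.avg > self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                    # probabilistic early drop
        self.q.append(pkt)
        return True

q = RedQueue()
admitted = sum(1 for i in range(50) if q.enqueue(i))
print(admitted, "of 50 packets admitted")       # some are dropped early as the queue fills
```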

Routers can implement the following mechanisms to deal with QoS:[25,26]

  • Admission control  Accepting or rejecting access to a shared resource. This is a key component for intserv and ATM networks. It ensures that resources are not oversubscribed but is, as a result, more expensive and less scalable.

  • Congestion management  Prioritizing and queuing traffic access to a shared resource during congestion periods (as done in diffserv). A simplified weighted scheduler is sketched after this list.

  • Congestion avoidance  Rather than waiting for congestion to occur, measures are taken to prevent it. Algorithms such as Weighted Random Early Detection (WRED) exploit the Transmission Control Protocol's (TCP) congestion-avoidance behavior: by dropping packets early, they induce senders to reduce the traffic injected into the network and thereby prevent congestion.

  • Traffic shaping  Reducing the burstiness of ingress network traffic by smoothing the traffic and then forwarding it to the egress link.
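
Congestion management ultimately comes down to a scheduling decision among per-class queues. The sketch below is a simplified weighted round-robin scheduler, offered as a stand-in for WFQ-style disciplines; the class names and weights are illustrative assumptions.

```
# Minimal sketch of congestion management via weighted round-robin among per-class
# queues (a simplification of WFQ-style disciplines). Class names and weights are
# illustrative assumptions.
from collections import deque

queues = {"voice": deque(), "business": deque(), "best_effort": deque()}
weights = {"voice": 4, "business": 2, "best_effort": 1}   # packets served per round

def serve_one_round():
    sent = []
    for cls, q in queues.items():
        for _ in range(weights[cls]):           # higher-weight classes get more slots
            if q:
                sent.append((cls, q.popleft()))
    return sent

for i in range(6):
    queues["voice"].append(f"v{i}")
    queues["best_effort"].append(f"b{i}")
print(serve_one_round())   # voice gets four packets out for every one best-effort packet
```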

Basic elements of a router include some or all of the following:[25,26]

  • Packet classifier  This functional component is responsible for identifying flows and matching them with a filter. The filter is composed of parameters such as the source and destination IP addresses, ports, protocol, and TOS field. The filter is also associated with information that describes the treatment of this packet. Aggregate ingress traffic flows are compared against these filters. Once a packet header is matched with a filter, the QoS profile is used by the meter, marker, and policing/shaping functions.

  • Metering  The metering function compares the actual traffic flow against the QoS profile definition.

  • Marking  Marking is related to metering: when the metering function compares the actual measured traffic against the agreed QoS profile, packets are marked (for example, as in profile or out of profile) so that they can be handled appropriately.

  • Policing/shaping  The policing functional component uses the metering information to determine whether the ingress traffic should be buffered or dropped. Shaping means that packets are buffered and dispensed at a controlled rate in order to smooth the output. A common algorithm here is the token bucket, which can shape egress traffic as well as police ingress traffic (see the sketch after this list).

  • Queue manager/scheduler  This is a capability that handles the packets that are in the router's (set of) queue(s), based on the priority management and traffic-handling machinery just described.
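
The metering/policing/shaping roles described above are commonly built around a token bucket. The sketch below is a minimal meter that declares each packet in or out of profile; the rate and burst values are illustrative assumptions, and a shaper would additionally delay out-of-profile packets rather than merely flagging them.

```
# Minimal sketch of a token bucket meter: tokens accumulate at the contracted rate
# up to a burst limit; a packet that finds enough tokens is in profile, otherwise it
# is out of profile (to be dropped, remarked, or delayed by a shaper). The rate and
# burst values are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_len):
        now = time.monotonic()
        # Replenish tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True          # in profile: forward as-is
        return False             # out of profile: police (drop/remark) or shape (delay)

bucket = TokenBucket(rate_bytes_per_s=12_500, burst_bytes=3_000)   # roughly 100 kbps
print(bucket.conforms(1_500), bucket.conforms(1_500), bucket.conforms(1_500))
# The first two packets fit within the burst; the third must wait for tokens to refill.
```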

[17]Paul Arindam, 'QoS in Data Networks: Protocols and Standards,' www.cis.ohio-state.edu/~jain/cis788-99/qos_protocols/index.html.

[18]For example, see www.cis.ohio-state.edu/~jain/refs/ipq_book.htm.

[19]Daniel Minoli and Andrew Schmidt, Internet Architectures (New York: Wiley, 1999).

[20]Andrew Schmidt and Daniel Minoli, Multiprotocol over ATM Building State of the Art ATM Intranets Utilizing RSVP, NHRP, LANE, Flow Switching, and WWW Technology (New York: Prentice Hall, 1998). Dan Minoli and Andrew Schmidt, Network Layer Switched Services (New York: Wiley, 1998) (includes LANE, MPOA, IP switching, tag switching).

[21]Daniel Minoli, Broadband Network Analysis and Design (Norwood, MA: Artech House, 1993).

[22]J. Zeitlin, 'Voice QoS in Access Networks - Tools, Monitoring, and Troubleshooting', Next-Generation Networks Conference (NGN) Proceedings, Boston, MA, November 2001.

[23]However, if IP is actually deployed at the core of the network in support of VoIP, as discussed in Chapter 11, the QoS can also initially be targeted for the core.

[24]Robert Pulley and Peter Christensen, 'A Comparison of MPLS Traffic Engineering Initiatives,' NetPlane Systems, Inc., www.netplane.com.

[25,26]Deepak Kakadia, 'Tech Concepts: Enterprise QoS Policy-Based Systems and Network Management,' Sun Microsystems, www.sun.com/software/bandwidth/wp-policy.

[27]Grade of service relates to an overall level of service delivery (similar to an SLA-oriented view), while QoS refers to the achievement of specific network parameters within defined ranges (for example, 0.100 < delay < 0.200 seconds).

[28]R. Braden, D. Clark, and S. Shenker, 'Integrated Services in the Internet Architecture: An Overview,' IETF RFC 1633, June 1994.

[29]R. Braden, (ed.), L. Zhang, S. Berson, S. Herzog, and S. Jamin, 'Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification,' IETF RFC 2205, September 1997.

[30]F. Tommasi, S. Molendini, and A. Tricco, University of Lecce, 'Integrated Services Across MPLS Domains Using CR-LDP Signaling,' Internet Draft, http://search.ietf.org/ . . . /draft-tommasi-mpls-intserv-01.txt, May 2001.

[31]Chuck Semeria, 'RSVP Signaling Extensions for MPLS Traffic Engineering,' White Paper, Juniper Networks, Inc., www.juniper.net, 2000.

[32]A. Mankin, (ed.), F. Baker, B. Braden, S. Bradner, M. O'Dell, A. Romanow, A. Weinrib, and L. Zhang, 'Resource ReSerVation Protocol (RSVP) - Version 1 Applicability Statement Some Guidelines on Deployment,' IETF RFC 2208, September 1997.

[33]F. Baker, C. Iturralde, F. Le Faucheur, and B. Davie, 'Aggregation of RSVP for IPv4 and IPv6 Reservations,' work in progress, draft-ietf-issll-rsvp-aggr-04, April 2001.

[34]The $700 billion debt created by the telecom industry in 2000 and the abundance of carrier failures in 2001 strongly argue for mathematically sound, statistically significant primary market research; it also argues for mathematically sound forecasting of demand and analytical decision-making regarding engineering and deployment.

[35]Daniel Minoli, Delivering Voice over MPLS Networks (New York: McGraw-Hill, 2002).

[1]R. Braden, (ed.), L. Zhang, S. Berson, S. Herzog, and S. Jamin, 'Resource ReSerVation Protocol (RSVP) - Version 1 Functional Specification,' IETF RFC 2205, September 1997.

[36]http://search.ietf.org/internet-drafts/draft-ietf-mpls-diff-ext-09.txt.

[37]CR stands for constraint-based routing.
