Service Providers

There is a wide range of service providers in the Internet space. One way they differ is in their coverage areas: some providers focus on serving a local area, others are regionally based, and others offer national or global coverage. Service providers also vary in the access options they provide. All ISPs offer dialup access over plain old telephone service (POTS), and some offer ISDN, xDSL, Frame Relay, ATM, cable modem service, satellite, and wireless as well. Providers also differ in the services they support. Almost all providers support e-mail (but not necessarily at the higher-tier backbone level). Some also offer FTP hosting, Web hosting, name services, VPNs, VoIP, application hosting, e-commerce, and streaming media. Providers can support a wide variety of applications, so there is differentiation on this front as well. Two other important issues are customer service and the number of hops a provider must take to reach the main point of interconnection into the Internet.

It is pretty easy to become an ISP: pick up a router, lease a 56Kbps/64Kbps line, and you're in business. This is why there are some 10,000 such providers, of varying sizes and qualities, worldwide. There is a service provider pecking order. Research backbones have the latest technology. Top-tier providers focus on business-class services; lower-tier providers focus on rock-bottom pricing. Consequently, there are large variations in terms of available capacity, the performance you can expect, the topology of the network, the levels of redundancy, the numbers of connections with other operators, and the level of customer service and the extent of its availability (that is, whether it's 24/7 or whether it's a Monday-through-Friday, 9-to-5 type of operation). Ultimately, of course, ISPs vary greatly in terms of price.

Figure 9.8 shows an idealized model of the service provider hierarchy. At the top of the heap are the research backbones. Internet2, for example, plays the role for the academic community that the original Internet once did. Some 85% of traffic within the academic domain stays within the academic domain, so there's good reason to have a separate backbone for the universities and educational institutions involved in research and learning. Internet2 will, over time, contribute to the next commercialized platform. It acts as a testbed for many of the latest and greatest technologies, so the universities stress-test Internet2 to determine how applications perform and which technologies best suit which applications or management purposes. Other very sophisticated technology platforms exist, such as the Abilene Project and the Interplanetary Internet (IPNSIG), the first internet being constructed in space for purposes of deep-space communications. The objective of IPNSIG is to define the architecture and protocols necessary to permit the Internet resident on earth to interoperate with other remotely located internets resident on other planets or on spacecraft in transit.

Figure 9.8. Service provider hierarchy


In the commercial realm, the highest tier is the NSP. NSPs are very large backbones, global carriers that own their own infrastructures. The top providers at this level are AT&T, Worldcom, UUnet, Sprint, Verizon, Cable & Wireless, and Qwest. The NSPs can be broken down into three major subsets:

         National ISPs: These ISPs have national coverage. They include the incumbent telecom carriers and the new competitive entrants.

         Regional ISPs: These ISPs are active regionally throughout a nation. They own their equipment and lease their lines from the incumbent telco or a competitive operator.

         Retail ISPs: These ISPs have no investment in network infrastructure whatsoever. They outsource all the infrastructure, perhaps to an NSP or a high-tier ISP, and trade on their brand name, building from a loyal, known customer base that provides an opportunity to offer a branded ISP service.

These various levels of NSPs interconnect with one another in several ways. First, they can connect at the National Science Foundation-supported NAPs. These NAPs are used to provide connection into the Internet2 project, largely in the United States; this occurs between the largest of the NSPs as well as the research backbones. Second, there are commercial interconnection and exchange points, which people also refer to as NAPs, although they go by other names as well, including metropolitan area exchanges (MAEs). These NAPs are typically run by consortiums of ISPs, telcos, entrepreneurs, and others seeking to be in the business of the Internet; these consortiums build public exchange points for traffic between the various Internet backbones. Third, service providers can make bilateral arrangements (called private peering) to pass traffic over each other's backbones. NAPs are discussed in more detail in the following section, but for now, suffice it to say that local service providers, the lower-tier ISPs, typically connect to NAPs through the top-tier ISPs, so they are a greater number of hops away from the point of interconnection, which can have a big impact on the performance of time-sensitive applications.

Evaluating Service Providers

The following sections describe some of the important characteristics you should expect in the different levels of service providers.

Evaluating NSPs

NSPs provide at least a national backbone, but it's generally international. Performance depends on the total network capacity, the amount of bandwidth, and the total number of customers contending for that bandwidth. If you divide the total capacity by the number of direct access customers, you get an idea of how much bandwidth is available on average per customer; however, that doesn't take into account the additional traffic that consumer or dialup traffic may add. If you were to perform such an exercise based on your ISP, you'd probably be shocked at how little bandwidth is actually available to each individual user.
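The back-of-envelope division described above can be sketched as follows. The capacity and customer figures are hypothetical, chosen only to show how little bandwidth may remain per user on average:

```python
# Back-of-envelope estimate of average bandwidth per direct-access customer.
# All figures are hypothetical, for illustration only; the estimate ignores
# the additional load that dialup or consumer traffic adds.

def avg_bandwidth_per_customer(total_capacity_mbps: float, customers: int) -> float:
    """Divide total backbone capacity by the number of direct-access customers."""
    return total_capacity_mbps / customers

# An OC-48 backbone (~2488 Mbps) shared by 50,000 direct-access customers:
per_customer_kbps = avg_bandwidth_per_customer(2488.32, 50_000) * 1000
print(f"{per_customer_kbps:.1f} Kbps per customer on average")  # ~49.8 Kbps
```

Even a large backbone, divided across its customer base this way, yields less bandwidth per user than a single dialup line.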

The NSPs should operate at the higher speeds, ranging today from OC-3 to OC-48 and looking to move on to OC-192 and beyond. They should have engineered the network such that during peak periods it has at least 30% to 40% spare bandwidth. They should have nodes in all the major cities. NSPs that own their facilities (that is, facilities-based ISPs) have the added advantage of being able to provide backhauled circuits at no extra charge, whereas those that lease facilities from interexchange carriers or telcos incur additional charges to the overall operation. There should be redundancy applied to the circuits, routers, and switches. Most NSPs have at least two redundant paths from each node; some may have anywhere from three to five. An NSP should practice diversity in the local loops and long-haul circuits (that is, it should get them from different carriers) so that in the event of a disaster, there is a fallback plan. Facilities-based carriers often offer redundancy, but rarely do they provide diversity. NSPs should have generators at each node. Remember that it doesn't matter how much redundancy you build into your equipment or your circuits if you lose power. You need redundant power, and this is a major differentiator for the very high-tier operators.

NSPs should have implemented BGP in order to filter, or block, any faulty messages from the backbone that can replicate themselves and cause major brownouts or blackouts on the Internet. NSPs should connect to one another through multiple NAPs and interexchange points and also have multiple private peering agreements, again to cover all their bases. They should provide redundant paths to the NAPs as well as to their peers, and they should be able to articulate a very clear reason for their architectural decisions (Why are they using IP routing and connectionless networks? Why does their core consist of ATM? Are they using ATM for its QoS benefits?). They might want to speak to issues of speed, overhead, or QoS, but you want to work with an NSP that actually has a clear-cut architectural reason for its decision.
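As a rough illustration of the filtering idea described above, the following sketch rejects route advertisements for private ("bogon") address space before it can propagate across a backbone. Real NSPs express such filters in their routers' policy language; the prefix list here is a minimal, illustrative subset, not a complete bogon list:

```python
import ipaddress

# Simplified sketch of the kind of route filtering a BGP-speaking NSP applies.
# Advertisements for private or otherwise invalid prefixes are dropped rather
# than propagated. Illustrative only; real filters live in router policy config.

BOGON_PREFIXES = [
    ipaddress.ip_network("10.0.0.0/8"),      # RFC 1918 private space
    ipaddress.ip_network("172.16.0.0/12"),   # RFC 1918 private space
    ipaddress.ip_network("192.168.0.0/16"),  # RFC 1918 private space
    ipaddress.ip_network("127.0.0.0/8"),     # loopback, never routable
]

def accept_route(prefix: str) -> bool:
    """Reject any advertised prefix that falls inside a bogon range."""
    net = ipaddress.ip_network(prefix)
    return not any(net.subnet_of(bogon) for bogon in BOGON_PREFIXES)

print(accept_route("10.1.0.0/16"))     # False: private space, filtered out
print(accept_route("203.0.113.0/24"))  # True: a legitimate-looking public prefix
```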

When comparing backbones, look at the backbone speeds. Look at the underlying transport technology and what that means in terms of QoS. Look at the number of nodes, the number of redundant paths per node, the availability of power backup at each node, the availability of BGP filtering, the number of NAPs or ISPs that they interconnect through, the number of private peering arrangements, and a disclosure of who those peers are. Look at whether performance guarantees are offered, whether you have the potential for any online monitoring, and the access price.

Evaluating ISPs

The top-tier ISPs have the greatest coverage in a region of a particular country. They are associated with multiple high-speed links, generally in the T-1/T-3 and E-1/E-3 range, up to perhaps the OC-3 level. They have redundant routers and switches at each node, and they have multiple high-speed connections into their NAPs (and more and more NAPs are becoming discriminating, as discussed later in this chapter, in the section "NAPs"). A NAP may require, for example, that an ISP have two connections into the Internet for redundancy, and the larger NAPs may also require that, for purposes of transit links, the ISP have arrangements with alternative international providers. The main focus of top-tier ISPs is to provide high levels of service and to address the business community.

The lower-tier ISPs are generally associated with providing local access to a mountain village that has a ski community in the winter, to a remote lake that draws summer traffic, or to any other neighborhood. There is generally only one rather low-speed connection, either 56Kbps or 64Kbps, or perhaps up to T-1/E-1, and it leads into a top-tier ISP; these lower-tier ISPs generally do not have direct connections into the NAPs or high-tier ISPs. Lower-tier ISPs focus on offering the lowest prices possible, so they offer the least redundancy of any providers: no power backups and fairly minimal capacity.

As you can see in the idealized model of the Internet shown in Figure 9.9, information flows from a user through a local ISP, through a regional ISP, to a national ISP, through transit ISPs, and back down the hierarchy. Some companies operate on a vertically integrated basis, so they may be represented as local ISPs to their consumers but also offer national backbones to their business customers. Therefore, the relationships can be a little less defined than the figure indicates, but it helps provide a summary view of how this works.

Figure 9.9. Information flow in an idealized model of the Internet


An issue in ISP selection is the level of coverage (How many countries are served? How many cities are served within each country? What's the total number of backbone connections into the countries served?). Another consideration is the number of exchange points you have to go through and what type and number of peering relationships are in place. You also need to consider the total amount of bandwidth and therefore what the level of oversubscription is if everyone tries to use it at the same time. And you need to consider the transit delays being experienced; ideally we want to see networks evolve to latencies of less than 80 milliseconds, but today 800 to 1,000 milliseconds is much more realistic. Similarly, you want less than 5% packet loss, but today at peak hours you'll see up to 30% or 40% packet loss. Data packets are most often retransmitted to correct for these losses. However, with real-time traffic, such as voice or video, packet retransmission would add too much delay, with the result that conversations can be rendered unintelligible. Again, you need to think about redundancy: the number of lines into the network, the level of network diversity, and the amount of power backup involved.
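The latency and loss targets quoted above lend themselves to a simple screening check. The thresholds come from the text; the sample measurements are invented for illustration:

```python
# Screen measured ISP performance against the targets discussed above:
# ideally under 80 ms latency and under 5% packet loss for real-time traffic.
# Sample measurements are hypothetical.

TARGET_LATENCY_MS = 80.0
TARGET_LOSS_PCT = 5.0

def meets_realtime_targets(latency_ms: float, loss_pct: float) -> bool:
    """True if both latency and packet loss are within the real-time targets."""
    return latency_ms < TARGET_LATENCY_MS and loss_pct < TARGET_LOSS_PCT

samples = [
    ("off-peak", 65.0, 1.2),    # close to the ideal
    ("peak", 900.0, 32.0),      # typical worst-case figures cited in the text
]
for period, latency, loss in samples:
    verdict = "OK" if meets_realtime_targets(latency, loss) else "unusable for voice/video"
    print(f"{period}: {latency} ms, {loss}% loss -> {verdict}")
```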

Evaluating Emerging Service Providers

In the past couple of years, there have been some exciting developments in niche applications for service providers. This section talks about content delivery networks, application service providers (ASPs), management service providers (MSPs), online service providers (OSPs), and virtual ISPs (VISPs). Each of these serves a different purpose. Some of them have an immediate future, some may last a bit longer, and the prospects of others are quite unknown.

Content Delivery Networks

Content delivery networks can be structured to support exactly the kind of application you need. For example, Web-delivered training has the same requirements as streaming media applications. Content delivery services are becoming essential to the development of e-commerce, making them a requirement for business-class Web sites. These services focus on delivering streaming audio, video, and media, as well as the supporting e-commerce applications. Currently, the major clients for content delivery networks are ISPs and content providers because they stand to reduce their need for bandwidth and earn better profit margins on services to customers.

Content delivery and hosting service providers aim to deliver Web-site content to customers faster by storing the content in servers located at the edges of Internet networks, rather than in the network's central location. This reduces the number of hops, thereby reducing the latencies and improving performance. This works because of a combination of technologies, including caching and load balancing, which means spreading the content across multiple servers so that at peak periods, you don't overload the server but can still provide access on a balanced basis. Content delivery providers will also use enhanced Internet routing and probably proprietary algorithms that facilitate the administration of QoS. For instance, Enron has a proprietary intelligent call agent that knows how to prioritize streaming content over bursty data applications that may be supported. As another example, IBasis supports VoIP, and when the Internet gets congested, the routing algorithm switches the traffic over to the circuit-switched environment, thereby guaranteeing the high quality that the customers are expecting for the telephony. Also, these content delivery services will be able to deliver content to a user from the nearest servers, or at least from a server located at the edge of the network.
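The two core ideas described above, serving each request from the nearest edge server and caching content there so repeat requests skip the origin, can be sketched in a toy model. Server names and round-trip times are invented for illustration:

```python
# Toy sketch of edge-server selection plus caching in a content delivery
# network: pick the server "nearest" the user (lowest measured round-trip
# time) and cache content at that edge. Names and RTTs are hypothetical.

edge_servers = {"edge-nyc": 12, "edge-lon": 85, "edge-tok": 160}  # RTT in ms

cache: dict = {}  # (server, url) -> cached content

def fetch_from_origin(url: str) -> str:
    """Stand-in for retrieving content from the central origin server."""
    return f"<content of {url}>"

def serve(url: str, rtts: dict) -> tuple:
    """Pick the lowest-RTT edge server; return (server, content, cache_hit)."""
    server = min(rtts, key=rtts.get)
    key = (server, url)
    hit = key in cache
    if not hit:
        cache[key] = fetch_from_origin(url)  # first request populates the edge
    return server, cache[key], hit

print(serve("/video/intro.mp4", edge_servers))  # first request: miss at edge-nyc
print(serve("/video/intro.mp4", edge_servers))  # repeat: served from edge cache
```

Fewer hops to the content and a warm cache at the edge are what reduce latency in the real systems.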

All the changes in the content delivery networks, of course, are driven by the need for faster and faster online transactions. Humans get addicted to speed. We have a physiological speed center in our brain, and each time we complete tasks at a certain pace, we resynchronize that speed center. Consequently, as we've used the Web over the years, improvements in network infrastructure have allowed us to experience faster downloads. Customer loyalty can be increasingly affected by time frames as small as a second.

There are a number of content-delivery providers in the market. Akamai Technologies has more than 4,000 servers and is growing. Digital Island plans to install 8,000 servers by 2002, increasing its current capacity by a factor of 30. Enron recently signed a deal with Blockbuster, and this is an indicator of the importance of the content as part of the overall picture.

ASPs

ASPs have been receiving a great deal of press. There is not really one major business model for ASPs; there's quite a niching opportunity. An ASP is a supplier that makes applications available on a subscription basis. ASPs provide hosting, monitoring, and management of a variety of well-known software applications on a world-class data center infrastructure. A great deal of money is being spent in the ASP arena, and ASPs are increasingly becoming application infrastructure providers (AIPs), a trend discussed later in this section.

ASPs are most useful when the majority of an application's users reside outside the enterprise network. The more external touch points there are, the more sense it makes to use an ASP. An ASP is viable for e-commerce, customer relations management, human resources, and even e-mail and listserv applications. An ASP is not really viable for productivity tools, such as word processing or spreadsheets. And an ASP is not really good for financial applications, which benefit from being maintained in-house, because of the more stringent requirements for security and confidentiality of data.

You might want to use an ASP if there's a need for additional bandwidth; if you lack technical resources in-house for reliable 24/7 application support; when you feel that a third party could do the job better; when you need a large and readily available applications inventory; when scalability demands dynamic increases; or when you're seeking performance reliability.

With ASPs, you pay setup fees; on a low-end Web server these fees start at around US$2,000 and on a low-end database server they start at around US$10,000. Setup fees on a high-end Oracle cluster, for instance, could run from US$5,000 for the Web servers to US$40,000 for database servers. These customers are generally ISPs or major international corporations. Ongoing ASP fees also range anywhere from US$2,000 to US$40,000 per month for the software licensing, the applications, and the equipment maintenance, along with perhaps the broadband connectivity. So an ASP paying US$1 million to buy the license for an application may charge its customers between US$200,000 and US$500,000 per year for three years (the typical life span of most of the applications contracts). ASPs that concentrate on small- to medium-size businesses typically offer application hosting in the range of US$200 to US$500 per month, sometimes with an additional US$5 to US$30 per month charge per user for enterprise applications. Thus, ASPs need to invest a lot of money into automating customer setup and maintenance, which then helps them reduce cost to the customer and ensure an attractive margin. Enterprises may not have the resources to realize these types of cost benefits.
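The licensing economics quoted above can be checked with simple arithmetic. The license cost, fee range, and three-year contract life come from the text; the margin calculation itself is an illustration:

```python
# Rough sketch of the ASP licensing economics described above: an ASP buys a
# US$1 million application license and charges each customer US$200,000 to
# US$500,000 per year over a typical three-year contract.

LICENSE_COST = 1_000_000   # one-time application license (US$)
CONTRACT_YEARS = 3         # typical life span of an applications contract

def gross_margin(annual_fee: int, customers: int) -> int:
    """Revenue over the contract life, less the one-time license cost."""
    return annual_fee * CONTRACT_YEARS * customers - LICENSE_COST

# Even one customer at the high end of the range recovers the license:
print(gross_margin(annual_fee=500_000, customers=1))  # 500000
# Two customers at the low end also leave a positive margin:
print(gross_margin(annual_fee=200_000, customers=2))  # 200000
```

This is why automating customer setup and maintenance matters so much: each additional customer on the same license is nearly pure margin.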

Although small and medium-sized businesses may today appear to be the best targets for using ASPs, predictions show that very large enterprise clients will be the most fruitful by 2004. It is likely that large enterprises will use ASPs for e-commerce applications and for internal applications, such as e-mail, data management, office automation, and basic business applications. The emerging "skills drought" may drive larger companies to ASPs as well.

The ASP model comprises no single entity or service. Instead, it's a complex supply chain that includes the following:

         Independent software vendors (ISVs): ISVs develop the applications that the ASPs then put up for sale or for rent.

         AIPs: AIPs manage the data center servers, databases, switches, and other gear on which the applications run. It's reminiscent of the gold miner analogy: it wasn't the gold miners who got the gold; it was the people who sold the picks and shovels. The AIPs are the segment of the ASP market that will see the greatest run of success at first.

         MSPs: MSPs take over the actual management and monitoring of the network. (They are discussed in more detail later in this section.)

         NSPs: NSPs provide network access.

         Value-added resellers (VARs): VARs deal with distribution and sales.

         Systems integrators (SIs): Like VARs, SIs deal with distribution and sales.

         E-business infrastructure providers (eBIPs): This group saves small businesses time and money with Web-based solutions for human resources, accounting, marketing, group collaboration, and other services. Some of these eBIPs offer their services for free, making money from ads and partnerships with the VARs. Others charge affordable fees that range from US$30 to US$200 per month. Online business center sites that offer targeted small-business content and community are good partnership candidates for such eBIPs.

Today, more than 650 companies call themselves ASPs, and a great number of these are telcos. A good resource for information on ASPs is the Application Service Provider Industry Consortium (ASPIC) at www.aspindustry.org.

MSPs

MSPs specialize in providing software for network management and for monitoring applications, network performance, and network security. MSPs also take over the actual management and monitoring of the network. They operate similarly to ASPs, in that they use a network to deliver services that are billed to their clients. They differ from ASPs in that they very specifically address network management rather than business process applications. MSPs deliver system management services to IT departments and other customers that manage their own technology assets.

The appeal of the MSP model is that it eliminates the need for companies and individuals to buy, maintain, or upgrade information technology infrastructure management systems, which typically require a major capital expense, are highly technical in terms of the expertise they mandate, and require a considerable investment of time. This model appeals in particular to enterprises that manage e-commerce applications, such as ASPs and ISPs, whose expertise lies in the applications or network infrastructure they provide to customers not necessarily in their management. It also appeals to small- and medium-size companies that prefer not to invest in large IT staffs. As with the ASP model, using specialists to deploy and maintain complex technology enables companies to focus on their own core competencies and to immediately tap into high-quality expertise as needed. There are variations in the model: some MSPs provide tools and services, whereas others provide services only; and some target corporations, whereas others are designed for consumers.

The MSP Association (www.mspassociation.com) was formed in June 2000, and it aims to be at the forefront of creating new standards for network management and for defining the best practices for the network management market. Its first working group has the job of defining what the whole MSP market looks like. Market analysts expect the demand for MSPs to grow exponentially as an attractive alternative to internally run IT management applications.

OSPs

The OSP is a provider that organizes the content for you and provides intuitive user navigation. An example of an OSP is America Online. You can liken an OSP to a department store in a shopping mall in the United States. You enter the mall through strategically placed doors. At that point, you can browse all the different content providers, or shops, in that shopping mall. You may get a little assistance through a directory board, but you must take the initiative to go to each of these locations to determine whether the content you seek is really there. A shopping mall usually has a major tenant, a large department store, that provides its own doors from the parking lot. When you enter through the department store's doors, the department store tries to hold you captive with its content. It has some of everything else you could find in all the individual boutiques throughout the mall, but at the department store, there's a more exclusive or tailored selection. It's been organized into meaningful displays, and assistants are available to guide you through the store to your selected content. So, an ISP is the shopping mall. It gives you access to all the Web sites out there, but you have to use search engines to narrow your search. An OSP, such as America Online, is like the department store, which gives you a more cozy and exclusive space in which to browse the same content.

VISPs

The VISP offers outsourced Internet service, running as a branded ISP. It is a turnkey ISP product aimed at affinity groups and mass marketers that want to add Internet access to their other products and services. Services of a VISP could include billing, Web-site maintenance, e-mail and news services, customized browsers, and a help desk. VISPs today include AT&T, Cable & Wireless, GTE, IConnect, and NaviNet. Early customers include Surfree.com and LibertyBay.com.

The Service Provider Value Chain

Figure 9.10 is a simplistic diagram of the current value chain, from a technology standpoint. The lower-tier ISP receives fees (essentially subscription fees and perhaps hosting fees) from retail end users; that is the lower-tier ISP's cash flow in. The ISP's cash flow out is the fees it pays for connection into the higher-tier ISP, as well as the money associated with the links it leases from the telecom operator. The higher-tier ISP receives access fees from the lower-tier ISP, and it also receives subscription fees from higher-end business customers. The higher-tier ISP's outflow is the money it pays to connect into the backbone provider, as well as the money it may be paying for its communication links from a network operator. The backbone provider, then, receives money from the higher-tier ISPs, as well as from customers that want to host their content (that is, from their Web farms or media servers).
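The cash flows just described can be tallied for any one link in the chain. The figures below are invented purely to illustrate the in-minus-out structure for a lower-tier ISP:

```python
# Hypothetical monthly cash flow for a lower-tier ISP in the value chain of
# Figure 9.10: subscription revenue in; upstream-access and leased-line fees
# out. All figures are invented for illustration.

def monthly_margin(subscribers: int, monthly_fee: float,
                   upstream_access_fee: float, line_lease_fee: float) -> float:
    """Revenue from retail subscribers, less payments up the value chain."""
    cash_in = subscribers * monthly_fee
    cash_out = upstream_access_fee + line_lease_fee
    return cash_in - cash_out

# 2,000 dialup subscribers at US$20/month, paying US$8,000 to a higher-tier
# ISP for access and US$3,000 to the telco for leased lines:
print(monthly_margin(2_000, 20.0, 8_000.0, 3_000.0))  # 29000.0
```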

Figure 9.10. The current value chain


Regulatory Decisions

Regulatory decisions could cause some major shifts in the next several years. These decisions will affect who ends up being considered the major backbone provider, by virtue of how regulators treat this environment. Decisions made on unbundling the local loops will have a major impact on the availability of broadband access. The Internet is not going to grow without growth in broadband access, so these decisions play a very important role. Decisions will be made about things such as privacy and whose laws apply in e-commerce: when I buy something in the United Kingdom from the United States, does U.K. law or U.S. law apply? Further decisions will affect content sensitivity, censorship, and the acceptance of digital signatures as valid. Interconnection issues need to be decided as well (for example, in Figure 9.10, how money is exchanged between the backbone providers).

So, remember that you need to consider more than just the progress and the technology. There are some big human and political issues to think about as well.

The backbones until now have largely practiced Sender Keep All (SKA). Those in an SKA arrangement assume that there is an even exchange of traffic between the two peers, and hence they don't pay each other any money. This is likely to change. The vast majority of content still currently resides in the United States, and that's made some of the U.S. ISPs rather cocky. They tell their overseas colleagues, "If you want a transit link into my backbone, you have to pick up the entire cost of that transit link because basically your people are coming to get our content. There's nothing we really want on your side." One of two things will happen. Either this will become regulated or market forces will take over and we will allow these other countries to say, "We'll develop our own content, and now when your people want to get at it, you can cover the cost of access into our backbone."

 



Telecommunications Essentials: The Complete Global Source for Communications Fundamentals, Data Networking and the Internet, and Next-Generation Networks
ISBN: 0201760320
Year: 2005
Pages: 84
