Internet Basics

Figure 9.1 is an astounding graph that speaks to the pace of Internet development. It shows the number of years it took various technologies to reach 50 million users worldwide. As you can see, whereas it took 74 years for the telephone to reach 50 million users, it took the World Wide Web only 4.

Figure 9.1. Internet pace: Years to reach 50 million users worldwide

graphics/09fig01.gif

What forces are propelling our interest in the Internet? One main force is that usage is increasing dramatically; today some 250 million people worldwide have Internet access, and that number is growing by leaps and bounds. The Internet is very useful and easy to use, and for a growing number of people in the developed world, it is now the first place to look for information. As one colleague recently told me, in the past week, besides the numerous times he had used the Internet to get information for his work, he'd used it to look up hotels for a weekend break, to find out what concerts were on in Dublin, to check the specification of a car, to transfer funds between bank accounts, to find the address of an old friend, and to obtain sheet music. Electronic commerce (e-commerce) is also growing, in both the business-to-consumer and business-to-business sectors. Another contributor is the major shift toward the use of advanced applications, including pervasive computing, which introduces a wide range of intelligent appliances that are ready to communicate through the Internet, as well as applications that include more captivating visual and sensory streams. Finally, the availability of broadband (high-speed) access technologies further drives our interest in, and our ability to interact with, Web sites that involve the use of these advanced applications and offer e-commerce capabilities.

A Brief History of the Internet

To help understand the factors that contributed to the creation of the Internet, let's look very briefly at its history. In 1969 the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense initiated a project to develop a distributed network. There were several reasons for doing this. First, the project was launched during the Cold War era, when there was an interest in building a network that had no single point of failure and that could sustain an attack yet continue to function. Second, four supercomputer centers were located at four universities throughout the United States, and researchers wanted to connect them so that they could engage in more intensive processing feats. So, the Internet started as a wide area, packet-switching network called the ARPANET.

Toward the mid-1970s, ARPA was renamed the Defense Advanced Research Projects Agency (DARPA), and while it was working on the distributed, or packet-switched, network, it was also working on local area networks (LANs), paging networks, and satellite networks. DARPA recognized that there was a need for some form of internetworking protocol that would allow open communications between disparate networks. So, Internet Protocol (IP) was created to support an open-architecture network that could link multiple disparate networks via gateways (what we today refer to as routers).

Jonathan Postel and the Internet

Jonathan Postel played a pivotal role in creating and administering the Internet. He was one of a small group of computer scientists who created the ARPANET, the precursor to the Internet. For more than 30 years he served as editor of the Request for Comments (RFC) series of technical notes that began with the earliest days of the ARPANET and continued into the Internet. Although intended to be informal, RFCs often laid the foundation for technical standards governing the Internet's operation. Nearly 2,500 RFCs have been produced.

Also for 30 years, Postel handled the administrative end of Internet addresses, under the auspices of the Internet Assigned Numbers Authority (IANA), a U.S. government-financed entity. As part of the effort to hand over administration of the Internet to an international private corporation, Postel delivered a proposal to the government for transforming IANA into a nonprofit corporation with broad representation from the commercial and academic sectors. That organization is today known as the Internet Corporation for Assigned Names and Numbers (ICANN).

In 1980, Transmission Control Protocol/Internet Protocol (TCP/IP) began to be implemented on an experimental basis, and by 1983, it was required in order for a subnetwork to participate in the larger virtual Internet.

The original Internet model was not based on the telephone network model. It involved distributed control rather than centralized control, and it relied on cooperation among its users, which initially were largely academicians and researchers. With the original Internet, there was no regulation, no monopoly, and no universal service mandate (although these issues are now being considered seriously).

Today, no one agency is in charge of the Internet, although the Internet Society (ISOC) is a nonprofit, nongovernmental, international organization that focuses on Internet standards, education, and policy issues. ISOC is an organization for Internet professionals that serves as the organizational home of the Internet Engineering Task Force (IETF), which oversees various organizational and coordinating tasks. ISOC is composed of a board of trustees, the Internet Architecture Board, the IETF, the Internet Research Task Force, the Internet Engineering Steering Group, and the Internet Research Steering Group.

The IETF is an international community of network designers, operators, vendors, and researchers, whose job is to evolve the Internet and smooth its operation by creating technical standards through consensus. Other organizations that are critical to the functioning of the Internet include the American Registry for Internet Numbers (ARIN) in the United States, the Asia Pacific Network Information Centre (APNIC) in Asia-Pacific, and the RIPE NCC (Réseaux IP Européens Network Coordination Centre) in Europe. These organizations manage and sell IP addresses and autonomous system numbers. IANA manages and assigns protocol and port numbers, and ICANN (formed in 1998) is responsible for managing top-level domain names and the root name servers. ICANN also delegates control for domain name registry below the top-level domains. (Domain names and the role of IP addresses are discussed later in this chapter.)

Prevailing Conditions and Exchange Points

Since the beginning of the Internet's history, we've been trying to prevent having a single point of failure. We have distributed nodes throughout the network so that if one node goes down or a series of links goes down, there can still be movement between the other devices, based on a wide variety of alternative nodes and links.

But we're doing a turnaround now because these very interconnection points that provide interconnection between ISPs can also act as vulnerable points for the network and even for a nation. If the exchange points are taken down within a given country, the Internet activity within that country may cease or fail altogether, with great economic consequences. Always remember that, in the end, the prevailing conditions dictate whether an architecture is truly good, reliable, and high performance.

What the Internet Is and How It Works

To understand the Internet, it's important to first understand the concept of a computer network (see Figure 9.2). A network is formed by interconnecting a set of computers, typically referred to as hosts, in such a way that they can interoperate with one another. Connecting these hosts involves two major components: hardware (that is, the physical connections) and software. The software can be run on the same or dissimilar host operating systems, and it is based on standards that define its operation. These standards, referred to as protocols, provide the formats for passing packets of data, specify the details of the packet formats, and describe how to handle error conditions. The protocols hide the details of network hardware and permit computers of different hardware types, connected by different physical connections, to communicate, despite their differences. (Protocols are discussed in detail later in this chapter.)

Figure 9.2. Network components

graphics/09fig02.gif

In the strictest sense, the Internet is an internetwork composed of a worldwide collection of networks, routers, gateways, servers, and clients that use a common set of telecommunications protocols (the IP family) to link them together (see Figure 9.3). The term client is often used to refer to a computer on the network that takes advantage of the services offered by a server. It also refers to a user running the client side of a client/server application. The term server describes either a computer or a software-based process that provides services to network users or Web services to Internet users.

Figure 9.3. An internetwork

graphics/09fig03.gif

Networks connect servers and clients, allowing the sharing of information and computing resources. Network equipment includes cable and wire, network adapters, hubs, switches, and various other physical connectors. In order for the network to be connected to the Internet, the network must send and retrieve data by using TCP/IP and related protocols. Networks can also be connected to form their own internets: Site-to-site connections are known as intranets, internal networks that are generally composed of LANs interconnected by a WAN that uses IP. Connections between partnering organizations, using IP, are known as extranets.

The Internet is a complex, highly redundant network of telecommunications circuits connected together with internetworking equipment, including routers, bridges, and switches. In an environment consisting of several network segments with different protocols and architectures, the network needs a device that not only knows the address of each segment but can also determine the best path for sending data and filtering broadcast traffic to the local segment. The Internet moves data by relaying traffic in packets from one computer network to another. If a particular network or computer is down or busy, the network is smart enough to reroute the traffic automatically. This requires computers (that is, routers) that are able to send packets from one network to another. Routers make decisions about how to route packets: they decide which pipe is best and then use that best pipe. Routers work at the network layer, Layer 3, of the OSI model, which allows them to switch and route packets across multiple networks. Routers read complex network addressing information in the packet; they can share status and routing information with one another and use this information to bypass slow or malfunctioning connections.

Routing is the main process that an Internet host uses to deliver packets. The Internet uses a hop-by-hop routing model, which means that each host or router that handles a packet examines the destination address in the packet's IP header, computes the next hop that will bring the packet one step closer to its destination, and delivers the packet to that next hop, where the process is repeated. To make this happen, routing tables must match destination addresses with next hops, and routing protocols must determine the content of these tables. Thus, the Internet and the public switched telephone network (PSTN) operate quite differently from one another. The Internet uses packet switching, where there's no dedicated connection and the data is fragmented into packets. Packets can be delivered via different routes over the Internet and reassembled at the ultimate destination. Historically, "back-office" functions such as billing and network management have not been associated with the Internet. But the Internet emphasizes flexibility: the capability to route packets around congested or failed points.
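The table lookup that happens at each hop can be sketched as a longest-prefix match. The routes and next-hop addresses below are invented for illustration (drawn from private and documentation address ranges), and the sketch uses Python's standard ipaddress module rather than anything a real router would run:

```python
import ipaddress

# A toy routing table: (destination prefix, next hop). All entries invented.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"), "192.0.2.1"),    # default route
    (ipaddress.ip_network("10.0.0.0/8"), "192.0.2.2"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.0.2.3"),
]

def next_hop(destination: str) -> str:
    """Pick the matching route with the longest prefix, as a router would."""
    addr = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    best_net, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

print(next_hop("10.1.2.3"))  # most specific match wins: 10.1.0.0/16
print(next_hop("8.8.8.8"))   # no specific route, so the default applies
```

Each router along the path repeats exactly this computation against its own table, which is why the tables (and the routing protocols that fill them) determine where traffic actually flows.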

Recall from Chapter 5, "The PSTN," that the PSTN uses circuit switching, so a dedicated circuit is set up and taken down for each call. This allows charging based on minutes and circuits used, which, in turn, allows chain-of-supply dealings. The major emphasis of the PSTN is on reliability. So, the Internet and the PSTN have different models and different ways of managing or routing traffic through the network, but they share the same physical foundation in terms of the transport infrastructure, or the types of communication links they use. (Chapter 4, "Establishing Communications Channels," discusses packet switching and circuit switching in detail.)

Internet Protocols

The Internet is a collection of networks that are interconnected logically as a single, large, virtual network. Messages between computers are exchanged by using packet switching. Networks can communicate with one another because they all use an internetworking protocol. Protocols are formal descriptions of messages to be exchanged and of rules to be followed in order for two or more systems to exchange information in a manner that both parties will understand. The following sections examine the Internet's protocols: TCP/IP, User Datagram Protocol (UDP), Internet Control Message Protocol (ICMP), Internet Group Management Protocol (IGMP), Address Resolution Protocol (ARP)/Reverse Address Resolution Protocol (RARP), routing protocols, and network access protocols.

TCP/IP

The IETF has technical responsibility for TCP/IP, which is the most popular and widely used of the internetworking protocols. All information to be transmitted over the Internet is divided into packets that contain a destination address and a sequence number. Packets are relayed through nodes in a computer network, along the best route currently available between the source and destination. Even though the packets may travel along different routes and may arrive out of sequence, the receiving computer is able to reassemble the original message. Packet size is kept relatively small at 1,500 bytes or less so that in the event of an error, retransmission is efficient. To manage the traffic routing and packet assembly/disassembly, the networks rely on intelligence from the computers and software that control delivery.
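The divide-number-reassemble idea can be sketched in a few lines. This is a simplification of what TCP actually does (real TCP numbers bytes rather than packets and starts from a pseudorandom initial sequence number), but it shows why sequence numbers let the receiver restore order even when packets arrive scrambled:

```python
import random

MTU = 1500  # maximum payload per packet, per the text

def packetize(data: bytes) -> list:
    """Split data into (sequence number, chunk) pairs of at most MTU bytes."""
    return [(seq, data[i:i + MTU])
            for seq, i in enumerate(range(0, len(data), MTU))]

def reassemble(packets: list) -> bytes:
    """Restore the original byte stream, regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = bytes(5000)       # any digital content: text, graphics, sound, video
packets = packetize(message)
random.shuffle(packets)     # simulate packets taking different routes
assert reassemble(packets) == message
```

A 5,000-byte message becomes four packets (three full 1,500-byte chunks plus a 500-byte remainder), and the sort on sequence numbers undoes whatever reordering the network introduced.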

TCP/IP, referred to as the TCP/IP suite in Internet standards documents, gets its name from its two most important protocols, TCP and IP, which are used for interoperability among many different types of computers. A major advantage of TCP/IP is that it is a nonproprietary network protocol suite that can connect the hardware and operating systems of many different computers.

TCP Network applications present data to TCP. TCP divides the data into packets and gives each packet a sequence number that is not unique, but which is nonrepeating for a very long time. These packets could represent text, graphics, sound, or video: anything digital that the network can transmit. The sequence numbers help to ensure that the packets can be reassembled correctly at the receiving end. Thus, each packet consists of content, or data, as well as the protocol header, the information that the protocol needs to do its work. TCP uses another piece of information to ensure that the data reaches the right application when it arrives at a system: the port number, which is within the range 1 to 65,535. Port numbers identify running applications on servers, applications that are waiting for incoming connections from clients; they distinguish one listening application from another. Ports 1 to 1,023 are reserved for "well-known" server applications (for example, Web servers run on port 80, FTP runs on port 21), although servers can use higher port numbers as well, and many recent protocols have been assigned well-known port numbers above 1,023. Ports with higher numbers, called "ephemeral" ports, are dynamically assigned to client applications as needed. A client obtains a random ephemeral port when it opens a connection to a well-known server port.
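The well-known/ephemeral split can be observed directly with Python's standard socket module. This sketch stands in a listener on the loopback interface (binding to port 0 so the operating system picks a free port, since real well-known ports below 1024 require privileges) and then shows the ephemeral port the OS hands the connecting client:

```python
import socket

# A stand-in "server": real services listen on well-known ports (80, 21, ...),
# but port 0 asks the OS for any free port, so no privileges are needed here.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))

# The OS assigned the client a random ephemeral port automatically.
client_port = client.getsockname()[1]
print("server port:", server_port)
print("client ephemeral port:", client_port)
assert client_port > 1023  # ephemeral ports sit above the reserved range

client.close()
server.close()
```

The (source address, source port, destination address, destination port) combination is what lets one host hold many simultaneous connections to the same server port.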

Data to be transmitted by TCP/IP has a port from which it is coming and a port to which it is going, plus an IP source and a destination address. Firewalls can use these addresses to control the flow of information. (Firewalls are discussed in Chapter 11, "Next-Generation Network Services.")

TCP is the protocol for sequenced and reliable data transfer. It breaks the data into pieces and numbers each piece so that the receipt can be verified and the data can be put back in the proper order. TCP provides Layer 4 (transport layer) functionality, and it is responsible for virtual circuit setup, acknowledgments, flow control, and retransmission of lost or damaged data. TCP provides end-to-end, connection-oriented, reliable, virtual circuit service. It uses virtual ports to make connections; ports are used to indicate where information must be delivered in order to reach the appropriate program, and this is how firewalls and application gateways can filter and direct the packets.

IP IP handles packet forwarding and transporting of datagrams across a network. With packet forwarding, computers can send a packet on to the next appropriate network component, based on the address in the packet's header. IP defines the basic unit of data transfer, the datagram, also referred to as the packet, and it also defines the exact format of all data as it travels across the Internet. IP works like an envelope in the postal service, directing information to its proper destination. With this arrangement, every computer on the Internet has a unique address. (Addressing is discussed later in this chapter.)

IP provides software routines to route and to store and forward data among hosts on the network. IP functions at Layer 3 (the network layer), and it provides several services, including host addressing, error notification, fragmentation and reassembly, routing, and packet timeout. TCP presents the data to IP in order to provide basic host-to-host communication. IP then attaches to the packet, in a protocol header, the address from which the data comes and the address of the system to which it is going.

Under the standards, IP allows a packet size of up to 65,535 bytes, but we don't transmit packets that large because they would cause session timeouts and significant congestion problems. Therefore, IP packets are segmented into 1,500-byte-maximum chunks.

IP always does its best to make the delivery to the requested destination host, but if it fails for any reason, it just drops the packet. As such, upper-level protocols should not depend on IP to deliver the packet every time. Because IP provides connectionless, unreliable service, packets can get lost or arrive out of sequence, and messages may be larger than a single 1,500-byte packet; TCP provides the recovery for these problems.

UDP

Like TCP, UDP is a Layer 4 protocol that operates over IP. UDP provides end-to-end, connectionless, unreliable datagram service. It is well suited for query-response applications, for multicasting, and for use with Voice over IP (VoIP). (VoIP is discussed in Chapter 11.) Because UDP does not request retransmissions, it minimizes what would otherwise be unmanageable delay; the result is that sometimes the quality is not very good. For instance, if you encounter losses or errors associated with a voice packet, the delays that would be associated with retransmitting that packet would render the conversation unintelligible. In VoIP, when you lose packets, you do not request retransmissions. Instead, you hope that the user can recover from the losses by other means. Unlike TCP, UDP does not provide for error correction and sequenced packet delivery; it is up to the application itself to incorporate error correction if required.
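A minimal query-response exchange over UDP can be shown with Python's socket module on the loopback interface. Note what is missing compared with the TCP sketch earlier: no connection setup, no acknowledgment, and nothing that would retransmit a lost datagram:

```python
import socket

# Two UDP endpoints on loopback; binding to port 0 lets the OS pick ports.
responder = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
responder.bind(("127.0.0.1", 0))

querier = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
querier.bind(("127.0.0.1", 0))

# Query-response: one datagram out, one datagram back. If either datagram
# were lost, UDP itself would do nothing about it -- that is the trade-off
# that makes it suitable for delay-sensitive traffic such as VoIP.
querier.sendto(b"what time is it?", responder.getsockname())
query, querier_addr = responder.recvfrom(1024)
responder.sendto(b"time unknown", querier_addr)
reply, _ = querier.recvfrom(1024)
print(reply)

querier.close()
responder.close()
```

For voice, an application would keep sending fresh datagrams and conceal the occasional loss, since a retransmitted audio packet would arrive too late to be useful.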

ICMP

ICMP provides error handling and control functions. It is tightly integrated with IP. ICMP messages, delivered in IP packets, are used for out-of-band messages related to network operation or misoperation. Because ICMP uses IP, ICMP packet delivery is unreliable. ICMP functions include announcing network errors, announcing network congestion, assisting in troubleshooting, and announcing timeouts.
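ICMP messages have a simple binary layout. The sketch below builds the bytes of an ICMP echo request (the message behind the ping utility) together with the standard Internet checksum; actually transmitting it would require a raw socket and administrator privileges, so this only constructs the packet:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """Build an ICMP echo request: type 8, code 0, checksum, id, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field = 0
    checksum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

packet = icmp_echo_request(ident=1, seq=1)
print(packet.hex())
# A receiver verifies a packet by checksumming it whole; the result must be 0.
print(inet_checksum(packet))
```

The checksum property shown at the end is what "unreliable delivery" protection amounts to here: corruption can be detected and the packet discarded, but nothing retransmits it.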

IGMP

Another Layer 3 protocol is Internet Group Management Protocol (IGMP), whose primary purpose is to allow Internet hosts to participate in multicasting. The IGMP standard describes the basics of multicasting IP traffic, including the format of multicast IP addresses, multicast Ethernet encapsulation, and the concept of a host group (that is, a set of hosts interested in traffic for a particular multicast address). IGMP enables a router to determine which host groups have members on a given network segment, but IGMP does not address the exchange of multicast packets between routers.
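As a sketch of the wire format involved, the following builds an IGMPv2 membership report, the message a host emits to tell local routers it has joined a host group. In practice the operating system generates these automatically when an application joins a multicast group; the multicast address here is invented for illustration:

```python
import socket
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def igmpv2_membership_report(group: str) -> bytes:
    """Build an IGMPv2 membership report: type 0x16, unused field, checksum,
    and the Class D group address the host is joining."""
    addr = socket.inet_aton(group)
    body = struct.pack("!BBH4s", 0x16, 0, 0, addr)  # checksum field = 0
    return struct.pack("!BBH4s", 0x16, 0, inet_checksum(body), addr)

report = igmpv2_membership_report("224.1.1.1")  # invented group address
print(report.hex())
```

Routers listen for these eight-byte reports on each segment, which is how they learn which host groups have members there without tracking individual hosts.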

ARP and RARP

At Layer 3 you also find ARP/RARP. ARP determines the physical address of a node, given that node's IP address. ARP is the mapping link between IP addresses and the underlying physical (MAC) address. RARP enables a host to discover its own IP address by broadcasting its physical address. When the broadcast occurs, another node on the LAN answers back with the IP address of the requesting node.

Routing Protocols

Routing protocols are protocols that allow routers to communicate with each other. They include interior protocols such as Routing Information Protocol (RIP) and Open Shortest Path First (OSPF), and exterior protocols such as Exterior Gateway Protocol (EGP) and Border Gateway Protocol (BGP).

There are several processes involved in router operation. First, the router creates a routing table to gather information from other routers about the optimum paths. As discussed in Chapter 8, "Local Area Networking," routing tables can be static or dynamic; dynamic routing tables are best because they adapt to changing network conditions. Next, when data is sent from a network host to a router en route to its destination, the router examines the destination address in the packet's header to determine the most efficient path toward that destination. To identify the most efficient path, the router uses algorithms to evaluate a number of factors (called metrics), including distance and cost. Routing protocols consider all the various metrics involved when computing the best path.

Distance-Vector Versus Link-State Protocols Two main types of routing protocols are involved in making routing decisions:

         Distance-vector routing protocols These routing protocols require that each router simply inform its neighbors of its routing table. For each network path, the receiving router picks the neighbor advertising the lowest cost, and then the router adds this into its routing table for readvertisement. Common distance-vector routing protocols are RIP, Internetwork Packet Exchange (IPX) RIP, AppleTalk Routing Table Management Protocol (RTMP), and Cisco's Interior Gateway Routing Protocol (IGRP).

         Link-state routing protocols Link-state routing protocols require that each router maintain at least a partial map of the network. When a network link changes state (up to down, or vice versa), a notification is flooded throughout the network. All the routers note the change and recompute the routes accordingly. This method is more reliable, easier to debug, and less bandwidth-intensive than distance-vector routing, but it is also more complex and more compute- and memory-intensive. OSPF, Intermediate System to Intermediate System (IS-IS), and Network Link Services Protocol (NLSP) are link-state routing protocols.

Interior and Exterior Routing Interior routing occurs within an autonomous system, which is a collection of routers under a single administrative authority that uses a common interior gateway protocol for routing packets. Most of the common routing protocols, such as RIP and OSPF, are interior routing protocols. The autonomous system number is a unique number that identifies an autonomous system in the Internet. Autonomous system numbers are managed and assigned by ARIN (North America), APNIC (Asia-Pacific), and RIPE NCC (Europe). Exterior routing protocols, such as BGP, use autonomous system numbers to uniquely define an autonomous system. The basic routable element is the IP network or subnetwork, or the Classless Interdomain Routing (CIDR) prefix for newer protocols. (CIDR is discussed a little later in the chapter.)

OSPF, which is sanctioned by the IETF and runs directly over IP, is intended to become the Internet's preferred interior routing protocol. OSPF is a link-state protocol with a complex set of options and features. Link-state algorithms control the routing process and enable routers to respond quickly to changes in the network. Link-state routing makes use of the Dijkstra algorithm, which computes shortest paths from a map of the network, with link metrics that can reflect the number of hops, the line speed, the amount of traffic, and the cost. Link-state algorithms are more efficient and create less network traffic than do distance-vector algorithms, which can be crucial in environments that involve multiple WAN links.
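The shortest-path computation at the heart of link-state routing can be sketched with the Dijkstra algorithm. The small router topology and its link costs below are invented for illustration; a real OSPF router would run this against the map it assembled from flooded link-state advertisements:

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Lowest path cost from source to every reachable router.
    graph maps each router to a dict of {neighbor: link cost}."""
    costs = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > costs.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return costs

# An invented four-router topology; costs might reflect line speed or load.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(topology, "A"))  # A reaches D at cost 4, via B and C
```

Because every router holds the same map, each one can run this computation independently and still arrive at consistent forwarding decisions.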

Exterior routing occurs between autonomous systems and is of concern to service providers and other large or complex networks. Whereas there may be many different interior routing schemes, a single exterior routing scheme manages the global Internet, and it is based on the exterior routing protocol BGP version 4 (BGP-4). The basic routable element is the autonomous system. Routers determine the path for a data packet by calculating the number of hops between internetwork segments. Routers build routing tables and use these tables along with routing algorithms.

Network Access Protocols

Network access protocols operate at Layer 2. They provide the underlying basis for the transport of the IP datagrams. The original network access protocol was Ethernet, but IP can be transported transparently over any underlying network, including Token Ring, FDDI, Fibre Channel, Wireless, X.25, ISDN, Frame Relay, or ATM.

Both Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP) were designed specifically for IP over point-to-point connections. PPP provides data-link layer functionality for IP over dialup/dedicated links. In other words, whenever you dial in to your ISP, you negotiate a PPP session, and part of what PPP does is to provide a mechanism to identify and authenticate the user that is dialing up.

Internet Addressing

To make the Internet an open communications system, a globally accepted method of identifying computers was needed, and IP acts as the formal addressing mechanism for all Internet messaging.

Each host on the Internet is assigned a unique 32-bit Internet address, called the IP address, which is placed in the IP header and which is used to route packets to their destinations. IP addresses are assigned on a per-interface basis, so a host can have several IP addresses if it has several interfaces (note that a single interface can have multiple addresses, too). Therefore, an IP address refers to an interface, not to the host. A basic concept of IP addressing is that some of the bits of the IP address can be used for generalized routing decisions because these bits indicate which network (and possibly which subnet) the interface is a member of. Addressing is performed on the basis of network/subnet and host; routing is performed based on the network/subnet portion of the address only. When a packet reaches its target network, the host portion of the address is then examined for final delivery.

The current generation of IP is called IP version 4 (IPv4). IP addresses have two parts: The first is the network ID and the second is the host ID. Under IPv4, there are five classes, which differ in how many networks and hosts are supported (see Figure 9.4):

Figure 9.4. IPv4 32-bit addressing

graphics/09fig04.gif

         Class A With Class A, there can be a total of 126 networks, and on each of those networks there can be 16,777,214 hosts. Class A address space is largely exhausted, although there is some address space reserved by IANA.

         Class B Class B addresses provide for 16,384 networks, each of which can have 65,534 hosts. Class B space is also largely exhausted, with a few still available, albeit at a very high cost.

         Class C Class C allows 2,097,152 networks, each of which can have 254 hosts.

         Class D Class D belongs to a special aspect of the Internet called the multicast backbone (MBONE). Singlecast, or unicast, means going from one transmitter to one receiver. Multicast implies moving from one transmitter to multiple receivers. Say, for instance, that you are in San Francisco and you want to do a videoconferencing session that involves three offices located in London. In the unicast mode, you need three separate IP connections to London from the conferencing point in San Francisco. With multicast, however, you need only one IP connection. A multicast router (mrouter) would enfold your IP packets in special multicast packets and forward those packets on to an mrouter in London; in London that mrouter would remove the IP packets, replicate those packets, and then distribute them to the three locations in London. The MBONE system therefore conserves bandwidth over a distance, relieves congestion on transit links, and makes it possible to address a large population in a single multicast.

         Class E Class E is reserved address space for experimental purposes.

The digits in an IP address tell you a number of things about the address. For example, in the IP address 124.29.88.7, the first set of digits, 124, is the network ID, and because it falls in the range of numbers for Class A, we know that this is a Class A address. The remaining three sets, 29.88.7, are the host ID. In the address 130.29.88.7, the first two sets, 130.29, comprise the network ID and indicate that this is a Class B address; the second two sets in this address, 88.7, comprise the host ID. Figure 9.5 shows an example of IP addressing.
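The first-octet rule described above can be written as a short function. This is a sketch of classful addressing only; as discussed shortly, CIDR has made these classes irrelevant to modern routing:

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address by its first octet, per the classful rules."""
    first = int(ip.split(".")[0])
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"  # multicast (the MBONE)
    if 240 <= first <= 255:
        return "E"  # experimental
    return "reserved"  # 0 is unusable; 127 is the loopback network

print(address_class("124.29.88.7"))  # the Class A example from the text
print(address_class("130.29.88.7"))  # the Class B example from the text
```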

Figure 9.5. An example of IP network addressing

graphics/09fig05.gif

Network IDs are managed and assigned by ARIN, APNIC, and RIPE NCC. Host IDs are assigned locally by the network administrator. Given a 32-bit address field, we can achieve approximately 4.3 billion different addresses with IPv4. That seems like a lot, but as we began to experience growth in the Internet, we began to worry about the number of addresses left. In the early 1990s, the IETF began to consider the potential of IP address space exhaustion. The result was the implementation of CIDR, which eliminated the old class-based style of addressing. The CIDR address is a 32-bit IP address, but it is classless. The CIDR addressing scheme is hierarchical. Large national and regional service providers are allocated large blocks of contiguous Internet addresses, which they then allocate to other smaller ISPs or directly to organizations. Networks can be broken down into subnetworks, and networks can be combined into supernetworks, as long as they share a common network prefix. Basically, with CIDR a route is no longer an IP address broken down into network and host bits according to its class; instead, the route becomes a combination of an address and a mask. The mask indicates how many bits in the address represent the network prefix. For example, the address 200.200.14.20/23 means that the first 23 bits of the binary form of this address represent the networks. The bits remaining represent the host. In binary form, the prefix 23 would like this: 255.255.254.0. Table 9.1 lists the most commonly used masks represented by the prefix, and the number of host addresses available with a prefix of the type listed. CIDR defines address assignment and aggregation strategies designed to minimize the size of top-level Internet routing tables. The national or regional ISP needs only to advertise its single supernet address, which represent an aggregation of all the subnets within that supernet. 
Routers in the Internet no longer give any credence to class; forwarding is based entirely on the CIDR prefix. CIDR does require the use of supporting routing protocols, such as RIP version 2, OSPF version 2, Enhanced Interior Gateway Routing Protocol (EIGRP), and BGP-4.
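The prefix-and-mask arithmetic described above can be sketched with Python's standard ipaddress module, using the 200.200.14.20/23 address from the text:

```python
import ipaddress

# A minimal sketch of CIDR prefixes and masks. strict=False lets us pass a
# host address and have the module derive the enclosing network.
net = ipaddress.ip_network("200.200.14.20/23", strict=False)

print(net.netmask)        # 255.255.254.0, the dotted-decimal form of /23
print(net.num_addresses)  # 512 addresses, matching Table 9.1

# A router forwards on the longest matching prefix, not on class:
print(ipaddress.ip_address("200.200.15.7") in net)  # True
```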

Table 9.1. CIDR Masking Scheme

Mask as Dotted-Decimal Value    Mask as Prefix Value    Number of Hosts
255.255.255.224                 /27                     32
255.255.255.192                 /26                     64
255.255.255.128                 /25                     128
255.255.255.0 (Class C)         /24                     256
255.255.254.0                   /23                     512
255.255.252.0                   /22                     1,024
255.255.248.0                   /21                     2,048
255.255.240.0                   /20                     4,096
255.255.224.0                   /19                     8,192
255.255.192.0                   /18                     16,384
255.255.128.0                   /17                     32,768
255.255.0.0 (Class B)           /16                     65,536
255.254.0.0                     /15                     131,072
255.252.0.0                     /14                     262,144
255.248.0.0                     /13                     524,288

Subnetting is a term you may have heard in relation to addressing. It once referred to the subdivision of a class-based network into subnetworks. Today, it generally refers to the subdivision of a CIDR block into smaller CIDR blocks. Subnetting allows single routing entries to refer either to the larger block or to its individual constituents; this permits a single general routing entry to be used through most of the Internet, with more specific routes being required only for routers in the subnetted block.
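As a sketch of this idea, Python's ipaddress module can split a CIDR block into smaller blocks and collapse them back into the covering supernet (the /23 block here is purely illustrative):

```python
import ipaddress

# Subdivide a CIDR block into smaller CIDR blocks, then aggregate them back.
block = ipaddress.ip_network("200.200.14.0/23")

# Split the /23 into two /24 subnets (prefix length grows by 1 bit):
subnets = list(block.subnets(prefixlen_diff=1))
print([str(s) for s in subnets])  # ['200.200.14.0/24', '200.200.15.0/24']

# Collapse them back; one routing entry can then cover both subnets:
supernet = list(ipaddress.collapse_addresses(subnets))
print([str(n) for n in supernet])  # ['200.200.14.0/23']
```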

Researchers are predicting that even with CIDR and subnetting in place, we will run out of IPv4 address space by the year 2010. Therefore, several years ago the IETF began developing an expanded version of IP called IPv6 (originally called IPng, for IP Next Generation). IPv6 uses a 128-bit address, which allows a total of 340 billion billion billion billion unique addresses, equating to roughly 430 billion billion IP addresses for every square inch of the earth's surface, including oceans. That should hold us for a while and be enough for each and every intelligent appliance, man, woman, child, tire, pet, and curb to have its own IP address. Along with offering a greatly expanded address space, IPv6 also allows increased scalability through multicasting and includes increased Quality of Service (QoS) capabilities. (QoS is discussed in detail in Chapter 10, "Next-Generation Networks.")
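The address-count figure quoted above is easy to verify with a couple of lines of Python:

```python
# The IPv6 address space: a 128-bit field yields 2**128 unique addresses,
# about 3.4 x 10**38, i.e., 340 billion billion billion billion.
total = 2 ** 128
print(total)           # 340282366920938463463374607431768211456
print(f"{total:.2e}")  # 3.40e+38
```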

IPv6 Address Allocation and Assignment

ARIN has published a draft policy document on IPv6 address allocation and assignment that is available at www.arin.net.

The IPv6 specification includes a flow label to support real-time traffic and automated connectivity for plug-and-play use. In addition, IPv6 provides improved security mechanisms: it incorporates Encapsulating Security Payload (ESP) for encryption, and it includes an Authentication Header (AH) to make transactions more secure. Although IPv6 offers many benefits, it requires a major reconfiguration of the Internet's installed routers, and hence the community hasn't jumped at the migration from IPv4 to IPv6. But in order to meet the demands of the growing population of not just human users but also smart machines tapping into the Internet, the transition will be necessary. An experimental network called the 6Bone is being used as an environment for IPv6 research; so far, more than 400 networks in more than 40 countries are connected to it.

The Domain Name System

The Domain Name System (DNS) is a distributed database system that operates on the basis of a hierarchy of names. DNS provides translation between easy-for-humans-to-remember host names (such as www.telecomwebcentral.com or www.lidoorg.com) and the physical IP addresses, which are harder for humans to remember. It identifies a domain's mail servers and a domain's name servers. When you need to contact a particular URL, the host name portion of the URL must be resolved to the appropriate IP address. Your Web browser goes to a local name server, maintained by your ISP, your online service provider, or your company. If the IP address is a local one (that is, it's on the same network as yours), the name server can resolve the URL to the IP address right away. In this case, the name server sends the true IP address to your computer, and because your Web browser now has the real address of the place you're trying to locate, it contacts that site, and the site sends the information you've requested.

If the local name server determines that the information you have requested is not on the local network, it must get the information from a name server on the Internet. The local name server contacts the root domain server, which contains a list of the top-level domain name servers managed by ICANN. The root domain server tells the local server which top-level domain name server contains the domain specified in the URL. The top-level domain name server then tells the local server which primary name server and secondary name server have the information about the requested URL. The local name server can then contact the primary name server. If the information can't be found in the primary name server, then the local name server contacts the secondary name server. One of those name servers will have the proper information, and it will then pass that information back to the local name server. The local name server sends the information back to your browser, which then uses the IP address to contact the proper site.
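The first step of this process, asking a resolver to turn a name into an address, can be sketched with Python's standard socket module; the name localhost is used here so the example works without network access, but any reachable host name would do:

```python
import socket

# A minimal sketch of name resolution: the operating system's stub
# resolver (which in turn queries the name servers described above)
# translates a host name into an IPv4 address.
def resolve(host: str) -> str:
    return socket.gethostbyname(host)

print(resolve("localhost"))  # typically 127.0.0.1
```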

Top-Level Domains

For some time, there have been seven generic top-level domains:

.com     commercial
.gov     government
.mil     military
.edu     education
.net     network operation
.org     nonprofit organizations
.int     international treaty organizations

Seven new top-level domains have been operational since 2001:

.aero    air-transport industry
.biz     businesses
.coop    cooperatives
.info    any use
.museum  museums
.name    individuals
.pro     accountants, lawyers, and physicians

There are also country code top-level domains. Each of these is a two-letter country code (for example, .au, .ca), and there are 245 country code top-level domains, including a .us domain for the United States. So if you wanted to protect your domain name in the .com domain, for example, you would actually have to register it in 246 domains: .com itself, plus .com with each of the 245 two-letter country codes appended. And if you really want to get serious about branding, you'd probably want to register another 246 each in the .net, .org, and .biz domains! Of course, very few organizations actually do this.

The Importance of Domain Names

Many new domain names are registered every minute, and it seems that all the simple one- and two-word .com names have already been taken; hence there have been calls for new domains to be added. Originally IANA, which was funded by the U.S. government, administered the DNS. Beginning in 1993, Network Solutions was the sole provider of direct domain name registration services in the open generic top-level domains, while registration authority over the country code top-level domains was delegated to the individual countries and bodies within them.

In September 1998 ICANN was formed to take over. ICANN is now introducing competition into the administration of the DNS through two mechanisms: a policy for the accreditation of registrars, and a shared registry system for the .com, .net, and .org domains. In 2001 ICANN made seven new top-level domains operational, and it must still negotiate with the winning applicants the terms under which they will operate these registries. The future of ICANN is still a bit tenuous; it is political, to say the least.

A story illustrates the value of domain names. Tuvalu, a small Pacific island country with a population of 10,600, was assigned the country code .tv. Naturally, .tv is a very appealing domain. It suggests entertainment, streaming media, and screaming multimedia, and it also has a global context: once you register something as .tv, it cannot be altered by appending another country code, because it already is a country code. Of course, many entrepreneurs developed an interest in Tuvalu, and many companies approached the country, trying to acquire its domain name; Tuvalu auctioned the name. A company called .tv bought the name for roughly US$1 million quarterly, adjustable for inflation, with a US$50 million cap over 10 years. In addition, Tuvalu holds a 20% stake in the company. The auction of the country's domain name produced four times the country's GDP. Needless to say, the island is now richly developing its transportation, educational, and health care facilities.

On the .tv domain, some domain names are quite expensive, with bidding starting at US$250,000 for broadband.tv, for instance. On the other hand, some creative and descriptive domains haven't yet been registered, and you'd be able to acquire those for as little as US$50. A lot of money is tied up in domain names, and the process of creating new domains will make identifying the best branding strategy even more challenging.

 



Telecommunications Essentials: The Complete Global Source for Communications Fundamentals, Data Networking and the Internet, and Next-Generation Networks
ISBN: 0201760320
Year: 2005
Pages: 84
