Introduction

Welcome to the Microsoft Encyclopedia of Networking, a survey of computer networking concepts, technologies, and services. This work is intended to be a comprehensive, accurate, and timely resource for students, system engineers, network administrators, IT implementers, and computing professionals from all walks of life. Before I outline its scope of coverage, however, I’ll ask a simple question that surprisingly has no easy answer: What is networking?

What Is Networking?

In the simplest sense, networking means connecting computers so that they can share files, printers, applications, and other computer-related resources. The advantages of networking computers together are pretty obvious:

  • Users can save their important files and documents on a file server, which is safer than storing them on individual workstations because a file server can be backed up in a single operation.

  • Users can share a network printer, which costs much less than having a locally attached printer for each user’s computer.

  • Users can share groupware applications running on application servers, which enables users to share documents, send messages, and collaborate directly.

  • The job of administering and securing a company’s computer resources is simplified since they are concentrated on a few centralized servers.

This definition of networking focuses on the basic goals of networking computers: increased manageability, security, efficiency, and cost-effectiveness over non-networked systems. We could also focus on the different types of networks:

  • Local area networks (LANs), which can range from a few desktop workstations in a small office/home office (SOHO) to several thousand workstations and dozens of servers deployed throughout dozens of buildings on a university campus or in an industrial park

  • Wide area networks (WANs), which might be a company’s head office linked to a few branch offices or an enterprise spanning several continents with hundreds of offices and subsidiaries

  • The Internet, the world’s largest network and the “network of networks”

We could also focus on the networking architectures in which these types of networks can be implemented:

  • Peer-to-peer networking, which might be implemented in a workgroup consisting of computers running Microsoft Windows 98 or Windows 2000 Professional

  • Server-based networking, which might be based on the domain model of Microsoft Windows NT, the domain trees and forests of Active Directory in Windows 2000, or another architecture such as Novell Directory Services (NDS) for Novell NetWare

  • Terminal-based networking, which might be the traditional host-based mainframe environment; the UNIX X Windows environment; the terminal services of Windows NT Server 4, Terminal Server Edition, or Windows 2000 Advanced Server; or Citrix MetaFrame

Or we could look at the networking technologies used to implement each networking architecture:

  • LAN technologies such as Ethernet, ARCNET, Token Ring, Banyan Vines, Fast Ethernet, Gigabit Ethernet, and Fiber Distributed Data Interface (FDDI)

  • WAN technologies such as Integrated Services Digital Network (ISDN), T1 leased lines, X.25, frame relay, Synchronous Optical Network (SONET), Digital Subscriber Line (DSL), and Asynchronous Transfer Mode (ATM)

  • Wireless communication technologies, including cellular systems such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Personal Communications Services (PCS), and infrared systems based on the standards developed by the Infrared Data Association (IrDA)

We could also consider the hardware devices that are used to implement these technologies:

  • LAN devices such as repeaters, concentrators, bridges, hubs, switches, routers, and Multistation Access Units (MAUs)

  • WAN devices such as modems, ISDN terminal adapters, Channel Service Units (CSUs), Data Service Units (DSUs), packet assembler/disassemblers (PADs), frame relay access devices (FRADs), multiplexers (MUXes), and inverse multiplexers (IMUXes)

  • Equipment for organizing, protecting, and troubleshooting LAN and WAN hardware, such as racks, cabinets, surge protectors, line conditioners, uninterruptible power supplies (UPS’s), KVM switches, and cable testers

  • Cabling technologies such as coaxial cabling, twinax cabling, twisted-pair cabling, fiber-optic cabling, and associated equipment such as connectors, patch panels, wall plates, and splitters

  • Unguided media technologies such as infrared communication, wireless cellular networking, and satellite networking, and their associated hardware

  • Data storage technologies such as RAID, network-attached storage (NAS), and storage area networks (SANs), and the technologies used to connect them, such as Small Computer System Interface (SCSI) and Fibre Channel

  • Technologies for securely interfacing private corporate networks with unsecured public ones, such as firewalls, proxy servers, and packet-filtering routers

  • Technologies for increasing availability and reliability of access to network resources, such as clustering, caching, load balancing, and fault-tolerant technologies

  • Network management technologies such as the Simple Network Management Protocol (SNMP) and Remote Network Monitoring (RMON)

On a more general level, networking also involves the standards and protocols that underlie the technologies and hardware mentioned, including the Open Systems Interconnection (OSI) networking model of the International Organization for Standardization (ISO); the X-series, V-series, and G-series standards of the International Telecommunication Union (ITU); Project 802 of the Institute of Electrical and Electronics Engineers (IEEE); the Requests for Comments (RFCs) of the Internet Engineering Task Force (IETF); and others from the World Wide Web Consortium (W3C), the ATM Forum, and the Gigabit Ethernet Alliance.

Other standards and protocols include the following:

  • LAN protocols such as NetBEUI, IPX/SPX, TCP/IP, and AppleTalk

  • WAN protocols such as Serial Line Internet Protocol (SLIP), Point-to-Point Protocol (PPP), Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Tunneling Protocol (L2TP)

  • Protocols developed within mainframe computing environments, such as Systems Network Architecture (SNA), Advanced Program-to-Program Communications (APPC), Synchronous Data Link Control (SDLC), and High-level Data Link Control (HDLC)

  • Routing protocols such as the Routing Information Protocol (RIP), Interior Gateway Routing Protocol (IGRP), Open Shortest Path First (OSPF) Protocol, and Border Gateway Protocol (BGP)

  • Internet protocols such as the Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Network News Transfer Protocol (NNTP), and the Domain Name System (DNS)

  • Electronic messaging protocols such as X.400, Simple Mail Transfer Protocol (SMTP), and Post Office Protocol version 3 (POP3)

  • Directory protocols such as X.500 and Lightweight Directory Access Protocol (LDAP)

  • Security protocols such as Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), X.509 digital certificates, Kerberos v5, and the various PKCS standards

  • Serial interface standards such as RS-232, RS-422/423, RS-485, V.35, and X.21

We could dig still deeper into the technologies and talk about the fundamental engineering concepts that underlie networking services and technologies, including

  • Impedance, attenuation, shielding, near-end crosstalk (NEXT), and other characteristics of cabling systems

  • Signals and how they can be multiplexed using time-division, frequency-division, statistical, and other multiplexing techniques

  • Bandwidth, throughput, latency, jabber, jitter, backbone, handshaking, hop, dead spots, dark fiber, and late collisions

  • Balanced vs. unbalanced signals, baseband vs. broadband transmission, data communications equipment (DCE) vs. data terminal equipment (DTE), circuit switching vs. packet switching, connection-oriented vs. connectionless communication, unicast vs. multicast and broadcast, point-to-point vs. multipoint links, direct sequencing vs. frequency hopping methods, and switched virtual circuit (SVC) vs. permanent virtual circuit (PVC)

We could also look at who provides networking technologies (especially WAN technologies):

  • Internet service providers (ISPs), application service providers (ASPs), integrated communications providers (ICPs), and so on

  • The central office (CO) of the local telco (through an existing local loop connection), a cable company, or a wireless networking provider

  • Local exchange carriers (LECs) or Regional Bell Operating Companies (RBOCs) through their points of presence (POPs) and Network Access Points (NAPs)

  • Telecommunications service providers that supply dedicated leased lines, circuit-switched connections, or packet-switching services

We could also look at the vendor-specific software technologies that make computer networking possible (and useful):

  • Powerful network operating systems (NOS’s) such as Windows NT and Windows 2000, Novell NetWare, various flavors of UNIX, and free operating systems such as Linux and FreeBSD

  • Specialized operating systems such as Cisco Systems’ Internetwork Operating System (IOS), which runs on Cisco routers

  • Directory systems such as the domain-based Windows NT Directory Services (NTDS) for Windows NT, Active Directory in Windows 2000, and Novell Directory Services (NDS) for Novell NetWare

  • File systems such as NTFS on Windows platforms and distributed file systems such as the Network File System (NFS) developed by Sun Microsystems

  • Programming languages and architectures for distributed computing, such as the C and Java languages, ActiveX and Jini technologies, Hypertext Markup Language (HTML) and Extensible Markup Language (XML), the Distributed Component Object Model (DCOM) and COM+, and Remote Procedure Calls (RPCs) and other forms of interprocess communication (IPC)

  • Tools and utilities for integrating technologies from different vendors in a heterogeneous networking environment, such as Gateway Services for NetWare (GSNW), Services for Macintosh, Services for UNIX on the Windows NT and Windows 2000 platforms, and Microsoft SNA Server for connectivity with mainframe systems

On a more detailed level, we could look at the tools and utilities that you can use to administer various NOS’s and their networking services and protocols, including the following:

  • Microsoft Management Console (MMC) and its administrative snap-ins in Windows 2000 or the utilities in the Administrative Tools program group in Windows NT

  • TCP/IP command-line utilities such as ping, ipconfig, tracert, arp, netstat, nbtstat, finger, and nslookup

  • Platform-specific command-line utilities such as the Windows commands for batch administration (including the at command, which you can use with ntbackup to perform scheduled network backups of Windows NT servers, as shown in the example following this list)

  • Cross-platform scripting languages that can be used for system and network administration, including JavaScript, VBScript, and Perl
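To make a couple of these concrete, here is a brief Windows NT command-prompt session. This is only an illustrative sketch: the host name server1, the scheduled time, and the backup options are hypothetical, and ntbackup switches vary between versions, so consult your documentation before adapting it.

    C:\>ping server1            (test TCP/IP connectivity to a host)
    C:\>ipconfig /all           (display the local TCP/IP configuration)
    C:\>nslookup server1        (query DNS for the host's IP address)

    C:\>rem Schedule a backup of drive C: (including the local registry, /b)
    C:\>rem for 11:00 P.M. every weeknight:
    C:\>at 23:00 /every:M,T,W,Th,F "ntbackup backup c:\ /b"

Because at hands the job to the Schedule service, the backup runs whether or not anyone is logged on at the server.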

We could also look at applications that are network-aware, such as the Microsoft BackOffice network applications suite that includes Microsoft Exchange Server, Microsoft SQL Server, Microsoft SNA Server, and Microsoft Proxy Server. We could look at some of the terminology and technologies associated with these applications, how the software is licensed, and the GUI or command-line tools used to administer them.

As you can see, there’s more to networking than hubs and cables. In fact, the field of computer networking is almost overwhelming in its scope and complexity, and one could spend a lifetime studying only one small aspect of it. But it hasn’t always been this way. Let’s take a look at how we got to this point.

The History of Networking

Because networking is such a broad and complex field, no single event represents its point of origin. We can think of the 1960s as the early period, however, because that’s when the digital computer began to significantly affect the lives of ordinary individuals and the operations of businesses and governments. For example, during that decade the Internal Revenue Service (IRS) began to use mainframe computers to process tax returns. In this section, we’ll survey the development of networking and related communication technologies and standards from the 1960s through the 1990s.

1960s

In the 1960s, computer networking was essentially synonymous with mainframe computing and telephony services, and the distinction between local and wide area networks did not yet exist. Mainframes were typically “networked” to a series of dumb terminals with serial connections running on RS-232 or some other electrical interface. If a terminal in one city needed to connect with a mainframe in another city, a 300-baud long-haul modem would use the existing analog Public Switched Telephone Network (PSTN) to form the connection. The technology was primitive indeed, but it was an exciting time nevertheless.

The quality and reliability of the PSTN increased significantly in 1962 with the introduction of pulse code modulation (PCM), which converted analog voice signals into digital sequences of bits. DS0 (Digital Signal Zero) became the basic 64-Kbps channel, and the entire hierarchy of the digital telephone system was soon built on this foundation. Next, a device called the channel bank was introduced. It took 24 separate DS0 channels and combined them using time-division multiplexing (TDM) into a single 1.544-Mbps channel called DS1 or T1. (In Europe, 30 DS0 channels were combined to make E1.) When the backbone of the Bell system became digital, transmission characteristics improved due to higher quality and less noise. This was eventually extended all the way to local loop subscribers using ISDN. The first commercial touch-tone phone was also introduced in 1962.
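The arithmetic behind these rates is simple and worth seeing once. Each DS0 carries a voice signal sampled 8,000 times per second at 8 bits per sample, and each T1 frame carries one 8-bit sample from each of its 24 channels plus a single framing bit:

    DS0 rate = 8,000 samples/sec x 8 bits/sample = 64 Kbps
    T1 frame = (24 channels x 8 bits) + 1 framing bit = 193 bits
    T1 rate  = 193 bits/frame x 8,000 frames/sec = 1,544,000 bps = 1.544 Mbps

The European E1 follows the same logic with 32 timeslots (30 voice channels plus 2 slots for framing and signaling): 32 x 8 bits x 8,000 frames/sec = 2.048 Mbps.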

The first communication satellite, Telstar, was launched in 1962. This technology did not immediately affect the networking world because of the latency of satellite links compared to undersea cable communications, but it eventually surpassed transoceanic underwater telephone cables in carrying capacity (the first of these, laid in 1956, carried only 36 simultaneous conversations; by 1965 a single cable could carry about 130). In fact, in 1960 scientists at Bell Laboratories transmitted a communication signal coast to coast across the United States by bouncing it off the moon. Unfortunately, the moon wouldn’t sit still! By 1965, the first commercial communication satellites (such as Early Bird) were deployed.

Interestingly, in 1961 the Bell system proposed a new telecommunications service called TELPAK, which it claimed would lead to an “electronic highway” for communication, but it never pursued the idea. Could this have been a portent of the “information superhighway” of the 1990s?

The year 1969 brought an event whose full significance was not realized until more than two decades later: the development of the ARPANET packet-switching network. ARPANET was a project of the U.S. Department of Defense’s Advanced Research Projects Agency (ARPA), which became DARPA in 1972. Similar efforts were underway in France and the United Kingdom, but the U.S. project evolved into the present-day Internet. (France’s Minitel packet-switching system, which was based on the X.25 protocol and which aimed to bring data networking into every home, did take off in 1984 when the French government started giving away Minitel terminals; by the early 1990s, more than 20 percent of the country’s population was using it.) The original ARPANET connected computers at the Stanford Research Institute (SRI), the University of California at Los Angeles (UCLA), the University of California at Santa Barbara (UCSB), and the University of Utah, with the first node installed at UCLA’s Network Measurements Center. A year later, Harvard, the Massachusetts Institute of Technology (MIT), and a few other institutions were added, but few of those involved realized that this technical experiment would someday profoundly affect business and society.

1969 also saw the publication of the first RFC document. The informal RFC process evolved into the primary means of directing the evolution of the Internet. The earliest RFCs documented the host software and protocols of ARPANET, including the Network Control Protocol (NCP), which became the first transport protocol of ARPANET.

That same year, Bell Laboratories developed the UNIX operating system, a multitasking, multi-user NOS that became popular in academic computing environments in the 1970s. A typical UNIX system in 1974 was a PDP-11 minicomputer with dumb terminals attached. In a configuration with 768 KB of magnetic core memory and a couple of 200-MB hard disks, the cost of such a system would have been around $40,000. (I remember working on a UNIX system in the Cyclotron lab of my university’s physics department, feeding in bits of punched tape and watching lights flash. It was awesome.)

Standards for computer networking also evolved during the 1960s. In 1962, IBM introduced the first 8-bit character encoding system, called Extended Binary-Coded Decimal Interchange Code (EBCDIC). A year later, the competing American Standard Code for Information Interchange (ASCII) was introduced. ASCII ultimately won out over EBCDIC even though EBCDIC was 8-bit while ASCII was only 7-bit. ASCII was formally standardized by the American National Standards Institute (ANSI) in 1968. ASCII was first used in serial transmission between mainframe hosts and dumb terminals in mainframe computing environments, but it was eventually extended to all areas of computer and networking technologies.

Other developments in the 1960s included the release in 1964 of IBM’s powerful System/360 mainframe computing environment, which was widely implemented in government, university, and corporate computing centers. (IBM had introduced the first disk storage system, which employed 50 two-foot-wide metal platters and had a storage capacity of 5 MB, back in 1956.) IBM created the first floppy disk in 1967. In 1970, Intel released a RAM chip that stored 1 kilobit of information, which at the time was an amazing feat of engineering.

1970s

While the 1960s were the decade of the mainframe, the 1970s gave rise to Ethernet, which today is by far the most popular LAN technology. Ethernet was born in 1973 in Xerox’s research lab in Palo Alto, California. (An earlier experimental network called ALOHAnet was developed in 1970 at the University of Hawaii.) The original Xerox networking system was known as X-wire and worked at 2.94 Mbps. X-wire was experimental and was not used commercially, although a number of Xerox Alto workstations for word processing were networked together in the White House using X-wire during the Carter administration. In 1979, Digital Equipment Corporation (DEC), Intel, and Xerox formed the DIX consortium and developed the specification for standard 10-Mbps Ethernet, or thicknet, which was published in 1980. This standard was revised and additional features were added in the following decade.

The conversion of the backbone of the Bell telephone system to digital circuitry continued during the 1970s and included the deployment in 1974 of the first digital data service (DDS) circuits (then called the Dataphone Digital Service). DDS formed the basis of the later deployment of ISDN and T1 lines to customer premises. AT&T installed its first digital switch in 1976.

In wide area networking, a new telecommunications service called X.25 was deployed toward the end of the decade. This system was packet-switched, in contrast to the circuit-switched PSTN, and later evolved into public X.25 networks such as GTE’s Telenet Public Packet Distribution Network (PDN), which later became Sprintnet. X.25 was widely deployed in Europe, where it still maintains a large installed base.

In 1970, the Federal Communications Commission (FCC) announced the regulation of the fledgling cable television industry. (Cable TV remained primarily a broadcast technology for delivering entertainment to residential homes until the mid-1990s, when technologies began to be developed to enable it to carry broadband Internet access services to residential subscribers.)

Despite all the technological advances, however, telecommunications services in the 1970s remained unintegrated, with voice, data, and entertainment carried on different media. Voice was carried by telephone, which was still analog at the customer premises; entertainment was broadcast using radio and television technologies; and data was usually carried over RS-232 or Binary Synchronous Communication (BSC) serial connections between dumb terminals and mainframes (or, for remote terminals, long-haul modem connections over analog telephone lines).

The 1970s were also notable for the birth of ARPANET, the precursor to the Internet, which was first deployed in 1969 and grew throughout the decade as additional hosts were added at various universities and government institutions. By 1971, the network had 19 nodes, mostly consisting of a mix of PDP-8, PDP-11, IBM S/360, DEC-10, Honeywell, and other mainframe and minicomputer systems linked together. The initial design of ARPANET called for a maximum of 256 nodes, which seemed like a distant target in the early 1970s. The initial protocol used on this network was NCP, but it was replaced in 1983 by the more powerful TCP/IP protocol suite. In 1975, the administration of ARPANET came under the authority of the Defense Communications Agency.

ARPANET protocols and technologies continued to evolve using the informal RFC process. In 1972, the Telnet protocol was defined in RFC 318, followed by FTP in 1973 (RFC 454). ARPANET became an international network in 1973 when nodes were added at University College London in the United Kingdom and at NORSAR, the Norwegian Seismic Array research establishment, in Norway. ARPANET even established an experimental wireless packet-switching radio service in 1977, which two years later became the Packet Radio Network (PRNET).

Meanwhile, in 1974 the first specification for the Transmission Control Protocol (TCP) was published. Progress on the TCP/IP protocols continued through several iterations until the basic TCP/IP architecture was formalized in 1978, but it wasn’t until 1983 that ARPANET started using TCP/IP as its primary networking protocol instead of NCP.

1977 saw the development of UNIX to UNIX Copy (UUCP), a protocol and tool for sending messages and transferring files on UNIX-based networks. An early version of the USENET news system using UUCP was developed in 1979. (NNTP came much later, in 1987.)

In 1979, the first commercial cellular phone system began operation in Japan. This system was analog in nature, worked in the 800-MHz and 900-MHz frequency bands, and was based on a concept developed in 1947 at Bell Laboratories.

An important standard to emerge in the 1970s was the public-key cryptography scheme developed in 1976 by Whitfield Diffie and Martin Hellman. This scheme underlies the Secure Sockets Layer (SSL) protocol developed by Netscape Communications, which is now the predominant scheme for ensuring privacy and integrity of financial and other transactions over the World Wide Web (WWW). Without this scheme, popular e-business sites such as Amazon.com would have a hard time attracting customers.

In miscellaneous developments in the 1970s, IBM researchers invented the relational database in 1970, a set of conceptual technologies that has become the foundation of today’s distributed application environments. In 1971, IBM demonstrated the first speech recognition technologies, which have since led to automated call handling systems in customer service centers. IBM developed the concept of the virtual machine in 1972 and created the first sealed disk drive (the Winchester) in 1973. IBM introduced SNA for networking in its mainframe computing environment in 1974. In 1971, Intel released its first microprocessor, a 4-bit processor called the 4004 that ran at a clock speed of 108 kHz. The online service CompuServe was launched in 1979.

The first personal computer, the Altair, went on the market as a kit in 1975. The Altair was based on the Intel 8080, an 8-bit processor, and came with 256 bytes of memory, toggle switches, and LED lights. While the Altair was basically for hobbyists, the Apple II from Apple Computer, which was introduced in 1977, was much more. A typical Apple II system, which was based on the MOS Technology 6502 8-bit processor, had 4 KB of RAM, a keyboard, a motherboard with expansion slots, built-in BASIC in ROM, and color graphics. The Apple II quickly became the standard desktop system in schools and other educational institutions. A physics classroom I taught in had one all the way into the early 1990s (limited budget!). However, it wasn’t until the introduction of the IBM Personal Computer (PC) in 1981 that the full potential of personal computers began to be realized, especially in businesses.

In 1975, Bill Gates and Paul Allen licensed their BASIC programming language to MITS, the manufacturer of the Altair. Their BASIC was the first programming language written specifically for a personal computer. Gates and Allen coined the name “Micro-soft” for their business partnership, and they officially registered it as a trademark the following year. Microsoft went on to license BASIC to other personal computing platforms such as the Commodore PET and the TRS-80.

1980s

In the 1980s, the growth of client/server LAN architectures continued while that of mainframe computing environments declined. The advent of the IBM PC in 1981 and the standardization and cloning of this system led to an explosion of PC-based LANs in businesses and corporations around the world, especially with the release of the IBM PC AT system in 1984. The number of PCs in use grew from 2 million in 1981 to 65 million in 1991. Novell, which came on the scene in 1983, became a major player in file and print servers for LANs with its Novell NetWare NOS.

However, the biggest development in the area of LAN networking in the 1980s was the evolution and standardization of Ethernet. While the DIX consortium worked on standard Ethernet in the late 1970s, the IEEE began its Project 802 initiative, which aimed to develop a single, unified standard for all LANs. When it became clear that this was impossible, 802 was divided into a number of working groups, with 802.3 focusing on Ethernet, 802.4 on Token Bus, and 802.5 on Token Ring technologies and standards. The work of the 802.3 group culminated in 1983 with the release of the IEEE 802.3 10Base5 Ethernet standard, which was called thicknet because it used thick coaxial cable and which was virtually identical to the work already done by DIX. (In the designation 10Base5, the 10 stands for 10 Mbps, Base for baseband signaling, and 5 for the 500-meter maximum segment length.) In 1985, this standard was extended as 10Base2 to include thin coaxial cable, commonly called thinnet, with a maximum segment length of 185 meters.

Throughout most of the 1980s, coaxial cable was the primary form of premises cabling in Ethernet implementations. However, a company called SynOptics Communications developed a product called LattisNet for transmitting 10-Mbps Ethernet over twisted-pair wiring using a star-wired topology connected to a central hub or repeater. This wiring was cheaper than coaxial cable and was similar to the wiring used in residential and business telephone systems. LattisNet was such a commercial success that the 802.3 committee approved a new standard 10BaseT for Ethernet over twisted-pair wiring in 1990. 10BaseT soon superseded the coaxial forms of Ethernet because of its ease of installation and because it could be installed in a hierarchical star-wired topology that was a good match for the architectural topology of multistory buildings.

In other Ethernet developments, fiber-optic cabling, which was first developed in the early 1970s by Corning, found its first commercial networking application in Ethernet networking in 1984. (The technology itself was standardized as 10BaseFL in the early 1990s.) Ethernet bridges became available in 1984 from DEC and were used both to connect separate Ethernet LANs to make large networks and to reduce traffic bottlenecks on overloaded networks by splitting them into separate segments. Routers could be used for similar purposes, but bridges generally offered better price and performance during the 1980s, as well as reduced complexity. Again, market developments preceded standards as the IEEE 802.1D Bridge Standard, which was initiated in 1987, was not standardized until 1990.

The development of the Network File System (NFS) by Sun Microsystems in 1985 resulted in a proliferation of diskless UNIX workstations with built-in Ethernet interfaces that also drove the demand for Ethernet and accelerated the deployment of bridging technologies for segmenting LANs. Also around 1985, increasing numbers of UNIX machines and LANs were connected to ARPANET, which until that time had been mainly a network of mainframe and minicomputer systems. The first UNIX implementation of TCP/IP came in version 4.2 of Berkeley’s BSD UNIX (4.2BSD), from which other vendors such as Sun Microsystems quickly ported their own versions of TCP/IP. Although PC-based LANs became popular in business and corporate settings during the 1980s, UNIX continued to dominate in academic and professional high-end computing environments as the mainframe environment declined.

IBM introduced its Token Ring networking technology in 1985 as an alternative LAN technology to Ethernet. IBM had submitted its technology to the IEEE in 1982 and it was standardized by the 802.5 committee in 1984. IBM soon supported the integration of Token Ring with its existing SNA networking services and protocols for IBM mainframe computing environments. The initial Token Ring specifications delivered data at 1 Mbps and 4 Mbps; IBM dropped the 1-Mbps version in 1989 when it introduced a newer 16-Mbps version. Interestingly, no formal IEEE specification exists for 16-Mbps Token Ring—vendors simply adopted IBM’s technology for the product. Since then, advances in the technology have included high-speed 100-Mbps Token Ring and Token Ring switching technologies that support virtual LANs (VLANs). Nevertheless, Ethernet remains far more widely deployed than Token Ring.

Also in the field of local area networking, the American National Standards Institute (ANSI) began standardizing the specifications for Fiber Distributed Data Interface (FDDI) in 1982. FDDI was designed to be a high-speed (100 Mbps) fiber-optic networking technology for LAN backbones on campuses and industrial parks. The final FDDI specification was completed in 1988, and deployment in campus LAN backbones grew during the late 1980s and the early 1990s.

In 1983, the ISO developed an abstract seven-layer model for networking called the Open Systems Interconnection (OSI) reference model. Although some commercial networking products were developed based on OSI protocols, the standard never really took off, primarily because of the predominance of TCP/IP. Other standards from the ISO and ITU that emerged in the 1980s included the X.400 electronic messaging standards and the X.500 directory recommendations.

A major event in the telecommunications and WAN field in 1984 was the divestiture of AT&T as the result of a seven-year antitrust suit brought against AT&T by the U.S. Justice Department. AT&T’s 22 Bell operating companies were formed into seven new RBOCs. This meant the end of the Bell system, and the RBOCs soon formed Bellcore as a telecommunications research establishment to replace Bell Laboratories, which remained with AT&T after the breakup. The United States was divided into Local Access and Transport Areas (LATAs), with intra-LATA communication handled by local exchange carriers (the Bell Operating Companies or BOCs) and inter-LATA communication handled by inter-exchange carriers (IXCs) such as AT&T, MCI, and Sprint.

The result of the breakup for wide area networking was increased competition, which led to new technologies and lower costs. One of the first effects was the offering of T1 services to subscribers in 1984. Until then, this technology had been used only for backbone circuits for long-distance communication. New hardware devices were offered to take advantage of the increased bandwidth, especially high-speed T1 multiplexers, or muxes, that could combine voice and data into a single communication stream. 1984 also saw the development of digital Private Branch Exchange (PBX) systems by AT&T, bringing new levels of power and flexibility to corporate subscribers.

The Signaling System #7 (SS7) digital signaling system was first deployed within the PSTN in the 1980s, first in Sweden and later in the United States. SS7 made new telephony services such as caller ID, call blocking, and automatic callback available to subscribers.

The first trials of ISDN, a fully digital telephony technology that runs on existing copper local loop lines, began in Japan in 1983 and in the United States in 1987. (All major metropolitan areas in the United States have since been upgraded to make ISDN available to those who want it, but ISDN has not caught on as a WAN technology as much as it has in Europe.)

In the 1980s, fiber-optic cabling emerged as a networking and telecommunications medium. In 1988, the first fiber-optic transatlantic undersea cable was laid and increased the capacity of the transatlantic communication system manyfold.

The 1980s also saw the standardization of SONET technology, a high-speed physical layer (PHY) fiber-optic networking technology developed from time-division multiplexing (TDM) digital telephone system technologies. Before the divestiture of AT&T in 1984, local telephone companies had to interface their own TDM-based digital telephone systems with the proprietary TDM schemes of long-distance carriers, and incompatibilities created many problems. This provided the impetus for creating the SONET standard, which was finalized in 1989 through a series of CCITT (known in English as the International Telegraph and Telephone Consultative Committee) standards called G.707, G.708, and G.709. By the mid-1990s, almost all long-distance telephone traffic in the United States used SONET on trunk lines as the physical interface.

The 1980s brought the first test implementations of Asynchronous Transfer Mode (ATM) high-speed cell-switching technologies, which could use SONET as the physical interface. Many concepts basic to ATM were developed in the early 1980s at the France-Telecom laboratory in Lannion, France, particularly the PRELUDE project, which demonstrated the feasibility of end-to-end ATM networks running at 62 Mbps. The 53-byte ATM cell format was standardized by the CCITT in 1988, and the new technology was given a further push with the creation of the ATM Forum in 1991. Since then, use of ATM has grown significantly in telecommunications provider networks and has become a high-speed backbone technology in many enterprise-level networks around the world. However, the vision of ATM on users’ desktops has not been realized because of the emergence of cheaper Fast Ethernet and Gigabit Ethernet LAN technologies, and because of the complexity of ATM itself.

The convergence of voice, data, and broadcast information remained a distant vision throughout the 1980s and was even set back because of the proliferation of networking technologies, the competition between cable and broadcast television, and the slow adoption of residential ISDN. New services did appear, however, especially in the area of commercial online services such as America Online (AOL), CompuServe, and Prodigy, which offered consumers e-mail, bulletin board systems (BBS’s), and other services.

A significant milestone in the development of the Internet occurred on January 1, 1983, when the networking protocol of ARPANET was switched from NCP to TCP/IP. On that day NCP was turned off permanently—anyone who hadn’t migrated to TCP/IP was out of luck. ARPANET, which connected several hundred systems, was split into two parts, ARPANET and MILNET.

The first international use of TCP/IP took place in 1984 at CERN, a physics research center in Geneva, Switzerland. TCP/IP was designed to provide a way of networking different computing architectures in heterogeneous networking environments. Such a protocol was badly needed because of the proliferation of vendor-specific networking architectures in the preceding decade, including “homegrown” solutions developed at many government and educational institutions. TCP/IP made it possible to connect diverse architectures such as UNIX workstations, VMS minicomputers, and CRAY supercomputers into a single operational network. TCP/IP soon superseded proprietary protocols such as Xerox Network Systems (XNS), ChaosNet, and DECnet. It has since become the de facto standard for internetworking all types of computing systems.

CERN was primarily a research center for high-energy particle physics, but it became an early European pioneer of TCP/IP and by 1990 was the largest subnetwork of the Internet in Europe. In 1989, a CERN researcher named Timothy Berners-Lee developed the Hypertext Transfer Protocol (HTTP) that formed the basis of the World Wide Web (WWW). And all of this developed as a sidebar to the real research that was being done at CERN—slamming together protons and electrons at high speeds to see what fragments appear!

Also important to the development of Internet technologies and protocols was the introduction of the Domain Name System (DNS) in 1984. At that time, ARPANET had more than 1000 nodes, and trying to remember them by their numerical IP addresses was a headache. NNTP was developed in 1987, and Internet Relay Chat (IRC) was developed in 1988.

Other systems paralleling ARPANET were developed in the early 1980s, including the research-oriented Computer Science NETwork (CSNET), and the Because It’s Time NETwork (BITNET), which connected IBM mainframe computers throughout the educational community and provided e-mail services. Gateways were set up in 1983 to connect CSNET to ARPANET, and BITNET was similarly connected to ARPANET. In 1989, BITNET and CSNET merged into the Corporation for Research and Educational Networking (CREN).

In 1986, the National Science Foundation NETwork (NSFNET) was created. NSFNET networked together the five national supercomputing centers using dedicated 56-Kbps lines. The connection was soon seen as inadequate and was upgraded to 1.544-Mbps T1 lines in 1988. In 1987, NSF and Merit Networks agreed to jointly manage the NSFNET, which had effectively become the backbone of the emerging Internet. By 1989, the Internet had grown to more than 100,000 hosts, and the Internet Engineering Task Force (IETF), formed in 1986, was administering its development. In 1990, NSFNET officially replaced the aging ARPANET and the modern Internet was born, with more than 20 countries connected.

Cisco Systems was one of the first companies in the 1980s to develop and market routers for Internet Protocol (IP) internetworks, a business that today is worth billions of dollars and is a foundation of the Internet. Hewlett-Packard was Cisco’s first customer for its routers, which were originally called gateways.

In wireless telecommunications, analog cellular was implemented in Norway and Sweden in 1981. Systems were soon rolled out in France, Germany, and the United Kingdom. The first U.S. commercial cellular phone system, which was named the Advanced Mobile Phone Service (AMPS) and operated in the 800-MHz frequency band, was introduced in 1983. By 1987, the United States had more than 1 million AMPS cellular subscribers, and higher-capacity digital cellular phone technologies were being developed. The Telecommunications Industry Association (TIA) soon developed specifications and standards for digital cellular communication technologies.

A landmark event that was largely responsible for the phenomenal growth in the PC industry (and hence the growth of the client/server model and local area networking) was the release of the first version of Microsoft’s text-based, 16-bit MS-DOS operating system in 1981. Microsoft, which had become a privately held corporation with Bill Gates as president and chairman of the board and Paul Allen as executive vice president, licensed MS-DOS 1 to IBM for its PC. MS-DOS continued to evolve and grow in power and usability until its final version, MS-DOS 6.22, which was released in 1994. One year after the first version of MS-DOS was released in 1981, Microsoft had its own fully functional corporate network, the Microsoft Local Area Network (MILAN), which linked a DEC 2060, two PDP-11/70s, a VAX-11/750, and a number of MC68000 machines running XENIX. This setup was typical of heterogeneous computer networks in the early 1980s.

In 1983, Microsoft unveiled its strategy to develop a new operating system called Microsoft Windows with a graphical user interface (GUI). Version 1 of Windows, which shipped in 1985, used a system of tiled windows and could work with several applications simultaneously by switching between them. Version 2 was released in 1987 and added overlapping windows and support for expanded memory.

Microsoft launched its SQL Server relational database server software for LANs in 1988. In its current version 7, SQL Server is an enterprise-class application that competes with other major database platforms such as Oracle and DB2. IBM and Microsoft jointly released their OS/2 operating system in 1987 and released OS/2 1.1 with Presentation Manager a year later.

In miscellaneous developments, IBM researchers developed the Reduced Instruction Set Computing (RISC) processor architecture in 1980. Apple Computer introduced its Macintosh computing platform in 1984 (the successor of its Lisa system), which introduced a windows-based GUI that became the precursor to Microsoft Windows. Apple also introduced the 3.5-inch floppy disk in 1984. CD-ROM technology was developed by Sony and Philips in 1985. (Recordable CD-R technologies were developed in 1991.) IBM released its AS/400 midrange computing system in 1988, which continues to be popular to this day.

1990s

The 1990s were a busy decade in every aspect of networking, so we’ll only touch on the highlights here. Ethernet continued to dominate LAN technologies and largely eclipsed competing technologies such as Token Ring and FDDI. In 1991, Kalpana Corporation began marketing a new form of bridge called a LAN switch, which gave each of its ports the full bandwidth of the LAN instead of sharing that bandwidth among all attached stations. Later called Ethernet switches or Layer 2 switches, these devices quickly found a niche in providing dedicated high-throughput links for connecting servers to network backbones.

The rapid growth of computer networks and the rise of bandwidth-hungry applications created a need for something faster than 10-Mbps Ethernet, especially on network backbones. The first full-duplex Ethernet products, offering speeds of 20 Mbps, became available in 1992. In 1995, work began on a standard for full-duplex Ethernet; it was finalized in 1997. A more important development was Grand Junction Networks’ commercial 100-Mbps Ethernet hardware, introduced in 1992. Spurred by this commercial advance, the 802.3 group produced the 802.3u 100BaseT Fast Ethernet standard in 1995 for transmission of data at 100 Mbps over both twisted-pair copper wiring and fiber-optic cabling.

Although the jump from 10-Mbps to 100-Mbps Ethernet took almost 15 years, a year after the 100BaseT Fast Ethernet standard was released work began on a 1000-Mbps version of Ethernet popularly known as Gigabit Ethernet. Fast Ethernet was beginning to be deployed at the desktop, and this was putting enormous strain on the FDDI backbones that were deployed on many commercial and university campuses. FDDI also operated at 100 Mbps (or 200 Mbps if fault tolerance was discarded in favor of carrying traffic on the redundant ring), so a single Fast Ethernet desktop connection could theoretically saturate the capacity of the entire network backbone.

ATM, a broadband cell-switching technology used primarily in WANs and in telecommunications environments, was considered as a possible successor to FDDI for backboning Ethernet networks, and LAN emulation (LANE) was developed to carry LAN traffic such as Ethernet over ATM. However, ATM is more difficult to install and maintain than Ethernet, and a number of companies saw extending Ethernet speeds to 1000 Mbps as a way to provide network backbones with much greater capacity using technology that most network administrators were already familiar with. As a result, the 802 working group called 802.3z developed a Gigabit Ethernet standard called 1000BaseX, which it released in 1998. Gigabit Ethernet is now widely deployed, and work is underway on extending Ethernet technologies to 10 Gbps. A competitor of Gigabit Ethernet for high-speed collapsed backbone interconnects, Fibre Channel, was conceived by an ANSI committee in 1988 and has become a viable alternative.

The 1990s have seen huge changes in the landscape of telecommunications providers and their services. “Convergence” became a major buzzword, signifying the combining of voice, data, and broadcast information into a single medium for delivery to businesses and consumers through broadband technologies such as Broadband ISDN (B-ISDN), variants of DSL, and cable modem systems. Voice over IP (VoIP) became the avowed goal of many vendors, who promised businesses huge savings by routing voice telephone traffic over IP networks. The technology works, but the bugs are still being ironed out and deployments are still slow.

The Telecommunications Act of 1996 was designed to spur competition in all aspects of the U.S. telecommunications market by allowing the RBOCs access to long-distance services. The result has been an explosion in technologies and services, with mergers and acquisitions changing the nature of the provider landscape. The legal fallout from all this is still settling.

The first public frame relay packet-switching services were offered in North America in 1992. Companies such as AT&T and Sprint installed a network of frame relay nodes across the United States in major cities, where corporate networks could connect to the service through their local telco. Frame relay began to eat significantly into the deployed base of more expensive dedicated leased lines such as the T1 or E1 lines that businesses used for their WAN solutions, resulting in lower prices for these leased lines and greater flexibility of services. In Europe, frame relay has been deployed much more slowly, primarily because of the widespread deployment of packet-switching networks such as X.25.

The cable modem was introduced in 1996, and by the end of the decade broadband residential Internet access through cable television systems had become a strong competitor with telephone-based systems such as Asymmetric Digital Subscriber Line (ADSL) and G.Lite, another variant of DSL.

In 1997, the World Trade Organization (WTO) ratified the Information Technology Agreement (ITA), which mandated that participating governments eliminate all tariffs on information technology products by the next millennium. Other WTO initiatives promise to similarly open up telecommunications markets worldwide.

The decade saw a veritable explosion in the growth of the Internet and the development of Internet technologies. As mentioned earlier, ARPANET was replaced in 1990 by NSFNET, which by then was commonly called the Internet. At the beginning of the 1990s, the Internet’s backbone consisted of 1.544-Mbps T1 lines connecting various institutions, but in 1991 the process of upgrading these lines to 44.736-Mbps T3 circuits began. By the time the Internet Society (ISOC) was chartered in 1992, the Internet had grown to an amazing 1 million hosts on almost 10,000 connected networks. In 1993, the NSF created the Internet Network Information Center (InterNIC) as a governing body for DNS registration. In 1995, the NSF stopped sponsoring the Internet backbone and NSFNET went back to being a research and educational network. Internet traffic in the United States was routed through a series of interconnected commercial network providers.

The first commercial Internet service providers (ISPs) emerged in the early 1990s when the NSF removed its restrictions against commercial traffic on the NSFNET. Among them were Performance Systems International (PSI), UUNET, MCI, and Sprintlink. (The first public dial-up ISP was actually The World, whose URL was www.world.std.com.) In the mid-1990s, commercial online networks such as AOL, CompuServe, and Prodigy provided gateways to the Internet to subscribers. Later in the decade, Internet deployment grew exponentially, with personal Internet accounts proliferating by the tens of millions around the world, new technologies and services developing, and new paradigms evolving for the economy and business. It’s almost too early to write about these things with suitable perspective—maybe I’ll wait until the next edition.

Many Internet technologies and protocols have come and gone quickly. Archie, an FTP search engine developed in 1990, is hardly used today. The WAIS protocol for indexing, storing, and retrieving full-text documents, which was developed in 1991, has been eclipsed by Web search technologies. Gopher, which was created in 1991, grew to a worldwide collection of interconnected file systems, but most Gopher servers have been turned off. Veronica, the Gopher search tool developed in 1992, is obviously obsolete as well. Jughead later supplemented Veronica but has also become obsolete. (There was never a Betty.)

The most obvious success story among Internet protocols has been HTTP, which, together with HTML and the system of URLs for addressing, has formed the basis of the Web. Timothy Berners-Lee and his colleagues created the first Web server (whose fully qualified DNS name was info.cern.ch) and Web browser software using the NeXT computing platform that was developed by Apple pioneer Steve Jobs. This software was ported to other platforms, and by the end of the century more than 2 million registered Web servers were running.

Lynx, a text-based Web browser, was developed in 1992, and I personally know that it was still used in some rural areas with slow Internet connections as late as 1996. Mosaic, the first graphical Web browser, was developed in 1993 by Marc Andreessen for the UNIX X Windows platform while he was a student at the National Center for Supercomputing Applications (NCSA). At that time, there were only about 50 known Web servers, and HTTP traffic amounted to only about 0.1 percent of the Internet’s traffic. Andreessen left school to start Netscape Communications, which released its first version of Netscape Navigator in 1994. Microsoft Internet Explorer 2 for Windows 95 was released in 1995 and rapidly became Netscape Navigator’s main competition. In 1995, Bill Gates announced Microsoft’s wide-ranging commitment to support and enhance all aspects of Internet technologies through innovations in the Windows platform, culminating in 1998 in Internet Explorer being completely integrated into the Windows 98 operating system. Another initiative in this direction was Microsoft’s announcement in 1996 of its ActiveX technologies, a set of tools for active content such as animation and multimedia for the Internet and the PC.

In wireless telecommunications, the work of the TIA resulted in 1991 in the first standard for digital cellular communication, the TDMA Interim Standard 54 (IS-54). Digital cellular was badly needed because the analog cellular subscriber market in the United States had grown to 10 million subscribers in 1992 and 25 million subscribers in 1995. The first tests of this Time Division Multiple Access (TDMA) technology took place in Dallas, Texas, and in Sweden, and were a success. This standard was revised in 1994 as TDMA IS-136, which is commonly referred to as Digital Advanced Mobile Phone Service (D-AMPS).

Meanwhile, two competing digital cellular standards also appeared. The first was the CDMA IS-95 standard for CDMA cellular systems based on spread spectrum technologies, which was first proposed by QUALCOMM in the late 1980s and was standardized by the TIA as IS-95 in 1993. Standards preceded implementation, however; it wasn’t until 1996 that the first commercial CDMA cellular systems were rolled out.

The second system was the GSM standard developed in Europe. (GSM originally stood for Groupe Spécial Mobile.) GSM was first envisioned in the 1980s as part of the movement to unify the European economy, and the final air interface was determined in 1987 by the European Telecommunications Standards Institute (ETSI). Phase 1 of GSM deployment began in Europe in 1991. Since then, GSM has become the predominant system for cellular communication in over 60 countries in Europe, Asia, Australia, Africa, and South America, with over 135 mobile networks implemented. However, GSM implementation in the United States did not begin until 1995.

In the United States, the FCC began auctioning off portions of the 1900-MHz frequency band in 1994. Thus began the development of the higher-frequency Personal Communications System (PCS) cellular phone technologies, which were first commercially deployed in the United States in 1996.

Establishment of worldwide networking and communication standards continued apace in the 1990s. For example, version 2 of the Unicode character set, which can represent any language of the world in 16-bit characters, was released in 1996, and Unicode has since been adopted by all major operating system vendors.

In client/server networking, Novell in 1994 introduced Novell NetWare 4, which included the new Novell Directory Services (NDS), then called NetWare Directory Services. NDS offered a powerful tool for managing hierarchically organized systems of network file and print resources and for managing security elements such as users and groups.

In other developments, the U.S. Air Force launched the twenty-fourth satellite of the Global Positioning System (GPS) constellation in 1994, making possible precise terrestrial positioning using handheld satellite communication systems. RealNetworks released its first software in 1995, the same year that Sun Microsystems announced the Java programming language, which has grown in a few short years to rival C/C++ in popularity for developing distributed applications. Amazon.com was launched in 1995 and has become a colossus of cyberspace retailing in a few short years. Microsoft WebTV was introduced in 1997 and is beginning to make inroads into the residential Internet market.

Finally, the 1990s were, in a very real sense, the decade of Microsoft Windows. No other technology has had as vast an impact on ordinary computer users as Windows, which brought to homes and workplaces the power of PC computing and the opportunity for client/server computer networking. Version 3 of Microsoft Windows, which was released in 1990, brought dramatic increases in performance and ease of use over earlier versions, and Windows 3.1, released in 1992, quickly became the standard desktop operating system for both corporate and home users. Windows for Workgroups 3.1 quickly followed that same year. It integrated networking and workgroup functionality directly into the Windows operating system, allowing Windows users to use the corporate computer network for sending e-mail, scheduling meetings, sharing files and printers, and performing other collaborative tasks. In fact, it was Windows for Workgroups that brought the power of computer networks from the back room to users’ desktops, allowing them to perform tasks previously only possible for network administrators.

In 1992, Microsoft released the first beta version of its new 32-bit network operating system, Windows NT. In 1993 came MS-DOS 6, as Microsoft continued to support users of text-based computing environments. That was also the year that Windows NT and Windows for Workgroups 3.11 (the final version of 16-bit Windows) were released. In 1995 came the long-awaited release of Windows 95, a fully integrated 32-bit desktop operating system designed to replace MS-DOS, Windows 3.1, and Windows for Workgroups 3.11 as the mainstream desktop operating system for personal computing. Following in 1996 was Windows NT 4, which included enhanced networking services and a new Windows 95–style user interface. Windows 95 was superseded by Windows 98, which included full integration of Web services.

And finally, at the turn of the millennium came the long-anticipated successor to Windows NT, the Windows 2000 family of operating systems, which includes Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server, and the soon-to-be-released Windows 2000 Datacenter Server. Together with Windows CE and embedded Windows NT, the Windows family has grown to encompass the full range of networking technologies, from embedded devices and personal digital assistants (PDAs) to desktop and laptop computers to heavy-duty servers running the most advanced, powerful, scalable, business-critical enterprise-class applications.

The Scope of This Encyclopedia

Now that you have a good idea of what networking is and how it has evolved, you can probably guess the scope of this encyclopedia. Most of the information in this encyclopedia can be grouped into three broad categories:

  • Information about networking concepts, technologies, standards, and hardware—including such things as Ethernet, Token Ring, bandwidth, latency, hubs, switches, OSI, and X.500. This is obviously the core of the encyclopedia.

  • Information about the Internet and its standards, protocols, services, architecture, and implementation—including such things as HTTP, SMTP, Web servers, Web browsers, TCP/IP, BGP, application service provider (ASP), and so on.

  • Information about Microsoft innovations and technologies, primarily those relating to the Windows NT and Windows 2000 network operating systems and BackOffice network applications. These include such things as Active Directory, Windows Clustering, Internet Information Services (IIS), NTFS permissions, and so on. Specific operating systems, applications, and networking innovations from other vendors are also covered.

Also included are numerous entries from areas that are somewhat peripheral to networking, including programming technologies such as Java and ActiveX, cryptography standards such as DES and PKCS, and Microsoft certification programs. Networking professionals are likely to encounter many of these terms in their work and reading, and they might be important to networking professionals in the coming decade.

Who This Work Is For

This work is intended both for novices pursuing the Microsoft Certified Systems Engineer (MCSE) certification and for experienced networking professionals who want to add to their knowledge of networking, the Internet, and Microsoft technologies. MCSE is the primary networking certification for Microsoft Certified Professionals (MCPs). MCSEs are qualified to plan, implement, maintain, and support information systems in a wide range of computing environments using Windows NT, Windows 2000, and BackOffice. By 1999, some 143,000 individuals worldwide held the MCSE designation.

Shipments of Windows NT Server have grown to more than 2 million a year, making Windows NT Server the world’s most popular network operating system. With this growth expected to continue with the release of Windows 2000 Server, the need for education and training of both new and existing MCSEs is great. Of course, many other professionals will find this work useful, including consultants, IT planners, system integrators, teachers, service providers, and those who market and sell Microsoft solutions, products, and services.

About the Entries

The entries in this work are, of course, in alphabetical order and range from short entries of a few lines to ones several pages long. Longer entries typically have an explanatory “How It Works” section, which elucidates key concepts. Drawings or screenshots are included in many entries to help explain key concepts. Where applicable, examples are provided under the heading “Examples.” Additional material not related to the core explanation is consigned to a “Note” section, and practical recommendations or suggestions about implementing networking technologies are listed in “Tip” sections. Cross-references to related entries are identified by the heading “See also” at the end of an entry. Links to URLs where you can find further information are listed under the heading “On the Web.”

I have made every effort to minimize technical jargon and have strived for accuracy and timeliness. However, technologies change so quickly in the networking field that a portion of the material will inevitably become outdated very soon. Microsoft Press provides updates and corrections for its books and other products on the Web at http://mspress.microsoft.com/support/.

If you have comments, questions, or ideas regarding this encyclopedia, please send them to Microsoft Press via e-mail at Mspinput@microsoft.com or at the following postal address:

Microsoft Press
Attn: Microsoft Encyclopedia of Networking Editor
One Microsoft Way
Redmond, WA 98052-6399

Please note that product support is not offered through the above addresses.

I wish you well in all your networking endeavors and hope that you find this a useful and professional reference work.

Mitch Tulloch, MCT, MCSE
www.mtit.com


