K

DAB

Stands for Digital Audio Broadcasting, a specification for broadband digital radio.

See Also Digital Audio Broadcasting (DAB)

DACL

Stands for discretionary access control list, an access control list (ACL) that can be configured by administrators.

See Also discretionary access control list (DACL)

daemon

The UNIX equivalent of a Microsoft Windows 2000, Windows XP, or Windows .NET Server service.

Overview

A daemon is a program associated with the UNIX operating system that runs in the background and performs some task without instigation from the user. An example of a daemon is the telnet daemon, which runs continuously in the background, waiting for a connection request from a telnet client. The telnet daemon facilitates the remote connection and makes it possible for the user to control the machine.

Daemon. How the nfsd daemon works.

Another example is the HTTPd daemon for the Apache Web server, which waits for Hypertext Transfer Protocol (HTTP) requests from Web browser clients and fulfills them.

A third example of a daemon is the nfsd daemon, which supports the remote file access aspect of the Network File System (NFS) in a UNIX environment. The nfsd daemon runs in the background on UNIX servers, waiting for remote procedure calls (RPCs) from NFS clients.

Daemons typically use RPCs for communication with clients. Because NFS is implemented as daemon processes at the user level instead of at the kernel level, multiple nfsd processes can safely run as independent threads of execution.
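
The following minimal Python sketch illustrates the general pattern a daemon such as telnetd or httpd follows: it runs in a loop in the background, waiting for incoming connection requests and servicing each one without any action from the user. The port number and reply text are arbitrary choices for the example, not part of any particular daemon.

    import socket

    # Listen on an arbitrary example port; a real daemon would use its assigned well-known port.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8023))
    listener.listen(5)

    while True:                          # run forever in the background
        conn, addr = listener.accept()   # block until a client connects
        conn.sendall(b"hello from the example daemon\r\n")
        conn.close()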

Notes

The Microsoft equivalent of daemon is service. For example, the Workstation service of Windows 2000 would be known in UNIX as a daemon instead of a service.

See Also service

daily copy

A type of tape backup in which only files and folders that have changed on that day are backed up.

Overview

When a daily copy is performed during a backup, files and folders modified on that day are backed up, but the archive attribute is not cleared (the files are not marked as having been backed up).

Daily copies are not a common type of backup operation. They are typically used only if a user wants to take home copies of the files she has been working on during the day. Few administrators would be willing to schedule and run the system backup software just to make copies of these files, so users taking advantage of this backup type usually have a locally attached backup device along with a similar device attached to their systems at home.

Daily copy backups are likely to be performed on media such as Iomega Zip or Jaz disks rather than on tape.

Notes

Daily copy backups are supported by the Microsoft Windows 2000 Backup utility.

See Also backup, backup type

D-AMPS

Stands for Digital Advanced Mobile Phone Service, the digital version of the Advanced Mobile Phone Service (AMPS) cellular communications system.

See Also Digital Advanced Mobile Phone Service (D-AMPS)

DAO

Stands for Data Access Objects, a Microsoft technology that enables you to use a programming language to access and manipulate data stored in both local and remote databases.

See Also Data Access Objects (DAO)

DAP

Stands for Directory Access Protocol, a protocol for accessing information in a directory service based on the X.500 recommendations.

See Also Directory Access Protocol (DAP)

dark fiber

Any fiber-optic cabling or fiber device such as a repeater that is installed but not currently in use.

Overview

When no light is being transmitted through fiber-optic cabling, it is called dark fiber since it is, in effect, dark. When a carrier or cabling company first provisions fiber, it is called dark fiber. Then once all the components of the system are installed, including connectors, amplifiers, repeaters, switches, routers, and such, the light can be turned on and the fiber is no longer dark. Dark fiber is thus simply unused fiber or fiber that is not yet ready to be used.

Before dark fiber is activated, the system is usually tested using an optical time domain reflectometer (which measures and analyzes a fiber link) and other measuring devices to determine whether the system has integrity, and to measure its bandwidth and attenuation parameters.

Implementation

When an enterprise needs a fiber connection for wide area usage, it can take three possible approaches:

Notes

Although the term dark fiber at first sounds like there is a problem in the fiber-optic cabling system, this is not the case. However, various problems can occur in a fiber-optic cabling system that can cause it to remain dark once the system is turned on. These can include the following:

See Also dense wavelength division multiplexing (DWDM), fiber-optic cabling

Data Access Objects (DAO)

A Microsoft technology that enables you to use a programming language to access and manipulate data stored in both local and remote databases.

Overview

Data Access Objects (DAO) lets you access and manage databases, along with their structure and objects, by providing a framework called an object model that uses code to create and manipulate different kinds of databases.

DAO supports two different interfaces, which are known as workspaces: Microsoft Jet workspaces, for accessing Jet (.mdb) databases, and ODBCDirect workspaces, for accessing ODBC data sources through Remote Data Objects (RDO) without loading the Jet database engine.

Prospects

DAO and RDO are both available now, but these technologies are being superseded by Microsoft ActiveX Data Objects (ADO) and Remote Data Service (RDS). All these components can be found in the Microsoft Data Access Software Development Kit.

See Also ActiveX Data Objects (ADO), open database connectivity (ODBC)

data alarm

A device for alerting network administrators to network problems relating to serial transmission.

Overview

Data alarms typically monitor serial lines such as RS-232 connections for the presence or absence of certain signals. For example, you can monitor the connection between a print server and its attached printer or between an access server and a modem or a Channel Service Unit/Data Service Unit (CSU/DSU).

Data alarm. Implementing a data alarm for alerting administrators to problems with serial links.

A data alarm can be a simple device that monitors one serial line for the presence or absence of data. If the data flow stops, a flashing LED or audible alarm signals the problem to the administrator. More complex data alarms can support multiple serial lines or other serial interfaces such as RS-449 and V.35, can have programmable functions and menu-driven commands, and can monitor other devices, such as transistor-transistor logic (TTL) devices. These more complex devices can be configured to dial a remote station when a problem arises and to generate a report of the condition or even activate an alphanumeric pager.

Notes

Some vendors use the term data alarm to describe a device that senses network problems associated with the flow of data in other networks such as Ethernet.

See Also network management

database

In its simplest form, a file used to store records of information, with each record containing multiple data fields. More generally, any application used to manage structured information.

Overview

Databases are an essential component of every business, representing the back-end of a company's information structure. Databases are used to store a broad range of information including inventory, customer contacts, sales records and invoices, catalogs, and so on. Without databases, modern businesses could not operate with the scope and range that they have, as they would be overwhelmed with information they could not manage.

Databases allow information to be stored, updated, manipulated, and queried. Sales figures for a given month can be extracted from a database using a standard programming language for building queries called Structured Query Language (SQL), versions of which are built into products from every database vendor.

Implementation

The most popular type of database is the relational database, in which the records are stored in tables that are related to each other using primary and foreign keys. A primary key is the field in each record that uniquely defines the record. (For example, a part number might be used as the primary key in a table that holds the price of each item a company sells.) A foreign key is a field in another table that matches the first table's primary key, creating a relationship between the two. An application for creating and managing relational databases is called a relational database management system (RDBMS).

Records are like the rows of a table. Each record is a collection of information about some physical system or logical object. Field names are like the column names of a table. Each field name represents a property or attribute of the system or object. Databases are widely used by businesses for storing information about inventory, orders, shipping, accounting, and so forth.
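
As a concrete sketch of these ideas, the following Python fragment uses the standard library's sqlite3 module to create two tables related by a primary key and a foreign key and then join them; the table and column names are invented for the example and are not drawn from any particular product.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")

    # part_no is the primary key of the parts table.
    db.execute("""CREATE TABLE parts (
                      part_no INTEGER PRIMARY KEY,
                      price   REAL NOT NULL)""")

    # order_items.part_no is a foreign key relating each order line to a part.
    db.execute("""CREATE TABLE order_items (
                      order_id INTEGER,
                      part_no  INTEGER REFERENCES parts(part_no),
                      qty      INTEGER)""")

    db.execute("INSERT INTO parts VALUES (1001, 19.95)")
    db.execute("INSERT INTO order_items VALUES (1, 1001, 3)")

    # A join follows the key relationship between the two tables.
    for row in db.execute("""SELECT o.order_id, p.part_no, p.price * o.qty
                             FROM order_items AS o JOIN parts AS p
                             ON o.part_no = p.part_no"""):
        print(row)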

Microsoft SQL Server is Microsoft Corporation's enterprise-level RDBMS. SQL Server databases are stored on devices. Each computer running SQL Server has four system databases plus one or more user databases installed on it. The system databases are master, model, msdb, and tempdb.

Marketplace

Major players in the enterprise database market include Microsoft Corporation with its SQL Server, Oracle Corporation with its Oracle 9i, IBM with its DB2, and others. These database products frequently compete for mindshare in the enterprise arena by surpassing each other in TPC-C benchmark figures.

Competing with these major players are open-source databases such as MySQL from NuSphere Corporation and PostgreSQL from Great Bridge, which run on the Linux and Berkeley Software Distribution (BSD) platforms and build upon the stability of those platforms.

Given the ubiquity and importance of databases to business, a wide spectrum of tools is available for planning, implementing, managing, tuning, monitoring, and troubleshooting database platforms. For example, pocketDBA has a module for the wireless Palm OS handheld device which allows an administrator to remotely monitor and manage Oracle databases using a Palm. Similar applications will soon be available for the PocketPC platform as well.

Notes

The term database can have different meanings for different vendors. In Oracle products, for example, database refers to the entire Oracle DBMS environment. In SQL Server, databases provide a logical separation of data, applications, and security mechanisms, but in Oracle this separation is achieved using tablespaces.

See Also SQL Server

database owner (DBO)

In Microsoft SQL Server, the user account that created the database and is responsible for managing administrative tasks related to the database.

Overview

Each SQL Server database is considered to be a self-contained administrative domain and is assigned a database owner (DBO) who is responsible for managing the permissions for the database and performing tasks such as backing up and restoring the database's information. The DBO also owns any database object, including tables, indexes, views, functions, and stored procedures.

Essentially, the DBO can do anything within the database. By default, the SA (system administrator) account is also a DBO account for any database on a computer running SQL Server. The DBO has full permissions inside a database that it owns.

Notes

To avoid the complexity of managing separate DBO accounts for each SQL Server database, you might want to perform all administration tasks, both server-wide and specific to the database, using only the SA account.

See Also SQL Server

data communications equipment (DCE)

Any device that supports data transmission over a serial telecommunications link.

Overview

Typically, data communications equipment (DCE) refers to analog and digital modems, Integrated Services Digital Network (ISDN) terminal adapters, Channel Service Units (CSU), multiplexers, and similar devices. The purpose of a DCE is to provide termination for the telecommunications link and to provide an interface for connecting data terminal equipment (DTE) to the link.

The term DCE specifically refers to serial transmission, which generally occurs over links such as a local loop Plain Old Telephone Service (POTS) connection, an ISDN line, or a T1 line. An example of a DCE is an analog modem, which provides a connection between a computer (the DTE) and the local loop POTS phone line (the serial transmission line). A DCE accepts a stream of serial data from a DTE and converts it to a form that is suitable for the particular transmission line medium being used. The DCE also works in reverse, converting data from the transmission line to a form the DTE can use.

On a Cisco router, the DCE interface is typically an RJ-45 jack on the back of the router.

See Also data terminal equipment (DTE), Integrated Services Digital Network (ISDN), serial transmission, T-carrier

Data Encryption Standard (DES)

The former U.S. government standard for encryption, now replaced by Advanced Encryption Standard (AES).

Overview

In 1972 the National Bureau of Standards called for proposals for an encryption standard to enable secure transmission of government documents by electronic means. IBM responded to the call with a 128-bit key algorithm called Lucifer, which was accepted in 1976, reduced to 56-bit key length, renamed Data Encryption Algorithm (DEA), and then further developed by the National Security Agency (NSA). In 1977, DEA was officially adopted as the Data Encryption Standard (DES) and became the official encryption standard of the U.S. government. DES is formally defined by Federal Information Processing Standard FIPS 46-1.

Implementation

DES is a symmetric encryption scheme in which both the sender and the receiver need to know the secret key in order to communicate securely. DES is based on a 56-bit key (actually a 64-bit key with 8 parity bits stripped off) that allows for approximately 7.2 × 10^16 possible keys. When a message is to be encrypted using DES, one of the available keys is chosen and applied in 16 rounds of permutations and substitutions to each 64-bit block of data in the message. Because DES encrypts information 64 bits at a time, it is known as a block cipher.
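
As an illustration of DES as a block cipher, the following Python sketch encrypts a single 64-bit (8-byte) block with an 8-byte key. It assumes the third-party PyCryptodome package is installed; the key and plaintext values are arbitrary examples.

    from Crypto.Cipher import DES   # provided by the PyCryptodome package (assumption)

    key = b"8bytekey"               # 64 bits, of which 56 are effective (8 are parity)
    cipher = DES.new(key, DES.MODE_ECB)

    plaintext = b"ABCDEFGH"         # exactly one 64-bit block
    ciphertext = cipher.encrypt(plaintext)
    print(ciphertext.hex())

    # Decryption with the same secret key recovers the original block.
    assert DES.new(key, DES.MODE_ECB).decrypt(ciphertext) == plaintext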

Issues

DES was originally designed for hardware encryption (dedicated devices that encrypt information at high speeds). The newer AES standard is more flexible in its application, and performs well in a variety of different environments from small-footprint devices such as smart cards to ordinary desktop computers running standard operating systems.

While the large number of keys available makes DES fairly secure, it was known early on that DES was crackable in theory. In 1977, Whitfield Diffie and Martin Hellman (developers of Diffie-Hellman public key encryption) proposed a DES cracking machine that could in principle be built with existing hardware at a cost of about $20 million. Then in 1997 a DES key was successfully cracked using the idle processing cycles of 14,000 computers cooperating over the Internet. The next year a DES cracking machine costing only $210,000 was created by the Electronic Frontier Foundation (EFF). The EFF machine was capable of cracking a DES key in about four days. Despite these accomplishments, DES is still viewed in practice as a secure encryption algorithm because so far no method for cracking DES keys other than simple brute force (trying every possible key) has been found.

Notes

A more secure variant of DES, Triple DES, encrypts each message using three different 56-bit keys in succession. Triple DES thus extends the effective DES key to 168 bits in length, providing for a total of approximately 3.7 × 10^50 different keys. Unfortunately, Triple DES is a relatively slow encryption mechanism that is not suitable for some situations, such as cell phones having limited processing power.
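
The key-space figures above can be checked with a couple of lines of Python; this is simple arithmetic, not part of any DES library.

    # Number of possible keys for a 56-bit key (single DES) and a 168-bit key (Triple DES).
    print(f"DES keys:        {2**56:.1e}")   # about 7.2e16
    print(f"Triple DES keys: {2**168:.1e}")  # about 3.7e50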

Another symmetric encryption scheme is IDEA (International Data Encryption Algorithm), which uses a 128-bit key and performs 8 cipher rounds on 64-bit blocks of information. Other symmetric encryption schemes include Blowfish, CAST, RC2, RC4, and others. The most common examples of asymmetric encryption schemes are Diffie-Hellman and Rivest-Shamir-Adleman (RSA) encryption.

See Also Advanced Encryption Standard (AES), encryption

datagram

A term sometimes used as a synonym for packet, but most often meaning a packet that is sent across a network using connectionless services, where the delivery does not depend on the maintenance of specific connections between computers.

Overview

Networking protocol suites such as Transmission Control Protocol/Internet Protocol (TCP/IP) generally support both connection-oriented and connectionless delivery services. In TCP/IP, the Transmission Control Protocol (TCP) is responsible for providing connection-oriented services that guarantee delivery of Internet Protocol (IP) packets. On the other hand, the User Datagram Protocol (UDP) handles connectionless services that provide only "best-effort" delivery of datagrams. For networking services that use connectionless datagrams, higher-layer protocols must ensure delivery. Datagrams are generally small packets sent over the network to perform functions such as announcements.
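
A short Python sketch of connectionless delivery: a datagram is handed to UDP with sendto and no connection is established or maintained. The address and port below are arbitrary examples.

    import socket

    # SOCK_DGRAM selects UDP, the connectionless transport in TCP/IP.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # Each sendto() is an independent datagram; delivery is best-effort only.
    sock.sendto(b"service announcement", ("192.0.2.10", 5000))
    sock.close()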

See Also connectionless protocol, connection-oriented protocol, packet, Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP)

data integrity

The correctness and consistency of data stored in a database.

Overview

Maintaining integrity is essential, because a database is only useful if its contents can be retrieved and manipulated as expected. For example, without data integrity, data could be input into the system and then be inaccessible. Data integrity must be enforced on the database server. The following items are among those that should be verified:

Database systems employ many features to ensure data integrity. For example, Microsoft SQL Server 7 makes use of data types, constraints, rules, defaults, declarative referential integrity (DRI), stored procedures, and triggers. All these play a role in keeping the integrity of the database intact.
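
The sketch below uses Python's built-in sqlite3 module (not SQL Server) simply to show how declarative constraints keep bad data out of a table; the table and column names are made up for the example.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("PRAGMA foreign_keys = ON")

    # NOT NULL, CHECK, and FOREIGN KEY constraints enforce integrity declaratively.
    db.execute("""CREATE TABLE customer (
                      customer_id INTEGER PRIMARY KEY,
                      name        TEXT NOT NULL)""")
    db.execute("""CREATE TABLE invoice (
                      invoice_id  INTEGER PRIMARY KEY,
                      customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
                      amount      REAL CHECK (amount >= 0))""")

    db.execute("INSERT INTO customer VALUES (1, 'Northwind Traders')")
    db.execute("INSERT INTO invoice VALUES (1, 1, 99.95)")        # accepted

    try:
        db.execute("INSERT INTO invoice VALUES (2, 42, -5.00)")   # no such customer, negative amount
    except sqlite3.IntegrityError as err:
        print("rejected:", err)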

See Also database, SQL Server

data isolator

A device that protects serial equipment from voltage surges.

Overview

If two pieces of data terminal equipment (DTE) are connected by a long serial line, voltage differences with respect to ground between the devices can cause surges over the line that can damage the devices. This can be a problem in a mainframe environment when you connect terminals to asynchronous mainframe hosts using long RS-232 cables. The problem is especially troublesome when the cabling has to run outdoors between buildings or when nearby generators or other equipment induce voltages.

Data isolator. Using a data isolator.

The solution to these problems is to insert a data isolator between the mainframe host and the terminal. This isolator provides electrical isolation between the two devices, somewhat like an opto-isolator for fiber-optic cabling. Data isolators typically use transformers to electrically isolate the two connected circuits from voltage surges. Data isolators can support high data transfer speeds, and they come with a variety of interfaces, such as RS-232, RS-422, and transistor-transistor logic (TTL) connections.

See Also data terminal equipment (DTE), serial transmission

data line protector

A device that provides surge protection for network cables carrying data.

Overview

Data line protectors prevent voltage spikes and surges from damaging costly hubs, switches, routers, and other devices. They are essentially surge protectors that are placed inline between stations on the network and concentrating hubs or other devices in the wiring closet. Data line protectors are available from different vendors for virtually every kind of networking connection, including RJ-45 connections for Ethernet networks, RJ-11 connections for telephone lines, and RS-232 connections for serial lines.

You connect a data line protector directly to one of the two connected devices, and then you attach the ground wire to a good ground connection so that there will be a path for voltage surges to flow down. For Ethernet networks using unshielded twisted-pair (UTP) cabling, data line protectors are available with multiple ports that are attached directly to the hub or the switch. Additional 10BaseT surge protectors can also be installed directly on the stations on the network for more protection.

Data line protector. Using a data line protector.

Notes

Most newer hubs, switches, routers, and other networking devices include built-in data line protection circuitry, which eliminates the need for additional data line protectors.

See Also Ethernet

Data Link Control (DLC)

Generally, the services that the data-link layer of the Open Systems Interconnection (OSI) reference model provides to adjacent layers of the OSI protocol stack. Specifically, Data Link Control (DLC) is a specialized network protocol.

Overview

The DLC protocol is used primarily for two purposes: connecting to IBM mainframe and AS/400 host systems (for example, for 3270 terminal emulation), and printing to Hewlett-Packard print devices that are attached directly to the network.

DLC is not used as a network protocol in the usual sense of enabling communication among computers on the network. It is not used by the redirector of the Microsoft Windows 2000 operating system and so cannot be used for session-level communication over a network. DLC is not routable; it is designed only to give devices direct access to the data-link layer. DLC is supported by most Windows operating systems, including Windows 95, Windows 98, Windows Millennium Edition (Me), Windows NT, and Windows 2000. DLC is no longer supported on Windows XP and Windows .NET Server. Windows 95 OSR2 includes both a 16-bit and a 32-bit version of DLC.

Implementation

To use DLC on Windows 2000 to connect to a Hewlett-Packard network print device, perform the following steps:

  1. Connect the printer to the network, and run the self-test routine to obtain the printer's MAC address. Also, think of a friendly name for the printer.

  2. Install the DLC protocol on the Windows NT or Windows 2000 server that will be used as a print server for the network print device. (Use the Network utility or the Windows 2000 Network and Dial-Up Connections utility in Control Panel.)

  3. Run the Add Printer Wizard on the print server, choosing My Computer, Add Port, Hewlett-Packard Network Port, and New Port. Enter the friendly name for the printer, and select its MAC address from the list (or type it if the print device is offline). In Windows 2000, run the Add Printer Wizard, then right-click on the printer in the Printers folder and choose Properties. In the Property sheet for the printer, click the Ports tab, click Add Port, select Hewlett-Packard Network Port, and then click New Port. Enter the friendly name for the printer and select its MAC address from the list (or type it if the print device is offline).

See Also data-link layer, Open Systems Interconnection (OSI) reference model, printing terminology

data-link layer

Layer 2 of the Open Systems Interconnection (OSI) reference model.

Overview

The data-link layer converts frames of data into raw bits for the physical layer and is responsible for framing, flow control, error correction, and retransmission of frames. Media access control (MAC) addresses are used at this layer, and bridges and network interface cards (NICs) operate at this layer.

Data-link layer. Two sub-layers of the data link layer.

The data-link layer establishes and maintains the data link for the network layer above it. It ensures that data is transferred reliably between two stations on the network. It is responsible for packaging data from higher levels into frames, which are basically constructs containing data, a header, and a trailer.

Uses

A variety of network protocols can be implemented at the data-link layer. These differ depending on whether you are establishing local area network (LAN) or wide area network (WAN) connections between stations. Data-link protocols are responsible for functions such as addressing, frame delimiting and sequencing, error detection and recovery, and flow control.

Examples of data-link protocols for local area networking include the following:

For WANs, data-link layer protocols encapsulate LAN traffic into frames suitable for transmission over WAN links. Common data-link encapsulation methods for WAN transmission include the following:

Implementation

For LANs, the Project 802 standards of the Institute of Electrical and Electronics Engineers (IEEE) separate the data-link layer into two sublayers. The reason for doing this is to make it simpler for network equipment vendors to develop drivers for data-link layer services. The two sublayers of the data-link layer are the logical link control (LLC) sublayer, which provides service access points and flow control for the layers above, and the media access control (MAC) sublayer, which handles physical addressing and controls access to the shared network medium.

See Also Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), High-level Data Link Control (HDLC), Open Systems Interconnection (OSI) reference model, Point-to-Point Protocol (PPP)

Data Manipulation Language (DML)

A subset of Structured Query Language (SQL).

Overview

Generally speaking, the term Data Manipulation Language (DML) can apply to any nonprocedural computer language designed specifically for the manipulation of structured data. In common use, DML refers to a subset of SQL commands used specifically for manipulating data in a database. The four SQL commands that comprise DML are SELECT, INSERT, UPDATE, and DELETE, as illustrated in the sketch below.
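
A minimal demonstration of the four DML statements, using Python's built-in sqlite3 module purely as a convenient SQL engine; the table and values are invented for the example.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE parts (part_no INTEGER PRIMARY KEY, price REAL)")     # DDL, not DML

    db.execute("INSERT INTO parts VALUES (1001, 19.95)")                # INSERT adds rows
    db.execute("UPDATE parts SET price = 17.50 WHERE part_no = 1001")   # UPDATE changes rows
    for row in db.execute("SELECT part_no, price FROM parts"):          # SELECT retrieves rows
        print(row)
    db.execute("DELETE FROM parts WHERE part_no = 1001")                # DELETE removes rows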

See Also Structured Query Language (SQL)

Data Over Cable Service Interface Specification (DOCSIS)

A specification defining standards for implementation of cable modem systems.

Overview

Data Over Cable Service Interface Specification (DOCSIS) is a set of standards developed by CableLabs together with a consortium of cable system vendors and operators to promote interoperability between different cable modem systems. The DOCSIS standards focus on the interaction between equipment at cable providers and their customers.

DOCSIS is also a certification issued by CableLabs to cable modem system vendors meeting DOCSIS specifications and requirements. CableLabs is responsible for certifying DOCSIS-compliant cable modems and related equipment.

Architecture

In its original and widely implemented version, DOCSIS 1 provides a specification for transmission of a single stream of data from a Cable Modem Termination System (CMTS) router at the provider, through a distribution system of more routers, to a cable modem at the customer premises. The cable modem data stream defined by DOCSIS is a best-effort shared-media system that uses contention on upstream traffic and has limited support for class of service (CoS) differentiation between different types of traffic. DOCSIS defines the physical and data-link layers for cable modem systems and solves the problem of eavesdropping due to the shared nature of the network by supporting 56-bit Data Encryption Standard (DES) encryption of all cable modem traffic.

Data Over Cable Service Interface Specification (DOCSIS). How DOCSIS works.

DOCSIS 1 specifies the characteristics of upstream and downstream transmission as follows:

A newer version of the specification, DOCSIS 1.1, has a number of enhancements over the original DOCSIS standard, specifically

See Also cable modem

Data Provider

A tool that simplifies data access to different kinds of data sources such as relational databases.

Overview

Also known as the OLE DB Provider for AS/400 and Virtual Storage Access Method (VSAM), Data Provider is included with Microsoft SNA Server version 4. It gives Web applications written with Microsoft Active Server Pages (ASP) technology the ability to access record-level mainframe AS/400 and VSAM file systems.

Using Data Provider, you can write applications that access legacy file data on mainframes and minicomputers running Systems Network Architecture (SNA). You can also directly access AS/400 file structures and VSAM data sets using the IBM DDM protocol native to many IBM host systems without needing to install additional Microsoft software on the host system. You can also integrate unstructured legacy file data on host systems with data stored in a Microsoft Windows 2000 networking environment.

See Also Systems Network Architecture (SNA)

Data Service Unit (DSU)

A digital communication device that works with a Channel Service Unit (CSU) to connect a local area network (LAN) to a telecommunications carrier service.

Overview

Data Service Units (DSUs) provide a modem-like interface between data terminal equipment (DTE) such as a router and the CSU connected to the digital service line. DSUs also serve to electrically isolate the telco's digital telecommunication line from the networking equipment at the customer premises. While the CSU connects to the termination point of the carrier's digital line at the customer premises, the DSU connects to the access device (typically a router) at the border of the customer's LAN.

Data Service Unit (DSU). Using a DSU.

Implementation

As an example, in T1 transmission technologies, the DSU converts network data frames that are received from the router's RS-232, RS-449, or V.35 serial transmission interface into the standard DSX framing format, encoding scheme, and voltages of the T1 line. The DSU also converts the unipolar networking signal into a bipolar signal suitable for transmission over the digital line. The DSU is also responsible for handling signal regeneration and for controlling timing errors for transmission over the T1 line. DSUs usually provide other functions such as line conditioning of the T1 line, as well as remote diagnostic capabilities such as Simple Network Management Protocol (SNMP), which allows the telco central office (CO) to monitor the state of the line at the customer premises.

Notes

DSUs are usually integrated with CSUs to create a single device called a CSU/DSU (Channel Service Unit/Data Service Unit). If these devices are separate, the telco usually supplies and configures the CSU, while the customer supplies the DSU. If the devices are combined, the telco usually supplies, configures, and maintains the CSU/DSU for the customer premises.

The DSUs (or CSU/DSUs) at either end of a digital data transmission line should be from the same manufacturer. If they are not, they might not communicate with each other correctly because different vendors employ different multiplexing and diagnostic technologies that are often incompatible with those of other vendors.

See Also Channel Service Unit (CSU), Channel Service Unit/Data Service Unit (CSU/DSU), T-carrier

data source name (DSN)

A unique name used to create a data connection to a database using open database connectivity (ODBC).

Overview

Data source names (DSNs) are used by applications that need to access or manage data in its associated database. All ODBC connections require that a DSN be configured to support the connection. When a client application wants to access an ODBC-compliant database, it references the database using the DSN.

Data source name (DSN). Configuring a DSN.

You can configure a DSN for an ODBC-compliant database using the Microsoft Windows 2000, Windows XP, or Windows .NET Server Administrative Tools\Data Sources (ODBC) utility in Control Panel. You can create three kinds of DSNs: user DSNs, which are available only to the user who created them; system DSNs, which are available to all users and services on the machine; and file DSNs, which store the connection information in a file that can be shared between users or machines.
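
For illustration, the following Python fragment opens an ODBC connection by referencing a DSN. It assumes the third-party pyodbc package is installed and that a system DSN named SalesDB has already been configured; the DSN, user, and table names are hypothetical.

    import pyodbc  # third-party ODBC bridge for Python (assumption)

    # The application refers to the database only by its DSN; the driver,
    # server, and database details live in the DSN definition.
    conn = pyodbc.connect("DSN=SalesDB;UID=report_user;PWD=secret")
    cursor = conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM orders")
    print(cursor.fetchone()[0])
    conn.close()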

Notes

When you design Web applications that use Microsoft ActiveX Data Objects (ADO) for accessing database information, be sure to use either a file DSN or a system DSN because ADO does not work with user DSNs.

See Also open database connectivity (ODBC)

Data Space Transfer Protocol (DSTP)

A protocol for accessing large amounts of information stored in distributed locations.

Overview

The Data Space Transfer Protocol (DSTP) is a new protocol designed to transport gigabits of information at high speeds around the globe. DSTP finds typical applications in international research projects between different universities where large amounts of data are generated and need to be shared and analyzed. Typical applications include human genome analysis and particle physics.

Implementation

DSTP is derived from Network News Transfer Protocol (NNTP) and uses a similar stream architecture augmented by a command set similar to Simple Mail Transfer Protocol (SMTP). DSTP enables information stored in different formats on servers in different locations to be indexed and retrieved in the form of a database file consisting of rows and columns. Indexing and retrieval is facilitated using a key called a Universal Correlation Key (UCK), which is used to create an Extensible Markup Language (XML) index file by tagging columns in databases.

DSTP is capable of transferring multiple flat-file databases simultaneously at high speeds.

For More Information

Find out more about DSTP at www.ncdm.uic.edu/dstp.

See Also Network News Transfer Protocol (NNTP), Simple Mail Transfer Protocol (SMTP)

data tap

A type of networking device that you can use to monitor the flow of data in serial lines.

Overview

Data taps provide an easy way to connect monitoring equipment such as data scopes to serial interfaces such as RS-232. These serial connections are used for a variety of networking purposes, including connecting data terminal equipment (DTE) such as servers and routers to data communications equipment (DCE), such as modems, and CSU/DSUs (Channel Service Unit/Data Service Units) for implementing wide area networks (WANs); connecting dumb terminals to asynchronous mainframe hosts; and connecting servers to plotters and other serial devices. Data taps generally display network traffic in binary, hexadecimal, or character format and are used for troubleshooting various kinds of network connections.

Data tap. Using a data tap to troubleshoot a serial connection.

A data tap is essentially a three-way connector in which the third connector interfaces with the test equipment. For RS-232 serial lines, data taps come in a variety of configurations, with mixtures of male and female DB-9 and DB-25 connectors.

See Also serial transmission

data terminal equipment (DTE)

Any device that is a source of data transmission over a serial telecommunications link.

Overview

The term data terminal equipment (DTE) specifically refers to a device that uses serial transmission such as the transmissions involving the serial port of a computer. Most serial interface devices contain a chip called a universal asynchronous receiver-transmitter (UART) that can translate the synchronous parallel data transmission that occurs within the computer's system bus into an asynchronous serial transmission for communication through the serial port. The UART also performs other functions in a DTE, including the following:

Typically, data terminal equipment (DTE) can be a computer, a terminal, a router, an access server, or some similar device. The earliest form of DTE was the teletype machine.

Implementation

To connect a DTE to a telecommunications link, you use data communications equipment (DCE). The DCE provides termination for the telecommunications link and an interface for connecting the DTE to the link. In other words, the DCE connects to the carrier's phone line and the DTE connects to the customer's network or system. The DTE and DCE are then joined together using a serial cable. Typical serial interfaces for DCE-to-DTE connections include RS-232, RS-449, RS-530, X.21, V.35, and HSSI.

An example of a DCE would be an analog modem, which can be used for connecting a DTE such as the serial port on a computer or router to the local loop connection of the Plain Old Telephone Service (POTS).

Data terminal equipment (DTE). Example of DTE.

While the usual configuration is to connect DTE with DCE, there are situations where DTE may need to be connected to DTE, for example when a computer needs to be connected to a router using a serial interface. In this situation a null modem cable is required.

See Also data communications equipment (DCE), serial transmission

DAWS

Stands for Digital Advanced Wireless System, a proposed standard for a multimegabit packet-switching radio network from the European Telecommunications Standards Institute (ETSI).

See Also Digital Advanced Wireless System (DAWS)

DB connector

A common family of connectors used for connecting data terminal equipment (DTE).

Overview

The letters DB stand for data bus and are followed by a number that indicates the number of lines or pins in the connector. DB connectors were formerly called D-series connectors. DB connectors can be used for either serial or parallel connections between devices.

Common members of the DB family include the following:

See Also connector (device)

DBO

Stands for database owner, the user account in Microsoft SQL Server that created the database and is responsible for managing administrative tasks related to the database. The DBO also owns any database object, including tables, indexes, views, functions, or stored procedures.

See Also database owner (DBO)

DCE

Stands for data communications equipment, which refers to any device that supports data transmission over a serial telecommunications link.

See Also data communications equipment (DCE)

D channel

One of the two types of channels used in Integrated Services Digital Network (ISDN).

Overview

An ISDN D channel is a circuit-switched channel that carries signaling information between the customer premises termination and the central office (CO) of the telecommunications service provider, or telco. The letter D here stands for data or delta. The D channel is used to signal the telco CO when connections need to be created or terminated. The D channel is thus a control channel for ISDN call setup and teardown.

D channel. How the D channel works.

The D channel forms the "D" part of a 2B+D Basic Rate Interface ISDN (BRI-ISDN) line and carries signaling information at a rate of 16 kilobits per second (Kbps). On a 23B+D Primary Rate Interface ISDN (PRI-ISDN) line, the D channel carries signaling information at the faster rate of 64 Kbps.

Implementation

D channel communication uses a completely separate out-of-band communication network called the Signaling System 7 (SS7) network, as shown in the illustration. This telco network is dedicated solely to servicing system functions that are overhead as far as voice or data communication is concerned. The SS7 network on which D channel communication takes place makes possible the low latency of dial-up ISDN connections, which are typically 1 or 2 seconds (compared to a latency of 15 to 30 seconds for analog phone connections).

The data-link layer of the D channel is defined by the Q.921 standard and uses LAPD (Link Access Protocol, D-channel) for full-duplex, synchronous, serial communications. The physical layer of the D channel is no different from that of the B channel.

Notes

In ISDN voice communication, D channels are also used to activate special calling features such as line call forwarding and caller ID.

See Also B channel, Integrated Services Digital Network (ISDN), Link Access Protocol, D-channel (LAPD), out-of-band (OOB) signaling

DCOM

Stands for Distributed Component Object Model, a Microsoft programming technology for developing distributed applications.

See Also Distributed Component Object Model (DCOM)

DCOM Configuration Tool

A Microsoft Windows NT, Windows 2000, Windows XP, Windows .NET Server, and Windows 98 utility used to configure 32-bit Windows applications for Distributed Component Object Model (DCOM) communication between components of distributed applications on a network.

Overview

You can use the DCOM Configuration Tool to configure DCOM applications to run across computers on a network. Computers can be configured to operate as DCOM clients (making calls to DCOM servers), DCOM servers, or both. Using this tool, you can configure the locations of components of distributed applications and the security settings for those components.

Implementation

To start the tool, choose Run from the Start menu, and then type dcomcnfg. To use the tool to configure a distributed application, you must specify the security and location properties of both the calling client application and the responding server application. For the client application, you specify the location of the server application that will be called by the client. For the server application, you select a user account that will have permission to start the application and the user accounts that will run it.

DCOM Configuration Tool. Using the DCOM Configuration Tool.

Notes

Before you can use the DCOM Configuration Tool on Windows 98, you must be sure that user-level security is being used.

See Also Distributed Component Object Model (DCOM)

DDNS

Stands for Dynamic DNS, a new feature of the Domain Name System (DNS) that enables DNS clients to automatically register their DNS names with name servers.

See Also dynamic DNS (DDNS)

DDoS

Stands for Distributed Denial of Service, a form of Denial of Service (DoS) attack that employs a large number of intermediate hosts to multiply the effect of the attack.

See Also Distributed Denial of Service (DDoS)

DDR (Demand-Dial Routing)

Stands for Demand-Dial Routing, a method of forwarding packets on request across a Point-to-Point Protocol (PPP) wide area network (WAN) link.

See Also Demand-Dial Routing (DDR)

DDR (Dial-on-Demand Routing)

Stands for Dial-on-Demand Routing, a method for connecting two remote networks together using Integrated Services Digital Network (ISDN) and dial-up Public Switched Telephone Network (PSTN) connections.

See Also Dial-on-Demand Routing (DDR)

DDS (digital data service)

Stands for digital data service, a family of leased line data communication technologies that provides a dedicated synchronous transmission connection at speeds of 56 kilobits per second (Kbps).

See Also digital data service (DDS)

DDS (Digital Data Storage)

Stands for Digital Data Storage, a tape backup technology that evolved from Digital Audio Tape (DAT) technologies.

See Also Digital Data Storage (DDS)

dead spot

In wireless networking, a location within the coverage area where a signal is not received.

Overview

Dead spots are typically caused by physical barriers (such as buildings or concrete structures) that absorb or reflect radio or microwave frequencies. The receiving station must relocate or the barrier must be moved if the station is to receive a signal. Dead spots can also be caused by high levels of electromagnetic interference (EMI) from heavy machinery (such as motors and generators) or broad-spectrum sources of radiation (such as microwave ovens). In these cases, too, the solution is to relocate the receiver or eliminate the source of interference.

See Also wireless networking

decibel

A mathematical way of representing power ratios.

Overview

A decibel (dB) is the ratio of two values that measure signal strength, such as voltage, current, or power. This ratio is expressed using base 10 logarithms. In mathematical terms, this means that the decibel is defined as follows, where P1 and P2 are the power (signal strength) measurements:

dB = 10 log10 (P2/P1)

Uses

In computer networking and telecommunications, decibels are the units used for measuring signal loss within a circuit. Decibels are also used in network cabling systems for measuring signal losses. In addition, quantities such as attenuation for fiber-optic cabling and near-end crosstalk (NEXT) for twisted-pair cabling are expressed in decibels. In this scenario, P1 is the strength of the signal when it enters the cabling system, and P2 is its strength at some later point, after it has traversed segments of cable, repeaters, connectors, and other cabling system components. The following table shows signal strength ratios expressed both as ratios and as decibels for conversion purposes; a short calculation after the table shows how the values are derived.

Signal Strength Ratios

Signal Strength Ratio (P1:P2)          Decibels (dB)
1:1 (no signal loss)                   0 dB
2:1 (50 percent signal loss)           -3 dB
4:1 (75 percent signal loss)           -6 dB
10:1 (90 percent signal loss)          -10 dB
100:1 (99 percent signal loss)         -20 dB
1000:1 (99.9 percent signal loss)      -30 dB
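
The conversion in the table can be reproduced with a few lines of Python; the ratios are the ones listed above.

    import math

    # dB = 10 * log10(P2/P1), where P1 is the input power and P2 the output power.
    for p1, p2 in [(1, 1), (2, 1), (4, 1), (10, 1), (100, 1), (1000, 1)]:
        print(f"{p1}:{p2}  ->  {10 * math.log10(p2 / p1):.0f} dB")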

Notes

The Category 5 (Cat5) cabling version of unshielded twisted-pair (UTP) cabling has an attenuation rating of 30 dB/1000 feet. This means that after traveling 1000 feet (304 meters) along a UTP cable, the electrical strength of the signal typically diminishes by 99.9 percent and is only 0.1 percent of its original strength at the far end of the cable.

See Also cabling, fiber-optic cabling, unshielded twisted-pair (UTP) cabling

DECnet

A protocol suite developed by Digital Equipment Corporation (DEC).

Overview

DECnet was originally designed in 1975 to allow PDP-11 minicomputers to communicate with each other. DECnet conforms to the Digital Network Architecture (DNA) developed by DEC, which maps to the seven-layer Open Systems Interconnection (OSI) reference model for networking protocols.

DECnet. Overview of DECnet.

DECnet is essentially a peer-to-peer networking protocol for all DEC networking environments. DECnet supports various media and link-layer technologies, including Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI). The current release of DECnet is called Phase V, but DECnet is essentially a legacy protocol that is rarely used nowadays.

dedicated line

Any telecommunications line that is continuously available for the subscriber with little or no latency.

Overview

Dedicated lines are also referred to as leased lines, since businesses lease these lines from telcos so that they can have continuous, uninterrupted communication with branch offices and with the Internet. The opposite of a dedicated line is a dial-up line, which costs less because it is used intermittently and requires fewer telco resources. However, dial-up lines suffer from the delaying effects of latency as well as less available bandwidth. Dial-up lines are generally local loop Plain Old Telephone Service (POTS) connections that use modems and provide backup services for more expensive leased lines. Dedicated lines, on the other hand, use specially conditioned phone lines and are allocated to the subscriber's private domain with dedicated switching circuits. By contrast, circuits for dial-up lines are shared with all other subscribers in the Public Switched Telephone Network (PSTN) domain.

Dedicated lines can be either point-to-point or multipoint communication paths. They are generally synchronous digital communication lines, and are terminated with one of the following serial interfaces: RS-232, RS-449, RS-530, X.21, V.35, or HSSI.

Advantages and Disadvantages

The main advantages of dedicated lines are the following:

The main disadvantage of dedicated lines is that they cost more than dial-up lines.

See Also dial-up line, leased line, T-carrier

Dedicated Token Ring (DTR)

A high-speed Token Ring networking technology.

Overview

Dedicated Token Ring (DTR) is an extension of 802.5 Token Ring technologies developed as an evolutionary upgrade to higher speeds for Token Ring users. It defines a set of signaling protocols and topologies that are backward-compatible with standard Token Ring networking.

DTR uses the same 802.5 frame format as standard Token Ring and supports full-duplex communications using the Transmit Immediate (TXI) protocol to allow simultaneous transmission and reception of frames by stations.

DTR uses special concentrators to create high-speed ring topology networks. A traditional Token Ring Multistation Access Unit (MAU) can be joined to a DTR concentrator to enable 802.5 stations to communicate over the high-speed ring.

Prospects

Although DTR began development in 1995, it has fallen into eclipse in recent years along with standard Token Ring technologies due to the continued evolution of Ethernet into its Fast Ethernet and Gigabit Ethernet (GbE) varieties.

See Also 802.5, Multistation Access Unit (MAU or MSAU), Token Ring

default gateway

An address in a routing table to which packets are forwarded when there is no specific route for forwarding them to their destination.

Overview

In an internetwork, a given subnet might have several router interfaces that connect it to other, remote subnets. One of these router interfaces is usually selected as the default gateway of the local subnet. When a host on the network wants to send a packet to a destination subnet, it consults its internal routing table to determine whether it knows which router to forward the packet to in order to have it reach the destination subnet. If the routing table does not contain any routing information about the destination subnet, the packet is forwarded to the default gateway (one of the routers with an interface on the local subnet). The host assumes that the default gateway knows what to do with any packets that the host itself does not know how to forward.
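
The decision logic can be sketched in a few lines of Python; the routing-table entries and addresses are invented for illustration.

    import ipaddress

    # A toy routing table: known destination networks mapped to next-hop routers,
    # plus a default gateway used when no specific route matches.
    routes = {
        ipaddress.ip_network("10.1.2.0/24"): "10.0.0.2",
        ipaddress.ip_network("10.1.3.0/24"): "10.0.0.3",
    }
    default_gateway = "10.0.0.1"

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        for network, router in routes.items():
            if addr in network:
                return router          # a specific route exists for this subnet
        return default_gateway         # otherwise fall back to the default gateway

    print(next_hop("10.1.2.55"))   # forwarded to 10.0.0.2
    print(next_hop("192.0.2.9"))   # no specific route, so the default gateway 10.0.0.1 is used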

Default gateway. How a default gateway works.

When configuring a client machine on a TCP/IP internetwork, the client must know the Internet Protocol (IP) address of the default gateway for its network. On Microsoft Windows NT, Windows 95, and Windows 98 clients, you configure this information on the TCP/IP property sheet for the client. The property to configure is known as the Default Gateway Address. In Windows 2000, Windows XP, and Windows .NET Server, you can have a default gateway assigned automatically using Dynamic Host Configuration Protocol (DHCP).

See Also IP address, routing table

Defense Messaging System (DMS)

A global messaging system for the U.S. Department of Defense (DoD).

Overview

The Defense Messaging System (DMS) is a program established by the U.S. Undersecretary of Defense (Acquisition) to develop an integrated, global messaging system for transferring classified and unclassified data. DMS will replace the existing Automatic Digital Network (AUTODIN) system currently in use by the U.S. Department of Defense.

Microsoft Exchange DMS, a version of Microsoft Exchange Server, complies with the DMS specification. It is suited for government agencies that are required to use DMS-compliant products and for companies that do defense business with the U.S. government. Exchange DMS technology can be purchased only through Lockheed Martin Federal Systems.

See Also Exchange Server

defragmentation

The process of reorganizing information written on disk drives to make reading the information more efficient.

Overview

When files on disk drives are frequently written, copied, moved, and deleted, the information stored on the drive tends to become fragmented over time. Instead of storing a file across several contiguous (successive) sectors of the drive, files tend to become split up into discontiguous (disconnected) portions scattered all over the drive. Then when the file needs to be accessed again, the drive heads need to skip around a lot to find and read the successive portions of the file, which is a slower process than if the file was located in only one continuous section of the drive. Fragmented drives thus perform more poorly for disk reads (and writes) than drives that are not significantly fragmented. And the more often files are modified on a drive, the more fragmented it tends to become and the more poorly it performs (you can hear the sound of a fragmented drive "thrashing" as it reads files while loading programs and data into memory).

To improve performance of a fragmented drive, the drive should be defragmented. A defragmentation tool is an application that reads portions of fragmented files, copies them to memory, erases them from the disk, and then copies them back to the disk in successive sectors or clusters for better performance. Defragmentation can be performed on files, free space, or both for best performance, and should be done regularly on all computers and particularly those (such as file servers) that experience a lot of disk reads and writes.

Marketplace

Studies by industry analysts have estimated that as much as $50 billion per year is being lost by businesses simply by failing to defragment computers regularly on their networks. While best gains are achieved by defragmenting servers regularly (performance gains of 10 percent to 20 percent are typical), workstations also perform better when periodically defragmented. In spite of these estimates, the present penetration of network defragmentation software in enterprise environments is currently less than 15 percent, so much work remains to be done alerting IT (information technology) administrators to the problem.

While Windows 2000 and other operating systems have their own built-in defragmentation tools, several third-party vendors offer defragmentation tools that are more powerful and manageable and allow an administrator to centrally configure and schedule defragmentation of computers across a network. Some of the more popular tools include Diskeeper from Executive Software (Windows 2000 includes a "light" version of this tool that lacks scheduling capability), PerfectDisk from Raxco Software, and Norton Speed Disk from Symantec Corporation.

delegation

A feature of Microsoft Windows 2000 and the Windows .NET Server family that simplifies the administration of Active Directory directory service.

Overview

Delegation is a process for simplifying the assignment of permissions and rights to an object, container, or subtree of containers or organizational units (OUs) within Active Directory. These permissions and rights can be assigned for the following purposes:

Using delegation, the network administrator can distribute the job of managing an Active Directory enterprise-level implementation among a group of individuals, each with the appropriate permissions and rights to manage her or his portion of the directory. For example, users can be granted permissions and rights on the Users container so that they can create new users or modify the attributes of existing ones. In this fashion, the network administrator can be relieved of the tiresome duty of creating and configuring new user accounts by delegating the job to a junior administrator. Delegation is designed to relieve the network administrator of the burden of managing the entire Active Directory and is an important security management feature in Windows 2000 and Windows .NET Server.

You can perform delegation using the Delegation of Control Wizard, which is part of the Active Directory Users and Computers administrative tool, and you can use it to delegate administration of portions of Active Directory to other administrators and users.

Delegation is part of the security framework of Active Directory. Along with other features such as the discretionary access control list (DACL), inheritance, and trust relationships, it enables Active Directory to be administered securely, protected from unauthorized access.

Notes

Always delegate administrative control at the level of OUs, not at the level of individual objects. This allows you to better manage access to Active Directory because OUs are used to organize objects in the directory. One good idea is to delegate authority to those who are responsible for creating users, groups, computers, and other objects that commonly change in an enterprise.

Always assign permissions to groups instead of to individual users. Groups can be nested within one another and, together with inheritance of permissions, they provide a powerful tool for organizing the administration of Active Directory.

See Also Active Directory, permissions

Delegation of Control Wizard

A wizard that you can run using the Active Directory Users and Computers administrative tool for networks in Microsoft Windows 2000 and the Windows .NET Server family.

Overview

The Delegation of Control Wizard facilitates delegating control of different portions of Active Directory directory service to other administrators and users. The wizard simplifies the process by allowing the administrator to assign permissions at the level of organizational units (OUs). Assigning permissions to OUs rather than to particular directory objects ultimately simplifies the Active Directory administrator's work.

To start the wizard, open the Active Directory Users and Computers tool, select the OU for which you want to delegate control, and on the Action menu, choose Delegate Control. Specify the users or groups to whom you want to delegate control, the subset of object types in the OU for which this should take place, and the kinds of permissions you want to assign.

See Also Active Directory, permissions

Demand-Dial Routing (DDR)

A method for forwarding packets on request across a Point-to-Point Protocol (PPP) wide area network (WAN) link.

Overview

Demand-Dial Routing (DDR) is a technology in Microsoft Windows 2000 and the Windows .NET Server family that uses PPP links to create on-demand connections for transferring packets to remote networks. DDR works with a variety of dial-up technologies, including Public Switched Telephone Network (PSTN), Integrated Services Digital Network (ISDN), and X.25.

DDR is used to connect networks and relies on the Routing and Remote Access Service (RRAS) of Windows 2000 and Windows .NET Server. DDR links are represented by RRAS as demand-dial interfaces and can be either persistent or on-demand and either one-way or two-way initiated. DDR is different from remote access: in remote access a single user connects to a remote network, whereas in DDR two networks connect. Like remote access connections, however, DDR connections can use the same security and encryption mechanisms, be implemented using remote access policies, support Remote Authentication Dial-In User Service (RADIUS) authentication, and use advanced PPP features such as Multilink PPP (MPPP), Microsoft Point-to-Point Compression (MPPC), and Bandwidth Allocation Protocol (BAP).

Notes

Demand-Dial Routing (DDR) is not the same as Dial-on-Demand Routing (DDR), a Cisco router technology for on-demand connectivity between Cisco routers.

See Also Point-to-Point Protocol (PPP)

demand priority

The media access control method used by 100VG-AnyLAN networks.

Overview

100VG-AnyLAN is a high-speed networking architecture developed by Hewlett-Packard and based on the Institute of Electrical and Electronics Engineers (IEEE) standard 802.12. Demand priority is the method by which stations on a 100VG-AnyLAN network gain access to the wire for transmitting data.

Implementation

A 100VG-AnyLAN network based on the demand priority access method consists of end nodes (stations), repeaters (hubs), switches, routers, bridges, and other networking devices. A typical 100VG-AnyLAN network consists of a number of stations plugged into a cascading star topology of repeaters (hubs). Because of timing, a maximum of five levels of cascading of the physical wiring is permitted. Hubs are connected using uplink ports. Each hub is aware only of the stations directly connected to it and any hubs that are uplinked from it.

Demand priority. How the demand priority media access method works.

The key feature of the demand priority access method, as shown in the illustration, is that the 100VG-AnyLAN hubs control which computers are allowed to transmit signals on the network at any given moment. Hubs can be thought of as servers and end nodes as computers (clients). With demand priority, a client (a computer with a 100VG-AnyLAN network interface card installed in it) must first request access to the network media (cabling) before transmitting data. The server (hub) processes this request and decides whether to allow the client access to the media. If the hub decides to grant the client access to the wire, it sends the client a signal informing it of this decision. The client then takes over control of the media and transmits its data.

Demand priority is considered a contention method, but it operates differently from the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) access method used in Ethernet networks. Cables in a 100VG-AnyLAN network are capable of transmitting and receiving data at the same time using all four pairs of twisted-pair cabling in a quartet signaling method. Each pair of wires in a twisted-pair cable transmits and receives data at 25 megahertz (MHz), for a total bandwidth of 100 MHz. All contention on the network occurs at the hub. If two computers attempt to transmit signals at the same time, the hubs can either choose between the two signals based on priority or alternate between them if the priorities are equal. The hubs can do this because demand priority provides mechanisms for prioritizing transmission of different data types. Computers in demand priority networks can simultaneously transmit and receive data, and they do not need to listen to the network because the hubs control access to the wire.
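The hub's arbitration can be pictured with a minimal Python sketch (for illustration only; the station names and priorities are hypothetical): pending requests are served highest priority first, and equal priorities simply alternate in arrival order.

    from collections import deque

    def arbitrate(requests):
        """Grant transmit permission one station at a time, as a demand priority hub does.

        requests: list of (station, priority) pairs pending at the hub, in arrival order.
        Higher-priority requests are served first; equal priorities alternate.
        """
        queue = deque(sorted(requests, key=lambda r: -r[1]))  # stable sort keeps arrival order
        while queue:
            station, _priority = queue.popleft()
            yield station  # the hub signals this station that it may transmit its frame

    for station in arbitrate([("A", 0), ("B", 1), ("C", 0)]):
        print("grant:", station)  # B first (higher priority), then A and C in turn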

See Also 100VG-AnyLAN ,Carrier Sense Multiple Access with Collision Detection (CSMA/CD) ,media access control method

demarc

The point in a carrier's wide area network (WAN) service at which the customer's responsibility for line management ends and the carrier's responsibility begins.

Overview

Demarc is short for demarcation point and indicates the point at which the carrier assumes responsibility for troubleshooting the WAN connection. For example, if a carrier provides the customer with a Channel Service Unit/Data Service Unit (CSU/DSU) for a leased line, the CSU/DSU is included within the carrier's responsibility and the demarc point is the serial interface on the CSU/DSU to which the customer's router is connected. If the demarc point instead is the RJ-48 connector that terminates the leased line at the customer premises, then the customer is responsible for managing and troubleshooting (and possibly also providing) the CSU/DSU that connects to this connector.

See Also telecommunications services

demilitarized zone (DMZ)

Also called a perimeter network, a security network at the boundary between a corporate local area network (LAN) and the Internet.

See Also perimeter network

DEN

Stands for Directory Enabled Network, an initiative toward a platform-independent specification for storing information about network applications, devices, and users in a directory.

See Also Directory Enabled Network (DEN)

denial of service (DoS)

Any attack that tries to prevent legitimate users from accessing a system.

Overview

Denial of service (DoS) refers to a broad family of different methods that hackers use to try to prevent legitimate users from accessing Web servers, mail servers, networks, and other systems. DoS attacks exploit weaknesses inherent in the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, bugs in operating systems and applications, and holes in firewalls and other security devices. Over the last half dozen years, holes have been discovered in Apache, Berkeley Internet Name Domain (BIND), Sendmail, Internet Information Services (IIS), Common Gateway Interface (CGI), Simple Network Management Protocol (SNMP), and other systems that make them vulnerable to DoS attacks. Vendors of these systems have issued patches to guard against such attacks, but new vulnerabilities are discovered regularly.

Generally speaking, DoS attacks occur when a malicious user consumes so many resources on a remote network or system that none are left for legitimate users who need them. The resources attacked might include processors, disk space, memory, network connections, modems, and telephone lines. In a way, a DoS attack is like driving an extra-wide truck down a freeway: the truck blocks legitimate traffic from getting through to its destination.

Another general goal of DoS attacks is to disable critical services on a machine. Once these services are disabled, requests from legitimate users cannot be serviced until the machine is rebooted. For example, if an attacker can send malformed packets to a Web server to shut down its Hypertext Transfer Protocol (HTTP) service, the attacker has succeeded in denying access to the server by its real clients.

History

DoS attacks are not new in theory: a similar technique, jamming, was used during World War II to render enemy radar systems unusable by overwhelming them with useless information. The developers of TCP/IP envisioned the possibility of DoS attacks early on as an inevitable result of the open nature of TCP/IP, but it was not until 1996 that DoS attacks caught widespread public attention, when an Internet service provider (ISP) called PANIX in New York experienced a sustained DoS attack on its servers that denied Internet services to legitimate users for more than a week. This attack demonstrated the Internet's fragility and led many vendors to patch weaknesses in their products that could be exploited by DoS attacks.

DoS again made the front page in 1999 when a number of U.S. government Web sites (including the FBI's Web site) were attacked by disgruntled hackers as retaliation for an FBI crackdown on some of their members.

More recently, a new and deadlier type of attack called Distributed Denial of Service (DDoS) made headlines in February 2000 when a young Canadian nicknamed Mafiaboy allegedly denied service for several hours to a number of major commercial Web sites, including Yahoo!, eBay, Amazon.com, Buy.com, E-Trade.com, ZDNet, and CNN.com. Also in 2000, a coordinated DDoS attack was launched against the Internet Relay Chat (IRC) system, crippling the system for several weeks.

The Internet Engineering Task Force (IETF) is currently working on a new Internet protocol called ICMP Traceback Messages that, if implemented widely, would allow networks experiencing DoS attacks to better determine the source of the attacks. Until then, network managers and system administrators need to be vigilant in applying the latest patches to systems and monitoring their networks for evidence of intrusion and DoS attacks.

Types

There are a number of different types of DoS attacks. Some of these are described in some detail below, but others are mentioned only briefly.

Notes

If users try to connect to an IIS Web server and receive error messages such as "The connection has been reset by the remote host," a SYN attack might be under way on your machine. (When the maximum number of TCP ports are in use [open or half-open] on a machine, the machine usually responds to any further connection attempts with a reset.)

To determine whether such an attack is in progress, type netstat -n -p tcp at the command prompt to see whether there are a large number of ports in the half-open SYN_RECEIVED state. If so, try using a network protocol analyzer such as Network Monitor to further examine the situation. You might need to contact your ISP to investigate the problem more closely.
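The same check can be scripted on a Windows machine; the following minimal Python sketch is simply one possible way to automate it (it is not part of Windows, and the alert threshold is arbitrary):

    import subprocess

    # Run the same command described above and count half-open connections.
    output = subprocess.run(["netstat", "-n", "-p", "tcp"],
                            capture_output=True, text=True).stdout
    half_open = [line for line in output.splitlines() if "SYN_RECEIVED" in line]
    print(f"{len(half_open)} connections in SYN_RECEIVED state")
    if len(half_open) > 100:   # arbitrary threshold; tune it for your server
        print("Possible SYN attack in progress")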

If your server is under a heavy SYN attack, one fix you can try on Windows NT platforms running Microsoft IIS is to decrease the default timeout for terminating half-open TCP connections. Open the TcpMaxConnectResponseRetransmissions parameter in the registry and set it to 3, 2, or even 1 to reduce the timeout to 45, 21, or 9 seconds, respectively. However, if you set this parameter too low, legitimate connections might experience timeouts. Windows 2000 and Windows NT 4 Service Pack 3 have corrected this problem. A fix is available for Windows NT version 3.51 from Microsoft Corporation.
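For illustration only, the registry change could be scripted as follows. This sketch assumes the parameter lives under the standard Tcpip\Parameters key; verify the path for your platform and back up the registry before changing anything:

    import winreg

    path = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path, 0, winreg.KEY_SET_VALUE) as key:
        # 2 retransmissions corresponds to roughly a 21-second timeout for half-open connections.
        winreg.SetValueEx(key, "TcpMaxConnectResponseRetransmissions", 0,
                          winreg.REG_DWORD, 2)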

If your Cisco router is experiencing a SYN attack, use the TCP Intercept feature to validate incoming TCP connection requests to counter the flood.

For More Information

Visit these Web sites for useful information about DoS issues: CERT Coordination Center at www.cert.org, SANS Institute at www.sans.org, International Computer Security Association at www.icsa.net, FBI National Infrastructure Protection Center at www.nipc.gov, and Forum of Incident Response and Security Teams at www.first.org

See Also Distributed Denial of Service (DDoS) ,hacking ,security ,spoofing

dense mode

One of two forms of the spanning tree algorithm used in multicasting.

Overview

While sparse mode routing is designed to be efficient in routing multicast packets to clusters of hosts across a network, dense mode is intended for large-scale multicasting where hosts are spread out across every corner of the network. Dense mode thus assumes that hosts are densely concentrated in large subnets. An example of a situation where dense mode multicasting is required would be a large-scale webcast of a corporate presentation or sports event.

Implementation

Dense mode multicasting creates multiple routing trees, one for each multicast group. Dense mode multicasting floods the network with multicast packets and so assumes that a large amount of bandwidth is available for transmission. Packets are multicast to every area of the network, and then the unneeded branches of the routing tree are pruned for more efficiency.
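The flood-and-prune idea can be sketched in a few lines of Python (the tree and membership data are made up purely for illustration): every branch is flooded at first, and branches that never reach a group member are then pruned from the distribution tree.

    # Children of each router in the distribution tree, plus the routers
    # that have group members attached (all hypothetical).
    tree = {"core": ["r1", "r2"], "r1": ["r3"], "r2": [], "r3": []}
    has_members = {"r3"}

    def prune(router):
        """Keep only branches that lead to at least one group member."""
        tree[router] = [child for child in tree.get(router, []) if prune(child)]
        return bool(tree[router]) or router in has_members

    prune("core")
    print(tree)   # r2 has no members downstream, so the core stops forwarding to it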

Dense mode multicasting can employ several different routing protocols to handle the flow:

See Also Distance Vector Multicast Routing Protocol (DVMRP) ,multicasting ,Multicast Open Shortest Path First (MOSPF) ,Protocol Independent Multicast-Dense Mode (PIM-DM) ,routing protocol ,spanning tree algorithm (STA) ,sparse mode

dense wavelength division multiplexing (DWDM)

A multiplexing technology for achieving extremely high data rates over fiber-optic cabling.

Overview

Sometimes known simply as wavelength division multiplexing (WDM), dense wavelength division multiplexing (DWDM) modulates multiple data channels onto optical signals of different wavelengths and then multiplexes these signals into a single stream of light sent over a fiber-optic cable. Each optical signal has its own wavelength, so dozens or even hundreds of separate data streams (160 or more in current systems) can be transmitted simultaneously over a single fiber. In addition, each data stream can employ its own transmission format or protocol. This means that, using DWDM, you can combine Synchronous Optical Network (SONET), Asynchronous Transfer Mode (ATM), Transmission Control Protocol/Internet Protocol (TCP/IP), and other transmissions and send them simultaneously over a single fiber. At the other end, a demultiplexer separates the signals and distributes them to their various data channels.

Dense wavelength division multiplexing (DWDM). How DWDM is implemented.

Marketplace

Many networking vendors now offer switching equipment that supports DWDM. Big players in this area include Lucent Technologies and Nortel Networks, with numerous smaller players being attracted to the market.

AT&T provides an example of a DWDM deployment: it has implemented DWDM switching equipment throughout much of its long-haul backbone network to carry up to 80 different channels per strand of fiber and plans to upgrade this soon to 160 and eventually 320 channels per fiber. This managed DWDM service from AT&T is called Ultravailable Broadband Network, and it can provide 2.4 gigabits per second (Gbps) of bandwidth for metropolitan-area connections where it is available. AT&T targets mainly large enterprises that enter into long-term contracts for these services.

Prospects

DWDM is rapidly replacing time-division multiplexing (TDM) as the standard transmission method for long-haul high-speed fiber-optic carrier links. The main deployment issue for many carriers is cost: devices that support DWDM are more expensive because the laser light sources for generating signals over fiber must be extremely stable. The benefits are so great, however, that many carriers are moving toward implementing it on their backbone networks, particularly long-distance carriers (inter-exchange carriers, or IXCs) who see the greatest benefit/cost ratio in deploying it. DWDM is less likely to be used by competitive telcos (competitive local exchange carriers, or CLECs) or Baby Bells (regional Bell operating companies, or RBOCs) for their access networks because of the cost of upgrading equipment to support DWDM compared to simply laying additional fiber to meet their needs.

A newer all-optical switching technology called lambda switching has recently evolved from DWDM and promises significant advantages over traditional DWDM. As a result of these developments, some carriers are holding back on further DWDM deployments while they wait for the new technology to mature.

See Also fiber-optic cabling ,lambda switching ,time-division multiplexing (TDM)

DES

Stands for Data Encryption Standard, the former U.S. government standard for encryption. It has now been replaced by the Advanced Encryption Standard (AES).

See Also Data Encryption Standard (DES)

desktop

The graphical user interface (GUI) for Microsoft Windows 95, Windows 98, Windows NT version 4, Windows 2000, Windows XP, and Windows .NET Server operating system platforms.

Overview

The desktop is the user's on-screen work area; its various icons and menus are arranged as if on top of a physical desk. Users can place items on the desktop, drag them around, move them into folders, and start and stop tasks using simple mouse actions such as clicking, double-clicking, dragging, and right-clicking.

When the Active Desktop feature of many of these platforms is selected, Web browser functions also appear on the desktop. Users can browse local and network file system objects along with content on the Internet using a familiar Web browser paradigm. Active Web content can be placed directly on the desktop and updated automatically.

See Also Active Desktop

Desktop Management Interface (DMI)

A standard for managing desktop systems developed by the Desktop Management Task Force (DMTF), now the Distributed Management Task Force.

Overview

Desktop Management Interface (DMI) was designed to allow information to be automatically collected from system components such as network interface cards (NICs), hard disks, video cards, operating systems, and applications that comply with the DMI standard. DMI was designed to be operating system-independent and protocol-independent and was designed for use on local systems that do not have a network installed.

Implementation

DMI by itself does not specify a protocol for managing systems over the network. Instead, DMI must use an existing network management protocol such as Simple Network Management Protocol (SNMP) to send and receive information over the network. DMI is in fact similar in design to SNMP. Each component to be managed must have a Management Information Format (MIF) file that specifies the location of the component, name of vendor and model, firmware revision number, interrupt request line (IRQ), input/output (I/O) port address, and so on. MIF files are formatted as structured ASCII flat-file databases; the Desktop Management Task Force has defined several standard MIFs, including the Desktop System MIF file, the Adapter Card MIF file, and the Printer MIF file.

DMI service layer software running on the desktop collects information from DMI-enabled components and stores this information in the appropriate MIF file. The service layer thus acts as an intermediary between the DMI-enabled components and the DMI management application, and it coordinates shared access to the various MIFs installed on the desktop system. DMI management applications can then query the service layer on the desktop to obtain the various system components and applications from these MIF files. The service layer allows the management layer to interact with the MIFs by using commands such as

One advantage of DMI over SNMP is that DMI management applications can access MIF files even when they have no prior information about them.

DMI management applications include Intel Corporation's LANDesk and Microsoft Systems Management Server (SMS). SMS 1.2 uses standard DMI 4.5 MIF files to expose inventory data for systems it manages and then stores this information in a Microsoft SQL Server database.

Prospects

DMI is now considered a legacy specification, and the newer Web-Based Enterprise Management (WBEM) initiative from the DMTF has largely replaced it. WBEM specifies a Common Information Model (CIM) as a common abstraction layer for unifying the various existing data providers for system and network management, including DMI and SNMP. Microsoft Corporation has implemented WBEM in the Windows 2000, Windows XP, and Windows .NET Server family operating systems as Windows Management Instrumentation (WMI) and in Microsoft Systems Management Server 2.0.

See Also Common Information Model (CIM) , Web-Based Enterprise Management (WBEM), Windows Management Instrumentation (WMI)

destination address

The address to which a frame or packet of data is sent over a network.

Overview

The destination address is used by hosts on the network to determine whether the packet or frame is intended for them or for other hosts. The destination address is also used by routers to determine how to forward the packet or frame through an internetwork. The destination address can be one of the following:

Destination addresses can be either specific or general. Specific addresses point to a specific host on the network. A general address points the packet or frame to all hosts on the network or multicasts it to a specific multicast group of hosts on the network.

You can see the destination address of a packet or frame by using a protocol analyzer (sniffer) such as Network Monitor, a tool included with Microsoft Systems Management Server (SMS). Network Monitor displays destination addresses in both ASCII and hexadecimal form.
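As an illustration of where the destination address sits in a frame, the following Python sketch (the frame bytes are hypothetical and purely for illustration) pulls the destination address out of the first six bytes of an Ethernet frame and classifies it as specific or general:

    def destination_of(frame: bytes) -> str:
        """Return the destination MAC address from a raw Ethernet frame."""
        dest = frame[:6]                       # Ethernet frames begin with the destination address
        mac = "-".join(f"{b:02x}" for b in dest)
        if dest == b"\xff\xff\xff\xff\xff\xff":
            return f"{mac} (broadcast - all hosts)"
        if dest[0] & 0x01:                     # group bit set means a multicast address
            return f"{mac} (multicast group)"
        return f"{mac} (specific host)"

    # Hypothetical frame: broadcast destination, made-up source and type fields.
    frame = b"\xff\xff\xff\xff\xff\xff" + b"\x00\x0c\x29\x3a\x5f\x1e" + b"\x08\x00" + b"payload"
    print(destination_of(frame))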

Notes

The other kind of address in a packet or frame is the source address. This is the address of the host from which the packet originates (unless the source address is being spoofed).

See Also source address

device

Generally, any hardware component that can be driven by software. In Microsoft SQL Server, a file used to store databases.

Overview

In Microsoft Windows 2000, Windows XP, and Windows .NET Server, you can manage devices and their drivers using Device Manager, which you access through the System utility in Control Panel. In Windows NT, you can view, enable, disable, stop, and start devices using the Devices utility in Control Panel. You can also install and uninstall devices and update device drivers in Device Manager.

In Microsoft SQL Server, a device is a file used to store SQL Server databases. Multiple SQL Server databases can be stored on a single device, and a single database can span multiple devices.

The master system device contains four databases:

Dfs

Stands for distributed file system, a network file system that makes many shares on different file servers look like a single hierarchy of shares on a single file server.

See Also Distributed file system (Dfs)

DHCP

Stands for Dynamic Host Configuration Protocol, a protocol that enables the dynamic configuration of Internet Protocol (IP) address information for hosts on an internetwork.

See Also Dynamic Host Configuration Protocol (DHCP)

DHCP client

Software running on an Internet Protocol (IP) host that enables the host to have its IP address information dynamically assigned using Dynamic Host Configuration Protocol (DHCP).

Overview

The term DHCP client can also describe the software component on a computer that is capable of interacting with a DHCP server to lease an IP address.

Microsoft Windows comes with DHCP client software that you can configure when you install the TCP/IP protocol suite. This software allows a machine to immediately take its place in TCP/IP internetworks using DHCP. Other operating systems might require that the DHCP client software be installed and configured separately.

Microsoft operating systems that can function as DHCP clients include the following:

Notes

On machines running Windows 2000, Windows XP, and Windows .NET Server, the DHCP client is DNS-aware and uses dynamic update for registering addresses, which allows the IP address and fully qualified domain name (FQDN) of client machines to be assigned and supported together.

Windows NT, Windows 95, and Windows 98 clients can release and renew their IP address leases using the ipconfig command (for example, ipconfig /release followed by ipconfig /renew). This command can also be useful for resolving IP address conflicts or for troubleshooting DHCP clients and servers.

See Also DHCP server ,Dynamic Host Configuration Protocol (DHCP)

DHCP client reservation

A process for configuring a Dynamic Host Configuration Protocol (DHCP) server so that a particular host on the network always leases the same Internet Protocol (IP) address.

Overview

You can create a client reservation on a DHCP server if you want the server to always assign the same IP address to a specific machine on the network. You might do this for servers on the network because the IP addresses of servers should not change. (If they do, client machines might have difficulty connecting with them.) A more common alternative to creating a client reservation for a server is simply to assign the server a static IP address manually.

On Microsoft Windows 2000- and Windows .NET Server-based networks you can create DHCP client reservations using the DHCP console, and in Windows NT you use DHCP Manager. Enter the media access control (MAC) address as the client's unique identifier. When the client with that address contacts the DHCP server to request an IP address, the server leases the reserved address to the client.
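Conceptually, a reservation is just a lookup keyed on the client's MAC address that takes precedence over the free address pool, as in this minimal Python sketch (all addresses are hypothetical):

    # Hypothetical reservation table and free pool maintained by a DHCP server.
    reservations = {"00-0c-29-3a-5f-1e": "192.168.1.50"}
    free_pool = ["192.168.1.100", "192.168.1.101"]

    def choose_address(client_mac: str) -> str:
        """Lease the reserved address if this MAC has one; otherwise pick from the pool."""
        mac = client_mac.lower().replace(":", "-")
        return reservations.get(mac) or free_pool.pop(0)

    print(choose_address("00:0C:29:3A:5F:1E"))   # always 192.168.1.50 for the reserved client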

See Also DHCP console

DHCP Client service

The service in Microsoft Windows 2000, Windows XP, Windows .NET Server, and Windows NT that implements the client component of the Dynamic Host Configuration Protocol (DHCP) on workstations and servers.

Overview

You can use the DHCP Client service to obtain Internet Protocol (IP) addresses and other Transmission Control Protocol/Internet Protocol (TCP/IP) configuration information from a DHCP server (such as a Windows 2000 or Windows NT server running the DHCP Server service).

Microsoft Windows includes support for DHCP and provides client software that lets you manage a machine's IP address over a network. This software runs as a service under Windows 2000, Windows XP, Windows .NET Server, and Windows NT. DHCP simplifies the administration and management of IP addresses for machines on a TCP/IP network.

See Also DHCP Server service ,Dynamic Host Configuration Protocol (DHCP)

DHCP console

A Microsoft Windows 2000 and Windows .NET Server administrative tool for managing the DHCP Server service on Windows 2000 Server and Windows .NET Server.

Overview

The DHCP console is the main tool used for managing and configuring all aspects of the Dynamic Host Configuration Protocol (DHCP) on a Windows 2000- and Windows .NET Server-based network and is implemented as a snap-in for the Microsoft Management Console (MMC).

DHCP console. The DHCP console for Windows 2000 Server.

You can use the DHCP console for the following standard DHCP administration tasks:

The DHCP console also includes the following advanced features, which are new to Windows 2000 and are also included with Windows .NET Server:

See Also Dynamic Host Configuration Protocol (DHCP)

DHCP lease

The duration for which a Dynamic Host Configuration Protocol (DHCP) server lends an IP address to a DHCP client.

Overview

You can configure the lease duration using the Microsoft Windows NT administrative tool DHCP Manager or the Windows 2000 or Windows .NET Server DHCP console snap-in. If your Transmission Control Protocol/Internet Protocol (TCP/IP) network configuration does not change often or if you have more than enough IP addresses in your assigned IP address pool, you can increase the DHCP lease considerably beyond its default value of three days. However, if your network configuration changes frequently or if you have a limited pool of IP addresses that is almost used up, keep the lease duration short, perhaps one day. The reason is that if the pool of available IP addresses is used up, machines that are added or moved might be unable to obtain an IP address from a DHCP server and thus will be unable to participate in network communication.

See Also DHCP console

DHCP options

Additional Internet Protocol (IP) address settings that a Dynamic Host Configuration Protocol (DHCP) server passes to DHCP clients.

Overview

When a DHCP client requests an IP address from a DHCP server, the server sends the client at least an IP address and a subnet mask value. Additional information can be sent to clients if you configure various DHCP options. You can assign these options globally to all DHCP clients, to clients belonging to a particular scope, or to an individual host on the network.

You can configure a number of different DHCP options using the Microsoft Windows 2000 or Windows .NET Server snap-in DHCP console, but the options listed in the following table are the ones most commonly used by Microsoft DHCP clients. In Windows 2000- and Windows .NET Server-based networks, options 3, 6, and 15 are commonly used.

DHCP Options

Number   Option                              What It Configures
003      Router                              Default gateway IP address
006      DNS Servers                         IP addresses of DNS servers
015      DNS Domain Name                     Parent domain of associated DNS servers
044      NetBIOS over TCP/IP Name Server     IP addresses of Windows Internet Name Service (WINS) server
046      NetBIOS over TCP/IP Node Type       Method of NetBIOS name resolution to be used by the client
047      NetBIOS over TCP/IP Scope           Restricts NetBIOS clients to communication with clients that have the same scope ID
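On the wire, each of these options is carried in the DHCP message as a simple type-length-value field: the option number, the length of the value, and then the value itself. The following Python sketch, using purely hypothetical addresses, illustrates how options 003, 006, and 015 might be encoded:

    import socket
    import struct

    def dhcp_option(code: int, value: bytes) -> bytes:
        """Encode one DHCP option as code, length, value."""
        return struct.pack("BB", code, len(value)) + value

    router = dhcp_option(3, socket.inet_aton("192.168.1.1"))            # 003 Router
    dns = dhcp_option(6, socket.inet_aton("192.168.1.10")
                         + socket.inet_aton("192.168.1.11"))            # 006 DNS Servers
    domain = dhcp_option(15, b"example.com")                            # 015 DNS Domain Name

    options = router + dns + domain + bytes([255])                      # 255 ends the option list
    print(options.hex())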

See Also DHCP console

DHCP relay agent

A host that enables a Dynamic Host Configuration Protocol (DHCP) server to lease addresses to hosts on a different subnet than the one the server is on.

Overview

DHCP relay agents make it unnecessary to maintain a separate DHCP server on every subnet in an internetwork. Without relay agents, every subnet on an internetwork would need at least one DHCP server to provide address leases to hosts on that subnet. With relay agents, you can manage with only a single DHCP server for your entire internetwork (though two are recommended in case of failure).

DHCP relay agent. How a DHCP relay agent works.

Implementation

The DHCP relay agent is a machine with the DHCP Relay Agent service installed and configured to forward DHCP requests to a DHCP server on a different subnet (as shown in the illustration). The process happens as follows, and a short sketch of the forwarding step appears after the list:

  1. A DHCP client on the subnet where the DHCP relay agent is configured broadcasts a request for a lease from a DHCP server.

  2. Since there is no DHCP server on the client's subnet, the DHCP relay agent picks up the client's request and forwards it directly to the DHCP server on another subnet.

  3. The DHCP server responds to the request by offering a lease directly to the client.
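Step 2 can be pictured with a minimal Python sketch (the addresses are hypothetical, and a real relay also listens on UDP port 67, increments the hops field, and relays the server's replies back to the client): the relay writes its own interface address into the giaddr field of the DHCP message so the server knows which subnet the request came from, then unicasts the message to the server.

    import socket

    RELAY_IP = "192.168.5.1"     # hypothetical address of the relay's interface on the client subnet
    DHCP_SERVER = "10.0.0.10"    # hypothetical DHCP server on another subnet

    def relay_to_server(request: bytes) -> None:
        """Forward a client's broadcast DHCP request to a DHCP server on another subnet."""
        msg = bytearray(request)
        msg[24:28] = socket.inet_aton(RELAY_IP)       # giaddr occupies bytes 24-27 of the DHCP header
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(bytes(msg), (DHCP_SERVER, 67))   # DHCP servers listen on UDP port 67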

Notes

You can configure Microsoft Windows NT, Windows 2000, and Windows .NET Server machines to operate as DHCP relay agents. To configure a machine running Windows NT Server as a DHCP relay agent, perform the following steps:

  1. Install the DHCP Relay Agent service using the Services tab of the Network utility in Control Panel.

  2. Configure the DHCP server that the agent will pass requests to. Do this on the DHCP Relay tab of the Microsoft TCP/IP Properties sheet of the TCP/IP protocol.

To configure a Windows 2000 server as a DHCP relay agent, follow these steps:

  1. Open the Routing and Remote Access console from the Administrative Tools program group.

  2. Expand the server node to display General beneath IP Routing in the console tree.

  3. Right-click General, and select New Routing Protocol from the context menu.

  4. Specify DHCP Relay Agent in the New Routing Protocol dialog box, and click OK.

  5. Open the property sheet for DHCP Relay Agent under IP Routing in the console tree, specify the IP address of the DHCP server to which lease requests should be relayed, and click OK.

  6. Right-click DHCP Relay Agent in the console tree, and select New Interface to specify a router interface on which relay will be enabled.

See Also Dynamic Host Configuration Protocol (DHCP)

DHCP scope

A range of Internet Protocol (IP) addresses that a Dynamic Host Configuration Protocol (DHCP) server can lease out to DHCP clients.

Overview

You configure the DHCP scope using the Microsoft Windows NT administrative tool DHCP Manager or the Windows 2000 and Windows .NET Server snap-in DHCP console. The IP addresses are leased for a specific Time to Live (TTL), usually three days. Information about scopes and leased IP addresses is stored in the DHCP database on the DHCP server. The values for IP address scopes created on DHCP servers must be taken from the available pool of IP addresses allocated to the network. Errors in configuring the DHCP scope are a common reason for problems in establishing communication on Transmission Control Protocol/ Internet Protocol (TCP/IP) networks.

Notes

If non-DHCP clients have static IP addresses that fall within the range of the server's DHCP scope, these static IP addresses must be excluded from the scope. Otherwise, two hosts might end up with the same IP address, one assigned statically and the other assigned dynamically, resulting in neither host being able to communicate on the network.
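As a simple illustration (the scope boundaries and static addresses below are hypothetical), checking which static addresses need exclusions is just a range test:

    import ipaddress

    # Hypothetical scope handed out by the DHCP server, plus statically configured hosts.
    scope_start = ipaddress.ip_address("192.168.1.100")
    scope_end = ipaddress.ip_address("192.168.1.200")
    static_hosts = ["192.168.1.150", "192.168.1.250"]

    # Any static address that falls inside the scope must be excluded from it.
    to_exclude = [ip for ip in static_hosts
                  if scope_start <= ipaddress.ip_address(ip) <= scope_end]
    print(to_exclude)   # ['192.168.1.150']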

See Also DHCP console

DHCP server

A server that dynamically allocates Internet Protocol (IP) addresses to client machines using the Dynamic Host Configuration Protocol (DHCP).

Overview

DHCP servers perform the server-side operation of the DHCP protocol. The DHCP server is responsible for answering requests from DHCP clients and leasing IP addresses to these clients.

DHCP servers should have static IP addresses. A DHCP server gives DHCP clients at least two pieces of Transmission Control Protocol/Internet Protocol (TCP/IP) configuration information: the client's IP address and the subnet mask. Additional TCP/IP settings can be passed to the client as DHCP options.

Implementation

To have Microsoft Windows 2000 or Windows .NET Server function as a DHCP server, install the DHCP Server service and manage it using the DHCP console snap-in for the Microsoft Management Console (MMC). To have Windows NT Server function as a DHCP server, install the DHCP Server service and configure it using the administrative tool DHCP Manager. Note that a DHCP server should generally not be a DHCP client-that is, it should have a static IP address.

Notes

If hosts on a TCP/IP network are randomly losing connectivity with the network one by one, the DHCP server might be down and unable to renew leases for IP addresses obtained by the clients. Without a valid IP address leased to them, DHCP clients cannot communicate over the network.

See Also DHCP client ,DHCP Server service

DHCP Server service

The service in Microsoft Windows 2000, Windows .NET Server, and Windows NT that implements the server component of the Dynamic Host Configuration Protocol (DHCP) on Windows 2000, Windows .NET Server, or Windows NT Server.

Overview

The DHCP Server service is an optional networking component that can be installed on

Notes

The DHCP Server service should generally be installed only on a machine that has a manually assigned static IP address.

See Also Dynamic Host Configuration Protocol (DHCP)

DHTML

Stands for Dynamic HTML, a proposed World Wide Web Consortium (W3C) standard developed by Microsoft Corporation for creating interactive multimedia Web content.

See Also Dynamic HTML (DHTML)

Dial-on-Demand Routing (DDR)

A method for connecting two remote networks together using Integrated Services Digital Network (ISDN) and dial-up Public Switched Telephone Network (PSTN) connections.

Overview

Dial-on-Demand Routing (DDR) is a procedure used in Cisco routers to connect two remote networks only when there is traffic to forward between them. DDR uses circuit-switched dial-up connections that must first be established for communications to take place and that are torn down when communications are finished. DDR is not the same as remote access: in remote access a computer connects with a remote network, but in DDR two networks are being connected together using routers.

DDR is a way of minimizing costs over wide area network (WAN) links. DDR brings the link up only when data needs to be sent to the remote network, which is generally cheaper than having a dedicated or leased line connecting the networks that is on all the time.

Implementation

DDR is typically implemented using ISDN for backup WAN connections when the primary WAN link is a dedicated T1 line. DDR allows ISDN calls to be placed to one or more remote networks as required. These calls typically take less than 5 seconds to connect and begin transferring data. During the connection phase a switched circuit must be established between the networks. After the call is finished, this circuit is torn down after a prespecified idle time period has elapsed to save money. Since different circuits may be used for each DDR session, the quality of the connection can vary from session to session.

When used with ISDN, DDR allows different service types to be assigned to different kinds of traffic. Only traffic classified as "interesting" causes a dial-up session to be initiated; all other traffic is ignored by the DDR-capable router. A typical scenario might be to use DDR to connect when Simple Mail Transfer Protocol (SMTP) mail needs to be forwarded to a remote network. On a Cisco router you use the dialer-list command from the Internetwork Operating System (IOS) command set to specify which kinds of traffic are "interesting" from the standpoint of DDR. For IOS 11 and higher, DDR supports a number of different protocols including Internet Protocol (IP), Internetwork Packet Exchange (IPX), and AppleTalk.

When implementing DDR for a WAN link, make sure you configure static routes to your remote network. Using dynamic routing will generate routing table advertisements that may trigger unwanted DDR sessions.

Notes

Dial-on-Demand Routing (DDR) is different from Demand-Dial Routing (DDR), a Microsoft technology for forwarding packets on request across a Point-to-Point Protocol (PPP) WAN link.

See Also Integrated Services Digital Network (ISDN) ,routing

dial-up line

Any telecommunications link that is serviced by a modem.

Overview

Dial-up lines are ordinary phone lines used for voice communication, whereas dedicated or leased lines are digital lines with dedicated circuits. Dial-up lines are generally much less expensive to use, but they have less available bandwidth.

Companies often use dial-up lines for occasional, low-bandwidth usage (such as remote access networking) or as a backup for more costly dedicated lines. Dial-up lines are shared with all subscribers in the Public Switched Telephone Network (PSTN) domain, while dedicated or leased lines are allocated solely to the subscriber's private telecommunications domain.

Besides dial-up lines using analog modems over local loop connections, there are also some digital services that can be dial up (instead of dedicated) in nature. These services include

See Also dedicated line

DID

Stands for direct inward dialing, a service provided by a local exchange carrier (LEC) to a corporate client.

See Also direct inward dialing (DID)

differential backup

A backup type in which the only files and folders that are backed up are those that have changed since the last normal backup occurred.

Overview

Unlike an incremental backup, a differential backup does not clear the archive attribute for each file and folder. You can use differential backups in conjunction with normal backups to simplify and speed up the process. If a normal backup is done on a particular day of the week, differential backups can be performed on the remaining days of the week to back up the files that have changed since the first day of the schedule. Differential backups are faster than normal backups and use less tape or other storage media.
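The relationship between backup types and the archive attribute can be pictured with a short Python sketch (a simplified model for illustration, not the behavior of any particular backup product): differential backups copy everything changed since the last normal backup precisely because they never clear the archive bit.

    def select_for_backup(files, backup_type):
        """Pick files for one backup run and update archive bits in this simplified model.

        files: list of dicts such as {"name": "report.doc", "archive": True};
        the file system sets the archive bit whenever a file changes.
        """
        if backup_type == "normal":
            chosen = list(files)                              # full backup copies everything
        else:
            chosen = [f for f in files if f["archive"]]       # only files changed since the last clear

        if backup_type in ("normal", "incremental"):
            for f in chosen:
                f["archive"] = False                          # differential leaves the bit set
        return chosen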

Notes

Differential backups are cumulative (unlike incremental backups), so when you need to do a restore, you need only the normal backup and the most recent differential backup. Differential backups take longer to complete than incremental backups, but you can restore data from them faster.

See Also backup type ,incremental backup

Differentiated Services (DS)

A system for service classification of network traffic.

Overview

Differentiated Services (DS) was developed by the diff-serv working group of the Internet Engineering Task Force (IETF) as a framework for standardizing service classification mechanisms. DS manages network traffic based on the forwarding behaviors of packets instead of by traffic priority or application. DS is rule-based and can be used for policy-based traffic management.

Implementation

DS works by packaging Differentiated Services Code Point (DSCP) information within standard Internet Protocol (IP) headers. This DSCP information specifies the level of service required for the packet and supports up to 64 different traffic forwarding behaviors. The DSCP maps a particular packet to a per-hop behavior (PHB) that a policy applies at a DS-compliant router.
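The DSCP occupies the six high-order bits of the old IP type-of-service byte, which is why 64 (2 to the power of 6) behaviors are possible. As a rough illustration (operating system support for setting this option varies, and the address below is a documentation address), an application could mark its outgoing datagrams like this:

    import socket

    EF_DSCP = 46                  # Expedited Forwarding, a commonly used per-hop behavior
    tos = EF_DSCP << 2            # the DSCP sits in the top 6 bits of the former ToS byte

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if hasattr(socket, "IP_TOS"):                           # support varies by operating system
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    s.sendto(b"marked datagram", ("192.0.2.10", 9))         # 192.0.2.10 is a documentation address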

See Also quality of service (QoS)

Diffie-Hellman (DH)

An encryption scheme used in public key cryptography.

Overview

Diffie-Hellman (DH) is an asymmetric scheme in which each party uses a key pair consisting of a mathematically related public and private key. Each party keeps its private key secret and exchanges its public key with the other party; by combining its own private key with the other party's public key, each side derives the same shared secret, which can then be used to encrypt messages. In this way DH provides a key exchange mechanism that allows secret keys to be agreed on over the Internet without ever transmitting them.

Another asymmetric encryption algorithm is RSA, developed by Ron Rivest, Adi Shamir, and Leonard Adleman. Asymmetric encryption schemes typically have much larger keys than symmetric schemes such as DES (Data Encryption Standard). A key for an asymmetric scheme is typically 1024 bits or larger. Asymmetric encryption schemes form the basis of the Secure Sockets Layer (SSL) protocol used on the Internet.
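Returning to DH itself, a toy numeric example (deliberately tiny numbers; real deployments use primes of 1024 bits or more) shows how both sides arrive at the same secret without ever sending it across the wire:

    # Public parameters agreed on in advance (toy values for illustration only).
    p, g = 23, 5                      # prime modulus and generator

    a = 6                             # Alice's private key, never transmitted
    b = 15                            # Bob's private key, never transmitted

    A = pow(g, a, p)                  # Alice sends this public value to Bob
    B = pow(g, b, p)                  # Bob sends this public value to Alice

    shared_alice = pow(B, a, p)       # Alice combines Bob's value with her private key
    shared_bob = pow(A, b, p)         # Bob combines Alice's value with his private key
    assert shared_alice == shared_bob == 2   # both sides derive the same secret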

See Also Data Encryption Standard (DES) ,public key cryptography

digital

Transmission of signals that vary discretely with time.

See Also digital transmission

Digital Advanced Mobile Phone Service (D-AMPS)

The digital version of the Advanced Mobile Phone Service (AMPS) cellular communications system.

Overview

AMPS is the oldest cellular phone system still widely deployed. It is an analog system operating in the 800 megahertz (MHz) band and is based on the EIA-553 standard.

Digital Advanced Mobile Phone Service (D-AMPS) is the digital version of AMPS and is based on the IS-54 standard. D-AMPS has been around since 1992, and it builds on the large installed base of AMPS cellular network installations. D-AMPS is used in North and South America and in parts of Asia and the Pacific region. The technical name for D-AMPS is Time Division Multiple Access/IS-136.

Implementation

D-AMPS uses the same 800 MHz frequency band as AMPS, specifically frequencies between 824 and 891 MHz (although a dual-band 800/1900 MHz system has also been implemented). D-AMPS also uses 30-kilohertz (kHz) channels as AMPS does, but whereas AMPS can carry only one conversation per channel, D-AMPS can carry between three and six conversations. As a result, D-AMPS extends the capacity of AMPS from 50 conversations per cell to up to 300 conversations per cell, making D-AMPS more efficient in bandwidth utilization than AMPS.

While AMPS transmits information continuously, D-AMPS transmits in bursts over a time-shared system based on Time Division Multiple Access (TDMA) technology as the media access method. D-AMPS is more immune to interference from noise than AMPS and can be scrambled to make it more secure (AMPS is very easy to eavesdrop on).

D-AMPS is cheaper and easier to implement than other digital cellular systems such as Code Division Multiple Access (CDMA), but its transmissions are not as secure as CDMA. D-AMPS represents a simpler upgrade path from AMPS than Global System for Mobile Communications (GSM), which also uses TDMA technology but does so in an incompatible format.

See Also Advanced Mobile Phone Service (AMPS) ,cellular communications ,Code Division Multiple Access (CDMA) ,Global System for Mobile Communications (GSM) ,Time Division Multiple Access (TDMA)

Digital Advanced Wireless System (DAWS)

A proposed standard for a multimegabit packet-switching radio network from the European Telecommunications Standards Institute (ETSI).

Overview

The Digital Advanced Wireless System (DAWS) will be compatible with the existing packet radio system called the Terrestrial Trunked Radio (Tetra), which enables terminals to communicate directly with each other in regions without cellular coverage.

DAWS is being developed in response to the rapid deployment of Global System for Mobile Communications (GSM) wireless cellular communication systems and the increasing demand for high-speed wireless mobile data services in response to the phenomenal growth of the Internet in recent years. The ultimate goal of the DAWS effort is to provide mobile wireless Asynchronous Transfer Mode (ATM) data communication services with full-terminal mobility over wide areas of roaming. ATM has been selected by ETSI as the technology of choice for the backbone of the future envisaged European Information Infrastructure (EII).

DAWS will be designed to support applications that require data rates in excess of the 2-megabit-per-second (Mbps) rate supported by the International Mobile Telecommunications-2000 (IMT-2000) standards, with eventual planned support for full ATM rates of 155 Mbps envisioned. Examples include wireless networking, Internet browsing, video conferencing, file transfer, and Voice over IP (VoIP).

See Also Global System for Mobile Communications (GSM) ,International Mobile Telecommunications-2000 (IMT-2000) ,Terrestrial Trunked Radio (Tetra)

Digital Audio Broadcasting (DAB)

A specification for broadband digital radio.

Overview

Digital Audio Broadcasting (DAB) is a specification for broadband transmission of digital information at speeds up to 2.4 megabits per second (Mbps). DAB is designed for transmission of digital audio and is envisioned as a replacement for existing analog AM/FM radio systems. DAB is a European specification only; however, a similar (but not interoperable) specification called IBOC (In Band, On Channel) is being developed in the United States.

Uses

Although DAB is intended mainly for audio broadcasting and is regulated as such, the possibility of its supporting digital data broadcasts, and even Internet access, is currently being explored. Since digital audio requires only 56 kilobits per second (Kbps), DAB's 2.4 Mbps bandwidth leaves plenty of room for broadcasting location-based information such as weather reports, traffic information, airline arrivals and departures, stock quotes, and so on.

Britain leads the world in DAB deployment, with five channels already allocated to the BBC. Other European countries and regions are following with their own deployments. Psion has introduced a DAB receiver that plugs into the USB port of a PC to provide users with reception of DAB broadcasts, and other vendors are developing similar devices.

Although DAB is a broadcast (one-way) service, some carriers are envisioning using DAB to provide users with high-speed Internet access by using conventional or wireless modems for the upstream connection and DAB for downstream.

Prospects

DAB may gain the advantage over 2.5G and third-generation (3G) cellular systems such as General Packet Radio Services (GPRS) and Universal Mobile Telecommunications System (UMTS) by being first to market with high-speed location-based digital broadcast services. Spectrum licenses for DAB have cost only a small fraction of what carriers have bid for 3G licenses, allowing DAB providers more liquidity for rolling out deployments rapidly.

See Also 2.5G ,3G

digital certificate

A technology similar to an identification card that can be used for verifying the identity of a user or service you are communicating with electronically.

Overview

Digital certificates are entities issued by certificate authorities (CAs), public or private organizations that manage a public key infrastructure (PKI). Digital certificates are the networking equivalent of driver's licenses, and they go hand in hand with encryption to ensure that communication is secure. Digital certificates verify the authenticity of the holder, and they can also indicate the holder's privileges and roles within secure communication. They can be used like driver's licenses for identification purposes or like bank cards (together with a password) to perform financial transactions in e-commerce and online banking. Digital certificates enable various rights, permissions, and limitations to be applied to their holders for various kinds of trusted communication purposes such as purchasing, government banking, benefits, and voting rights. The main function of a digital certificate is to associate a specific user with his public/private key pair.

Implementation

A digital certificate consists of data that definitively identifies an entity (an individual, a system, a company, or an organization). Digital certificates are issued by and digitally signed with the digital signature of the CA (once the CA has verified the identity of the applying entity). In addition to identification data, the digital certificate contains a serial number, a copy of the certificate holder's public key, the identity and digital signature of the issuing CA, and an expiration date. The CA also maintains a copy of the user's public key in its centralized certificate storage facility.

Digital certificates are formatted according to an International Organization for Standardization (ISO) standard called X.509 v3. The X.509 standard specifies that a digital certificate must contain the following information fields:

Uses

Digital certificates and public key cryptography are used in the popular Secure Sockets Layer (SSL) protocol, which provides secure transactions over the Internet. Several types of digital certificates are involved in this process, including

Notes

A digital certificate is not the same as a digital signature. A digital certificate is a file that certifies the owner's identity, contains the owner's public key, and can be used to support encrypted communication. The purpose of a digital certificate is to certify that the user has the right to use the public/private key pair that has been issued by the CA. A digital signature, on the other hand, accompanies the message or document itself: a digest (hash) of the message is computed and then signed with the sender's private key. The signature confirms the identity of the sender and ensures that the content of the message has not been modified in transit.

In other words, to send an encrypted transmission, a user signs the message with a digital signature. But in order to be able to do this at all, the user must first be issued a key pair and its associated digital certificate.

See Also certificate authority (CA) , public key cryptography

digital dashboard

A technology based on Microsoft Office for customizing a user's interface to contain information from multiple data sources.

Overview

Digital dashboards allow users to consolidate business information from different sources such as personal folders, team folders, databases, messaging systems, Web sites, and so on. They provide a single, customizable user interface that helps users sift through and organize the mass of information crying for their attention in today's busy office environment.

Digital dashboards are designed to make knowledge workers more productive and help facilitate collaboration between teams of individuals. They are easy to build and customize and are based on standard Microsoft technologies, with Microsoft Outlook at the center of things.

Implementation

A digital dashboard in its simplest sense is a dynamic Web page displayed in Outlook. Outlook is the messaging and collaboration component of Office and acts as the infrastructure on which digital dashboards are constructed. Using Office Web Components (OWCs), developers can build systems that can allow documents, messages, spreadsheets, databases, and charts to be generated and published from back-end systems and displayed through digital dashboards. Typical back-end systems that can support digital dashboards include Microsoft Exchange Server, which lets users create and share team folders for collaboration purposes, and Microsoft SQL Server, an online analytical processing (OLAP) repository for storing and analyzing business data.

For More Information

You can download the Digital Dashboard Starter Kit from www.microsoft.com/business.

See Also Exchange Server ,Outlook ,SQL Server

digital data service (DDS)

A family of leased line data communication technologies that provides a dedicated synchronous transmission connection at speeds of 56 kilobits per second (Kbps).

Overview

DDS was originally a trademark for an AT&T all-digital service running at 56 Kbps, but it has evolved into a general descriptor for a variety of digital services offered by different carriers under various names. Digital data service (DDS) is available in both a dial-up version called Switched 56 and a dedicated leased line service for continuous connections. The dial-up version can serve as a backup for the dedicated version. The more common dedicated version consists of lines with negligible connection establishment latency; they are always on and never busy.

Implementation

Typically, DDS uses four wires to support digital transmission speeds of 56 Kbps, but it is actually a 64-Kbps circuit that uses 8 Kbps for sending signaling information. Some vendors provide a variant of DDS with a data transmission rate of a full 64 Kbps; this service is sometimes called Clear 64.

To use DDS services for wide area network (WAN) connectivity, route packets from your local area network (LAN) through a bridge or a router, which is connected by means of a V.35 or RS-232 serial interface to a Channel Service Unit/Data Service Unit (CSU/DSU). The CSU/DSU is connected to the four-wire termination of the DDS line by means of an M-block connector, a screw terminal block, or some other connection mechanism. The Channel Service Unit (CSU) converts the data signal into a bipolar signal suitable for transmission over the telecommunications link. The DDS lines themselves use four wires and support speeds of 64 Kbps, but 8 Kbps of bandwidth is usually reserved for signaling, so the actual data throughput is usually only 56 Kbps.

Digital data service (DDS). Implementing DDS.

DDS is only one example of a type of digital line; others include Integrated Services Digital Network (ISDN) and T1. DDS can be used in either multipoint or point-to-point communications and requires dedicated digital lines. DDS lines can also be used to connect buildings on a campus, usually with a maximum distance of about 3 miles (5 kilometers).

Notes

Another name for DDS is Dataphone Digital Service.

See Also Switched 56 ,telecommunications services

Digital Data Storage (DDS)

A tape backup technology that evolved from Digital Audio Tape (DAT) technologies.

Overview

Digital Data Storage (DDS) is a tape backup technology broadly used in businesses of all sizes. DDS provides high capacity and performance at a relatively low cost. Although DDS drives are commonly called DAT drives, this is a misnomer; DAT refers only to the type of tape employed.

Implementation

DDS records information on tapes using a helical scan method similar to that used in VCRs. Tracks are laid down at an angle in sweeps across the width of the tape, which allows DDS tape drives to operate at slower speeds than parallel-scan drives. The result of lower tape speeds is less wear and tear on both the tape and the drive.

DDS tape drives use a Small Computer System Interface (SCSI) interface to connect to backup servers. Most DDS tape drives are capable of simultaneous reading and writing of information.

DDS comes in different flavors that determine capacity and backup speed. For example, DDS-2 supports backup of 4 gigabytes (GB) of data at a transfer rate of 46 megabytes (MB) per minute, and DDS-3 lets you back up 12 GB at 70 MB per minute. Higher levels, such as DDS-4, exist for large enterprise networks. These storage figures are for uncompressed data (double them for compressed data).

DDS tapes are cheap (usually costing under $10), which means that DDS is best used when your company's tape rotation scheme demands that a large number of tapes be used. DDS is a good solution for small to mid-sized companies implementing their first tape backup solution because it is inexpensive, easy to use, and performs well. The price of DDS tape drives starts at around $1,000.

See Also tape format

digital line

An umbrella term for various kinds of digital telecommunications services.

Overview

The distinguishing feature of a digital line is that it is digital from end to end and does not employ any kind of analog modem technologies. As a result, digital lines have higher traffic-carrying capacities, less noise, and better error-handling features than analog lines. The term digital line can refer to circuits based on the following:

See Also digital data service (DDS) ,Integrated Services Digital Network (ISDN) ,Switched 56 ,T-carrier

Digital Linear Tape (DLT)

A tape backup technology.

Overview

Digital Linear Tape (DLT) was developed in 1991 by Conner (later acquired by Quantum Corporation) as a tape backup solution for large companies. DLT tape drives can typically back up information at rates as high as 300 megabytes (MB) per minute. DLT tape drives typically cost $10,000 or more and are available in robotic tape libraries as well as standard single-tape units. DLT was designed as an enterprise backup solution and is comparable to Exabyte Corporation's 8mm format in that respect.

Implementation

DLT is a channelized tape backup technology that allows multiple channels to be backed up simultaneously in parallel on a single tape. A DLT drive's tape head is stationary and has multiple read/write channels on it. As a result, DLT can back up information much more quickly than many other types of tape backup technologies such as 8mm or Digital Data Storage (DDS).

An example is Quantum's DLT 7000 series, which supports backup of 35 gigabytes (GB) uncompressed (70 GB compressed) at transfer speeds up to 5 MBps.

Notes

Quantum has recently developed an upgrade to DLT called SuperDLT, which competes with Linear Tape-Open (LTO) tape drive technology in speed and storage capacity.

See Also tape format

digital modem

Any type of modem used for synchronous transmission of data over circuit-switched digital lines.

Overview

Unlike analog modems, which must convert digital data into analog signals and back again using modulation and analog-to-digital converter (ADC) technologies, digital modems operate over end-to-end digital services.

A common example of a digital modem is an Integrated Services Digital Network (ISDN) terminal adapter, which uses advanced digital modulation techniques for changing data frames from a network into a format suitable for transmission over a digital line such as an ISDN line. Terminal adapters are thus basically data framing devices, rather than signal modulators such as analog modems, so in some sense the term digital modem is a misnomer because no modulation actually occurs.

See Also Integrated Services Digital Network (ISDN)

digital nervous system

A paradigm created by Microsoft Corporation for electronic connectivity between businesses.

Overview

A digital nervous system enables businesses to create efficient, integrated systems that are easy to use and manage. The digital nervous system can be viewed as the next evolutionary phase of the Information Age.

The idea of this business paradigm is that businesses connect to each other in a way that is similar to the organization of a living organism. Digital information, whether it is text, graphics, audio, or video, flows between businesses much as electrical impulses flow between parts of the body. A stimulus of information entering one business that is generated by another business produces a response. The greater the complexity and the more interconnected the nervous system, the higher the organism, and the same applies to business. Greater interflow of digital information can lead to the evolution of new forms of doing business. The Internet and its related paradigms "intranet" and "extranet" serve as examples of this evolution. These concepts grew naturally, almost organically, from the complex interconnectedness fostered by advances in software and networking.

For More Information

You can visit the Digital Nervous System site at www.microsoft.com/dns.

Digital Signal Zero

More usually known simply as DS-0, a transmission standard for digital telecommunications having a transmission rate of 64 kilobits per second (Kbps) and intended to carry one voice channel.

See Also DS-0

digital signature

An electronic signature that you can use to validate the identity of the sender of a digital transmission.

Overview

Digital signatures can be used to sign a document being transmitted by electronic means such as e-mail. Digital signatures validate the identity of the sender and ensure that the document they are attached to has not been altered by unauthorized parties during the transmission.

Uses

Digital signatures are mainly intended for signing documents that have a relatively short lifespan of no more than a few years. Examples include business contracts, invoices, and similar documents. They are not generally suited for documents requiring long-term archiving, such as medical or financial documents, because advances in cryptography might render them insecure within the next decade or so. If you use digital signatures for documents that have to be archived for the long term, you need to also include a verification trail of how the documents are transmitted and stored. This evidentiary trail might be necessary should the authenticity of these signatures ever be challenged in court.

Implementation

Digital signatures are based on public key cryptography. In order for digital signatures to work, the sender must have both a digital certificate and a key pair issued by a certificate authority (CA) such as VeriSign.

A digital signature for a particular document is created by first performing a mathematical hash of the document. A hash is an iterative cryptographic process that employs a complex one-way mathematical function. To create digital signatures, a special hash called SHA-1 (Secure Hash Algorithm-1) is employed, and the end result of this process is a 160-bit value called a message digest (MD). The MD is then encrypted using the sender's private key to create the digital signature, and the resulting signature is attached to the document, which is then transmitted to its intended recipient. Note that each digital signature is unique and depends on the document being transmitted.

When the recipient receives the signed message, the same hash is performed on the received document to create a new message digest. The sender's public key is then used to decrypt the signature attached to the message, and the recovered digest is compared with the newly computed one to determine whether the message really came from its professed sender and whether it is intact or has been tampered with during transmission.
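
The hash-and-sign flow described above can be sketched in Python using the third-party cryptography package. The key size, sample document, and locally generated key pair are illustrative assumptions only; in practice the key pair and certificate would be issued by a CA rather than generated in the script.

```python
# Minimal sketch of hash-then-sign and verify, assuming the third-party
# "cryptography" package. Key pair is generated locally here for brevity;
# a real deployment would use a CA-issued key pair and certificate.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Business contract: payment due within 30 days."

# Sender: hash the document (SHA-1 yields the 160-bit message digest, as
# described above) and encrypt the digest with the private key to form the
# signature. sign() performs both steps internally.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA1())

# Recipient: re-hash the received document and check it against the
# signature using the sender's public key.
try:
    public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA1())
    print("Signature valid: document is authentic and unaltered.")
except InvalidSignature:
    print("Signature invalid: document altered or sender not genuine.")
```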

Marketplace

The complexity of public key cryptography and the wide variety of CAs in the marketplace have slowed the general adoption of digital signatures for sender verification purposes. In 2000, however, the U.S. government passed legislation called Esign (the Electronic Signatures in Global and National Commerce Act) that should speed widespread adoption of digital signatures. Esign basically makes digital signatures carry the same weight in law as regular signatures.

Digital signature. How a digital signature is created and verified.

As a result of Esign, several vendors have started offering turnkey solutions for signing electronic documents that are designed to be as easy to use as sealing and stamping an envelope. Examples of such vendors include Digital Applications International, Silanis Technology, and signOnline. The first of these vendors to receive approval from a government agency was Silanis, whose ApproveIt software was approved by the Food and Drug Administration (FDA) for government use.

A second group of vendors to emerge as a result of Esign consists of consulting companies whose products and services are designed to help large enterprises integrate digital signatures into their business processes. These vendors include Digital Signature Trust, NewRiver, Entrust Technologies, ValiCert, and DataCert.com.

A recent development that might lead to much wider use of digital signatures for signing documents is the implementation of public key infrastructure (PKI) technology by the United States Postal Service (USPS). In 2001, the USPS announced NetPost.Certified, a service that uses an electronic postmark based on digital signatures and stored on smart cards to guarantee the identity and integrity of electronic transmissions. NetPost.Certified is currently available only when one of the parties involved (sender or receiver) is the U.S. government (where it should help streamline government by reducing the vast amount of paperwork needed for traditional document processing), but in a few years it is expected that the service will become more widely deployed in business use for signing contracts, deeds, affidavits, and other legal documents. The first U.S. government agency to use NetPost.Certified is the Social Security Administration, which uses it for vital statistics collections.

Issues

The weakest point in the digital signature process is making sure that the sender's private key is secure. The private key is normally stored on the hard disk of the sender's machine and is secured using a secret PIN (personal identification number) code known only to the sender. Because of its location on the hard drive, however, the private key is often vulnerable to hackers who break into networks to collect such information. Should hackers obtain a copy of your private key and somehow guess or determine your PIN code, they can sign electronic documents using your identity, and unless there is an evidential trail to the contrary, these signed documents would be legally enforceable.

To better protect the sender's private key, another new technology has emerged that may help propel the widespread use of digital signatures. This new technology is called Universal Serial Bus (USB) crypto-tokens, and it consists of a small device such as a smart card that securely stores the sender's private key in a medium inaccessible to hackers (you could, of course, still lose your USB token and land in trouble, but you could lose your driver's license as well, with similar results).

See Also cryptography , message digest (MD) algorithms, public key cryptography

Digital Subscriber Line (DSL)

A group of broadband telecommunications technologies supported over copper local loop connections.

Overview

Digital Subscriber Line (DSL) was originally designed to provide high-speed data and video-on-demand services to subscribers. DSL is an always-on service similar to Integrated Services Digital Network (ISDN) but works at much faster speeds, rivaling T1 (and even T3) leased lines at a fraction of the cost.

DSL uses the same underlying PHY layer as ISDN Basic Rate Interface (BRI) and is basically a form of modem technology that specifies a signaling process for high- speed, end-to-end digital transmission over the existing copper twisted-pair wiring of the local loop. DSL accomplishes this feat by using advanced signal processing and digital modulation techniques. DSL uses digital modems in which the digital signals are not converted to analog or vice versa; instead, the signals remain digital for the complete communication path from the customer premises to the telco's central office (CO).

DSL actually represents a family of related services commonly referred to as xDSL, which includes Asymmetric Digital Subscriber Line (ADSL), G.Lite, High-bit-rate Digital Subscriber Line (HDSL), ISDN Digital Subscriber Line (IDSL), Rate-Adjusted Digital Subscriber Line (RADSL), Symmetric Digital Subscriber Line (SDSL), and Very-high-rate Digital Subscriber Line (VDSL).

Uses

In the last few years, the use of DSL for residential broadband Internet access has skyrocketed, and DSL is currently the main competitor to cable modem services in providing such access. DSL also can deliver other services to homes and businesses, including digital TV and Voice over DSL (VoDSL).

Another common use of DSL is for telecommuting (working remotely from home). DSL modems provide a cheap and easy way for telecommuters to gain high-speed access to corporate intranets. DSL offers good security, performance, and reliability for telecommuters, but its distance limitations mean that they generally must reside in metropolitan areas.

Many small to mid-sized businesses look on DSL as a replacement for aging 56 Kbps synchronous lines, ISDN lines, frame relay, and other traditional WAN services. DSL provides much higher bandwidth than these services, typically at half the cost or less. For example, although a T-1 line might cost more than $1,000 a month, HDSL can provide similar throughput for less than $400 a month.

Implementation

DSL can be deployed in different ways depending on the flavor being used. In ADSL, for example, an ADSL modem and a signal splitter are installed at the customer's premises and connected to the copper phone line. The splitter separates voice and data signals so that a single phone line can carry both voice and data. At the telco's CO, a Digital Subscriber Line Access Multiplexer (DSLAM) connects subscribers to a high-speed Asynchronous Transfer Mode (ATM) backbone, which typically uses a permanent virtual circuit (PVC) connection to an ISP for Internet access.

DSL modems can use a variety of signal coding methods, including carrierless amplitude and phase modulation (CAP) or discrete multitone (DMT) technology modulation, depending on the vendor's implementation, with CAP currently being the most popular modulation scheme used. DSL modems are simple Layer-2 devices that generally have few security features, lack remote management capability, and are intended for connecting a single computer to the Internet. For connecting an office LAN, a DSL router with built-in firewall and Network Address Translation (NAT) support is used instead of a DSL modem. All customer premises equipment (CPE) DSL devices are technically called ADSL Transmission Unit-Remote (ATU-R), and DSLAMs and similar DSL equipment at telco COs are known as ADSL Transmission Unit-Central office (ATU-C).

Advantages and Disadvantages

The main advantage DSL has over cable modem technology for delivering high-speed Internet access is that DSL uses ordinary phone lines to accomplish this, and such phone lines are everywhere. By contrast, cable TV, the service on which cable modem Internet access rides, has been widely deployed in residential areas but is rare in business districts and industrial parks. As a result, DSL has the advantage of having a ready-made wiring infrastructure, while cable companies have to invest money and effort to build their infrastructure out to business customers.

Digital Subscriber Line (DSL). Implementing DSL for residential and business customers.

DSL is also inherently more secure than cable modem services and offers better throughput guarantees. This is because each DSL connection between subscriber and telco CO is a dedicated connection. By comparison, all cable modem users in a given neighborhood essentially operate as a shared local area network (LAN). As a result, cable modem users can often see each other's computers on the network, and the more cable modem users simultaneously accessing the Internet in a given neighborhood, the slower the download speed everyone will experience. By contrast, DSL line speeds are independent of other users and depend only on the distance from the CO and the backbone capacity of the carrier's connection to the Internet.

For businesses, DSL can provide wide area network (WAN) links that are faster and cost less than traditional WAN services such as Frame Relay and ISDN. Unlike ISDN and modem-based services, DSL is an always-on technology as well.

DSL does have several disadvantages, however, the most important being its distance limitations. DSL typically cannot be deployed beyond 18,500 feet (about 5,600 meters) from a CO, and even within this distance, the farther the subscriber is from the CO, the worse the connection and the slower the speed. DSL is thus restricted mainly to dense urban areas and is often not an option in rural areas. Using remote DSL access terminals, however, some regional Bell operating companies (RBOCs) are pushing DSL farther out from the CO (see Issues below).

Another issue is that DSL may be impossible to deploy over older portions of the PSTN. This is because load coils and bridged taps often prevent DSL signals from passing between the customer's premises and the telco CO. Load coils are induction coils that shift frequencies upward for voice transmission to compensate for unwanted wire capacitance, and unfortunately this often shifts voice transmission into the frequency band used by DSL, causing interference. Bridged taps are a shortcut method for telcos to provide local loop phone services to customers without actually running dedicated lines to those customers. Instead, an existing local loop line is tapped somewhere in the middle to run a line to a second customer. Too many such taps cause excessive echoes that can create noise that interferes with DSL signals. Finally, if a telco has fiber deployed anywhere along a local loop, DSL cannot be provisioned since it works only over copper and not over fiber (see Issues below).

DSL is also more difficult to provision than cable modem services. This is because provisioning DSL typically requires the cooperation of three different companies: the DSL provider, which deploys the connection at the customer premises; the RBOC or inter-exchange carrier (IXC), from which the DSL provider purchases wholesale DSL services to retail to customers; and the ISP, which provides the actual connection to the Internet. As a result of this complexity, DSL provisioning can sometimes take weeks or even months, and when a connection goes down, finger-pointing among the different players can sometimes make for slow resolution of problems.

Finally, for business purposes traditional leased lines have a much better track record of reliability than DSL because of the aforementioned division of responsibility among the DSL provider, RBOC, and ISP. T-1 lines are generally provisioned by a single company (the RBOC), which makes problems easier and quicker to troubleshoot. Medium and large companies often consider DSL an option for backup WAN services, but many are reluctant to make DSL their primary WAN service, despite the cost savings they could achieve by replacing their T-1 and fractional T-1 lines with DSL connections.

Marketplace

While most RBOCs offer DSL services, several independent DSL providers have recently appeared. However, some of these have failed or been acquired by RBOCs during the dot-com shakeout of 2000. Some of the larger players that remain in the pure-play DSL arena include Covad Communications Company, EarthLink, and Rhythms NetConnections. These providers are often classified as competitive local exchange carriers (CLECs), though their services are generally limited to DSL. Other names sometimes used to describe them include competitive DSL providers and DSL local exchange carriers (DLECs).

DSL providers generally offer high-speed residential Internet access for service charges in the range of $40 to $60 a month. Business customers generally pay up to 10 times more, primarily because of additional services supported, such as providing access to multiple users on a LAN, managed firewalls, service level agreements (SLAs), and top-tier customer support.

Some DSL providers offer a service called bonding, which allows two or more DSL connections to be combined into a higher throughput connection. In a typical bonding scenario, two or four 144 Kbps IDSL lines are connected to a DSL router and combined using MLPPP (Multilink PPP) into a single 576 Kbps connection. Bonding-enabled DSL routers are much cheaper to deploy at the customer premises than a DSLAM, which can accomplish the same thing but has greater space requirements.

Issues

A major issue that has hindered the widespread use of DSL is that different DSL vendors use proprietary solutions that make equipment from one vendor incompatible with that from another. This situation is particularly aggravating for businesses with offices scattered around geographically, as each office may use different equipment provided by different DSL providers or RBOCs. To overcome this issue, the OpenDSL initiative has been formed by 3Com Corporation, Cisco Systems, SBC Communications, Qwest Communications, and others. The goal of OpenDSL is to standardize all aspects of DSL the same way the Data Over Cable Service Interface Specification (DOCSIS) has done with cable modem technologies. Widespread acceptance of OpenDSL will likely drive DSL deployment further, especially in the business marketplace, but it is likely to take a few years for RBOCs and Competitive Local Exchange Carriers (CLECs) to replace their proprietary DSL systems with the new standard.

Although DSL appears to provide guaranteed throughput due to its dedicated connection (as opposed to shared-LAN cable modem services, where throughput decreases for each user as more users come online), such guarantees may not take into account the back- end configuration at the telco CO. For example, the DSLAM at the CO may become a bottleneck when multiple DSL subscribers all try to simultaneously download large files from the Internet. Even if the DSLAMs can scale to meet demand, the DSL provider's connection to the ISP can also be a bottleneck, particularly if the DSLAM is not colocated at the ISP's point of presence (POP). Business customers should be certain that the DSL provider they are considering working with has adequate DSL equipment at the CO end of the connection to guarantee the level of service they desire.

In an attempt to overcome DSL's distance limitation of about 5.6 kilometers (18,500 feet) from the telco CO, some RBOCs have begun deploying remote DSL access terminals that are located within that distance of potential subscribers while being connected to the CO using fiber. This configuration allows RBOCs to push DSL out to remote neighborhoods to gain additional customers but has provoked a reaction from some pure-play DSL providers because it makes their own deployments difficult or impossible. This is because the fiber between the remote terminal and the CO makes it impossible for the DSL provider to provision DSL to customers directly from the CO (DSL cannot be provisioned over fiber). Furthermore, these remote terminals are often installed inside digital loop carrier (DLC) systems, small boxes deployed by RBOCs in neighborhoods, and these DLCs may not have sufficient space within them for DSL providers to deploy their own equipment to provision their own pushed-out DSL services (even if there is sufficient space, this means another costly expense for cash-strapped DSL providers). An example of this is Project Pronto, being deployed by SBC Communications, an RBOC. DSL remote terminals are sold by Alcatel, Lucent Technologies, and other vendors.

Prospects

Prospects are uncertain for some pure-play DSL providers. This is because the success of their DSL deployments depends on cooperation with both the RBOCs from which they lease local loop access and the ISPs they partner with to provide their DSL customers with Internet access. Nevertheless, DSL providers remain a popular choice for business customers seeking DSL services, mainly because such providers generally offer better service than some traditional RBOCs that have not yet adjusted to the new economy way of doing business. Pure-play DSL providers can also provide nationwide coverage compared to the regional coverage of most RBOCs, and this appeals to companies that have branch offices scattered geographically, but such rollouts are complicated by the fact that the DSL provider has to work with multiple RBOCs and ISPs to make everything work properly for such customers.

RBOCs, however, often have the advantage of having deeper pockets, which may give them a competitive edge in the long run over many competitive DSL providers. Furthermore, some larger RBOCs such as SBC Communications and Qwest are planning nationwide rollouts of DSL through partnerships and mergers with other providers.

An emerging technology that may help competitive DSL providers grow market share is line sharing. This technology lets a CLEC provision DSL to a customer over the same local loop connection that the RBOC uses to provide the customer with voice telephone service. In this way the DSL provider does not have to roll out a second phone line to the customer, which often takes weeks because it must be done in cooperation with the RBOC.

Notes

If you are using a DSL connection to the Internet, check with your DSL provider before you try to use it to deploy a public Web server, or else you might find your DSL service unexpectedly cut off once they discover what you are doing.

For More Information

You can find DSL news at www.dslreports.com. The DSL Forum Web site can be found at www.adsl.com.

See Also Asymmetric Digital Subscriber Line (ADSL) ,cable modem ,G.Lite ,High-bit-rate Digital Subscriber Line (HDSL) ,ISDN Digital Subscriber Line (IDSL) ,line sharing ,Rate-Adjusted Digital Subscriber Line (RADSL) ,Symmetric Digital Subscriber Line (SDSL) ,Very-high-rate Digital Subscriber Line (VDSL)

Digital Subscriber Line Access Multiplexer (DSLAM)

The DSL termination device at a telco central office (CO).

Overview

The Digital Subscriber Line Access Multiplexer (DSLAM) is an Asynchronous Transfer Mode (ATM) access device installed at the CO of a DSL provider. DSL lines coming from multiple subscribers terminate at the DSLAM and are multiplexed together to ride on the provider's ATM backbone, typically through a permanent virtual circuit (PVC) to an Internet service provider (ISP) to provide customers with Internet access.

A typical DSLAM can aggregate up to 100 DSL customer connections. Multiple DSLAMs are required for larger numbers of subscribers. DSLAMs usually have minimal support for quality of service (QoS), which is a drawback because they are often required to split off voice traffic from Internet Protocol (IP) data, and voice traffic is delay-sensitive in nature. Since DSLAMs usually package IP traffic into ATM cells, they cannot distinguish between different kinds of IP traffic in order to provide different levels of service to these different types of traffic. For service to customers who need to transport both voice and data traffic, telcos must set up a different PVC for each type of traffic and for each customer, which means that managing and configuring DSLAMs is a lot of work for a DSL provider that is rolling out thousands of DSL lines to customers. For example, if 50 customers need both voice and data DSL services, then 100 PVCs will need to be configured.

DSLAMs usually support only one kind of DSL service, but newer DSLAMs now reaching the market support multiple services (including Asymmetric Digital Subscriber Line [ADSL], ISDN Digital Subscriber Line [IDSL], and Very-high-rate Digital Subscriber Line [VDSL]) across thousands of ports in a single device. These new DSLAMs have greater QoS support to enable them to carry Voice over DSL (VoDSL) services in addition to traditional wide area network (WAN) and Internet access services. Some new DSLAMs also act as concentrators, allowing traffic to be aggregated according to type. Thus if 50 customers require both voice and data DSL services, only two PVCs need to be configured, one for voice traffic and the other for data.

Marketplace

Some of the vendors offering state-of-the-art DSLAM equipment include Nortel Networks, Lucent Technologies, Paradyne, and Copper Mountain.

See Also Asynchronous Transfer Mode (ATM) , multiplexing, telco

digital transmission

Transmission of signals that vary discretely with time.

Overview

Digital transmission signals vary between two discrete values of some physical quantity, one value representing the binary number 0 and the other representing 1. With copper cabling, the variable quantity is typically the voltage or the electrical potential. With fiber-optic cabling or wireless communication, variation in intensity or some other physical quantity is used. Digital signals use discrete values for the transmission of binary information over a communication medium such as a network cable or a telecommunications link. On a serial transmission line, a digital signal is transmitted 1 bit at a time.

Digital transmission.

The opposite of digital transmission is analog transmission, in which information is transmitted as a continuously varying quantity. An analog signal might be converted to a digital signal using an analog-to-digital converter (ADC) and vice versa using a digital-to-analog converter (DAC). ADCs use a method called "quantization" to convert a varying AC voltage to a stepped digital one.
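
As a rough illustration of quantization, the following Python sketch samples a continuously varying (analog) waveform and maps each sample onto a small set of discrete levels; the sample rate and bit depth are arbitrary values chosen for the example.

```python
import math

# Simulate sampling a 1 kHz analog sine wave at 8,000 samples per second
# and quantizing each sample to a 3-bit value (8 discrete levels).
SAMPLE_RATE = 8000      # samples per second (arbitrary for illustration)
LEVELS = 8              # 3-bit quantization => 2**3 discrete steps

def quantize(value, levels=LEVELS):
    """Map an analog value in the range -1.0..1.0 to the nearest discrete level."""
    step = 2.0 / (levels - 1)
    return round((value + 1.0) / step)

samples = [math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE) for n in range(8)]
digital = [quantize(s) for s in samples]
print(digital)   # [4, 6, 7, 6, 4, 1, 0, 1] -- a stepped approximation of the wave
```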

See Also analog

DIME

Stands for Direct Internet Message Encapsulation, a protocol for encapsulating attachments to Simple Object Access Protocol (SOAP) routing protocol messages.

See Also Direct Internet Message Encapsulation (DIME)

direct burial fiber-optic cabling

Fiber-optic cable designed for burial.

Overview

Direct burial fiber-optic cable typically consists of multiple fiber-optic cables bundled together and enclosed in a protective sheath. Direct burial fiber-optic cabling is designed to be buried in trenches and contains a gel filling that protects the individual fibers from temperature and moisture variations. A strip of strengthening material runs axially down the cable to prevent excessive bending, which can fracture the individual fibers. Direct burial cabling can have steel-armor construction with heavy waterproof polyethylene jackets and can contain either multimode or single-mode fiber-optic strands.

Direct burial fiber-optic cabling. Deploying direct burial fiber-optic cable to connect networks in different buildings.

Uses

Direct burial cabling is more cost-effective than single-fiber cabling for long outdoor cable runs between buildings or across a campus because it allows for future bandwidth upgrades.

See Also fiber-optic cabling

Direct Cable Connection

A Microsoft Windows tool that facilitates file transfers between two non-networked computers.

Overview

Direct Cable Connection can be used on Windows 95, Windows 98, Windows 2000, Windows XP, and Windows .NET Server to establish a temporary network connection for the purpose of transferring files between machines. Direct Cable Connection is implemented using a serial null-modem cable or a standard parallel cable. One computer must be designated as the host (the server) and the other as the guest (the client). The desired resources must be shared on the host computer, and Dial-Up Networking must be installed on the guest computer.

If desired, the host computer can also act as a router that allows the guest computer to access resources on other computers on the host computer's network.

See Also null modem cable

directed frame

A frame that is being sent by one station to a specific destination station on the network.

Overview

On an Ethernet network, a directed frame is one that uses the hexadecimal MAC address of a specific target machine on the network as its destination address. The directed frame is picked up by the target machine and is ignored by all other machines on the network.

Directed frames are used for most network communication because they are the most efficient type of frame for communication. However, some services, such as network announcements, require that all stations on the network receive a frame. To send a frame to all stations on the network, you use a broadcast frame instead of a directed frame.
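
The distinction is visible in the frame header itself. The Python sketch below packs the 14-byte Ethernet header, using a specific destination MAC address for a directed frame and the all-ones address FF:FF:FF:FF:FF:FF for a broadcast frame; the addresses and EtherType shown are placeholder values for illustration.

```python
import struct

def ethernet_header(dst_mac: str, src_mac: str, ethertype: int = 0x0800) -> bytes:
    """Pack destination MAC, source MAC, and EtherType into a 14-byte header."""
    dst = bytes.fromhex(dst_mac.replace(":", ""))
    src = bytes.fromhex(src_mac.replace(":", ""))
    return struct.pack("!6s6sH", dst, src, ethertype)

# Directed frame: destination is one specific station's MAC address.
directed = ethernet_header("00:0c:29:3e:5a:01", "00:0c:29:3e:5a:02")

# Broadcast frame: destination FF:FF:FF:FF:FF:FF is received by every station.
broadcast = ethernet_header("ff:ff:ff:ff:ff:ff", "00:0c:29:3e:5a:02")

print(directed.hex(), broadcast.hex(), sep="\n")
```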

See Also broadcast frame ,broadcast packet

directed packet

An Internet Protocol (IP) packet that is being sent by one host on a Transmission Control Protocol/Internet Protocol (TCP/IP) network to a specific destination host on the network.

Overview

A directed packet contains the IP address of a specific target host on the network as its destination address. The directed packet is picked up by the target host and is ignored by all other hosts on the network.

Directed packets are used for most network communication on a TCP/IP network because they are the most efficient method for communication. However, some services, such as network announcements, require that all hosts on the network receive a packet. To send a packet to all stations on the network, you use a broadcast packet instead of a directed packet.
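
The same distinction can be sketched with ordinary UDP sockets in Python, as below; the addresses and port number are placeholders, and sending to the subnet broadcast address requires the SO_BROADCAST socket option.

```python
import socket

PORT = 9999                      # arbitrary port for illustration
message = b"service announcement"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Directed packet: addressed to one specific host, ignored by all others.
sock.sendto(message, ("192.168.1.25", PORT))

# Broadcast packet: addressed to every host on the local subnet.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(message, ("192.168.1.255", PORT))

sock.close()
```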

See Also broadcast frame ,broadcast packet

Direct Internet Message Encapsulation (DIME)

A protocol for encapsulating attachments to Simple Object Access Protocol (SOAP) routing protocol messages.

Overview

Direct Internet Message Encapsulation (DIME) is a protocol for encapsulating multiple payloads of arbitrary type and size for transmission in a single message construct. DIME is lightweight in nature and encapsulates binary information for transmission within SOAP messages. Since SOAP is based on Extensible Markup Language (XML), DIME performs a function similar to what Multipurpose Internet Mail Extensions (MIME) does for Simple Mail Transfer Protocol (SMTP) messages: it provides a method for attaching binary data to text-based messages. The difference between DIME and MIME is that DIME is much simpler and faster to parse than MIME.

DIME can be used to encapsulate any binary information, including image files, XML data, and so on. DIME allows multiple logically connected records to be aggregated into a single message. DIME does not specify the type of messages being sent and can be used over any connection-oriented transport or virtual logical circuit.

Implementation

The DIME encapsulation format specifies only three additional pieces of information for each payload: the length of the payload in bytes, the type of payload (to allow routing of messages according to application type), and a payload identifier, which is basically a globally unique identifier (GUID).
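
The idea of prefixing each payload with its length, type, and identifier can be sketched in Python as follows. Note that the field sizes and layout here are simplified for illustration and do not reproduce the actual DIME wire format defined by the specification.

```python
import struct
import uuid

def make_record(payload: bytes, media_type: str) -> bytes:
    """Wrap a payload with a simplified header: identifier, type, and length.

    This mimics the three pieces of information DIME adds to each payload;
    it is NOT the real DIME header layout.
    """
    record_id = uuid.uuid4().bytes                     # 16-byte payload identifier (GUID)
    type_bytes = media_type.encode("ascii")
    header = struct.pack("!16sHI", record_id, len(type_bytes), len(payload))
    return header + type_bytes + payload

# Aggregate two logically connected payloads into a single message.
message = (make_record(b"<order id='42'/>", "text/xml") +
           make_record(b"\x89PNG...imagedata", "image/png"))
print(len(message), "bytes")
```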

See Also Simple Object Access Protocol (SOAP) ,XML

direct inward dialing (DID)

A service provided by a local exchange carrier (LEC) to a corporate client.

Overview

Direct inward dialing (DID) uses a Private Branch Exchange (PBX) that allows outside callers to dial individuals within the company directly. Typically, the LEC allocates a block of phone numbers to the company, usually differing only in the last two, three, or four digits. For example, a company with 50 employees who each need a separate phone number could be assigned the numbers 555-1201 through 555-1250. Outside callers could dial the employees directly using these numbers, which are routed through perhaps only eight trunk lines that service the PBX, supporting a maximum of eight simultaneous calls. Inbound calls are routed by the PBX to the appropriate extension.
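
A minimal sketch of the mapping a PBX performs is shown below in Python; the number block, trunk count, and extension assignments are the hypothetical figures from the example above.

```python
# Hypothetical DID block 555-1201 through 555-1250 mapped to internal extensions,
# all arriving over a limited pool of trunk lines.
TRUNKS = 8   # maximum simultaneous inbound calls

# Build the DID-to-extension table: 555-1201 -> ext 201, 555-1202 -> ext 202, ...
did_table = {f"555-12{n:02d}": 200 + n for n in range(1, 51)}

active_calls = set()

def route_inbound(dialed_number: str) -> str:
    """Route an inbound call arriving on a trunk to the employee's extension."""
    if len(active_calls) >= TRUNKS:
        return "busy: all trunk lines in use"
    extension = did_table.get(dialed_number)
    if extension is None:
        return "invalid number"
    active_calls.add(dialed_number)
    return f"ringing extension {extension}"

print(route_inbound("555-1207"))   # ringing extension 207
```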

See Also local exchange carrier (LEC) ,Private Branch Exchange (PBX)

directory

A tool designed to provide a single source for locating, organizing, and managing network resources within an enterprise.

Overview

Directories are used in enterprise networks to organize network resources so that they can easily be found and managed. These network resources typically include computers, disk volumes, folders, files, printers, users, groups, and other types of objects. Directories are typically used for two general purposes: helping users locate and access resources on the network, and helping administrators organize and manage those resources.

Network operating systems (NOSs) typically have their own directories built into them. This NOS directory functions much like the yellow pages of a phone book. For example, if you look up the word printers, you will find a list of available printers and information for accessing them. Examples of NOS directories include Windows NT Directory Services (NTDS) for Windows NT, Active Directory for Windows 2000, and Novell Directory Services (NDS) for Novell NetWare version 4 and higher.

Many applications use their own proprietary directories. An example is Microsoft Exchange 5.5, which uses a proprietary Exchange directory service based on the Microsoft Jet database. The more recent Exchange 2000 instead uses the Active Directory directory service of Windows 2000 for storing its hierarchy of information concerning Exchange directory objects.

An international standard has long existed for how directory services should operate and directory databases be structured. This standard, called X.500, was developed by the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) in the 1980s. The X.500 standard specifies a protocol called Directory Access Protocol (DAP) that enables users and applications to search and modify an X.500 directory. Because DAP is complex and difficult to implement, a simpler protocol called Lightweight Directory Access Protocol (LDAP) was developed later on and has become the de facto standard for directory protocols. Active Directory, the directory service used by Windows 2000 and Windows .NET Server, is LDAP-compliant.
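
Because LDAP is a common access protocol, an application can query an LDAP-compliant directory such as Active Directory without depending on its internal implementation. The sketch below uses the third-party Python ldap3 package; the server name, credentials, and search base are placeholders only.

```python
from ldap3 import Server, Connection, ALL

# Placeholder server and credentials -- substitute your own directory server.
server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=com", password="secret")
conn.bind()

# Search the directory for user objects and return selected attributes.
conn.search(search_base="dc=example,dc=com",
            search_filter="(objectClass=user)",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()
```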

To function at the enterprise level, a directory should have the following essential characteristics:

Active Directory for Windows 2000 and Windows .NET Server satisfies all these conditions.

Implementation

A directory typically consists of two components: the directory database, which is the central store of directory information, and the directory service, which allows the database to be searched and modified.

In addition to these two components, consideration must be given to where the directory database is stored. This database is typically located on directory servers that are located at different points on the network. The database can be stored in the following ways on these servers:

Marketplace

While Active Directory and NDS are the most popular general NOS directory products, there are also third-party directories that the enterprise architect may consider. One of the most popular of these is iPlanet's Directory Server, an LDAP server that is highly optimized for rapidly performing LDAP queries. Another is Novell's eDirectory, which is NDS uncoupled from the Novell NetWare NOS and made available for other platforms. Still another LDAP-compliant directory on the market is InJoin from Critical Path. Other popular directory vendors include Oracle Corporation and Innosoft International.

Some factors to consider when evaluating directory software for purchase for the enterprise environment include

Issues

The main issue confronting the enterprise concerning directories is interoperability. A large enterprise typically has many different directories implemented at various levels of its network, and consolidating all these directories into a single directory through migration is often impractical from the point of view of cost (large enterprises tend to be heterogeneous in many ways and for many reasons).

To allow directories from different vendors to share information with one another, a number of metadirectory products have appeared in the marketplace. Examples of metadirectory software include Microsoft Metadirectory Services (MMS) and Novell's DirXML.

Another issue regarding interoperability between directories is that the LDAP standard has certain limitations that require directory vendors to add their own proprietary extensions to LDAP if they want to give their LDAP directory products greater functionality. Thus even different LDAP-compliant directories from different vendors might not be able to interoperate to a desired degree. To solve this problem, an XML language called Directory Service Markup Language (DSML) is being developed to better allow different directories to communicate with one another. Other related initiatives include Simple Object Access Protocol (SOAP) and the Universal Description, Discovery, and Integration (UDDI) specification.

Finally, it must be mentioned that simply implementing a directory will not necessarily save costs for an enterprise or make the lives of administrators easier. Directories can be complex to implement, especially if legacy information needs to be migrated to them. Also, tools for managing directories can be difficult to use and require extensive training. As a result of such complexity, industry analysts estimated that as of 2000 less than 35 percent of all companies were using LDAP directories.

See Also Active Directory , Lightweight Directory Access Protocol (LDAP), metadirectory, Novell Directory Services (NDS), Simple Object Access Protocol (SOAP), Universal Description, Discovery, and Integration (UDDI), Windows NT Directory Services (NTDS), X.500

Directory Access Protocol (DAP)

A protocol for accessing information in a directory service based on the X.500 recommendations.

Overview

The Directory Access Protocol (DAP) specifies how an X.500 Directory User Agent (DUA) communicates with a Directory System Agent (DSA) to issue a query. Using DAP, users can view, modify, delete, and search for information stored in the X.500 directory if they have suitable access permissions.

DAP is a complex protocol with a lot of overhead, which makes it generally unsuitable for implementations in a Microsoft Windows environment. A simpler version called Lightweight Directory Access Protocol (LDAP) is growing in popularity and can be used to access and update directory information in X.500 directories. LDAP is more suitable than DAP for implementation on the Internet and has mostly superseded DAP as an access protocol for X.500-based directories (which are now often called LDAP directories).

See Also directory ,Lightweight Directory Access Protocol (LDAP) ,X.500

directory database

The central store of directory information on a network.

Overview

The directory database is one of two components of a typical directory application, the other being the directory service, which allows the directory database to be searched and modified.

In Microsoft Windows NT, Windows 2000, and Windows .NET Server, the directory database resides on the domain controllers, which manage all security-related aspects of the network. In Windows NT, the directory database is generally called the Security Accounts Manager (SAM) database. In Windows 2000 and Windows .NET Server, the directory database is the database component of the Active Directory directory service.

Both the SAM database and the Active Directory directory service store information about objects on the network. The SAM is limited to storing information about users, groups, and computers that participate in the domain and security policy information such as password expiration policies and audit policies. The SAM stores its information in a privileged area of the registry. The practical upper limit for a SAM database is 40 megabytes (MB), which corresponds to approximately 26,000 user and computer accounts. If an enterprise has more than 26,000 users, the Windows NT directory database can be partitioned (split) into two or more portions and trust relationships can be configured according to a multiple master domain model.

Active Directory is considerably more flexible and powerful than the Windows NT SAM. Active Directory scales upward to tens of millions of objects, and these objects may be users, groups, computers, printers, shares, and many other forms of information.

See Also Active Directory , Security Account Manager (SAM) database

Directory Enabled Network (DEN)

An initiative toward a platform-independent specification for storing information about network applications, devices, and users in a directory.

Overview

Directory Enabled Network (DEN) was started by Microsoft Corporation and Cisco Systems in 1997 with the goal of developing standards for directory-based management of network resources. DEN is a policy-based mechanism that binds a user's name and network access profile to a policy. This procedure maps users to network services to make a more intelligent network that can better manage its resources, especially bandwidth. User rights on the network and bandwidth allocation can then be assigned to the user by using the profile.

DEN's goal is to unify the disparate network management platforms in existence today to provide a single, directory-based, cross-platform specification for the development of standard network management systems. The DEN initiative is now managed by the Distributed Management Task Force (DMTF) and is open to network device vendors, Internet service providers (ISPs), independent software vendors (ISVs), carriers, and others who might be interested.

Prospects

DEN has not materialized as fast as many had hoped, but the Distributed Management Task Force (DMTF) is still guiding it toward final release within the next few years. Part of the reason is that one of the major driving forces initially behind DEN was the hope of managing quality of service (QoS) policies using directories. This would allow bandwidth to be allocated to users and applications more easily. The pressure to do this has lessened in the last few years, however, as bandwidth costs continue to drop, especially with the emergence of Gigabit Ethernet (GbE). As a result, QoS issues can be sidestepped in many situations simply by overprovisioning bandwidth. Furthermore, simple QoS techniques such as 802.1p have lessened the need for more complex directory-based QoS schemes.

Technical issues have also hindered the development of DEN as a real-life specification. One recent step forward involved the mapping of Common Information Model (CIM) schema to Lightweight Directory Access Protocol (LDAP) so that CIM object information could be stored and accessed in LDAP-compliant directories such as Microsoft's Active Directory directory service.

See Also Active Directory , Common Information Model (CIM) , Lightweight Directory Access Protocol (LDAP)

directory export

The process of exporting information from a directory to another application.

Overview

An example of directory export can be found in Microsoft Exchange Server, which supports the exporting of information about recipients stored in its directory database. This information can be exported to a comma-delimited text file (.csv file), edited, and then imported into another system.

Directory export. Configuring directory export for Exchange Server.

For example, you can import recipient information into a spreadsheet to print it out. Or you can export the information into a spreadsheet, use spreadsheet functions to mass-modify certain fields, and then use directory import for re-importing the modified account information back into Exchange. Using directory export/import is in fact the usual method in Exchange for modifying the properties of a large number of recipients at one time.
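
A mass-modification pass over an exported .csv file can be done with a spreadsheet or a short script. The Python sketch below uses the standard csv module; the file names and the "Department" column are hypothetical examples of fields in an Exchange directory export.

```python
import csv

# Read the exported recipient list, mass-modify one field, and write a new
# file ready for directory import. File and column names are hypothetical.
with open("recipients_export.csv", newline="") as src, \
     open("recipients_import.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("Department") == "Sales":
            row["Department"] = "Sales and Marketing"
        writer.writerow(row)
```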

See Also directory ,directory import ,Exchange Server

directory hierarchy

The hierarchy of objects in a directory.

Overview

Directories typically store information about the objects they manage in a hierarchical fashion. As an example we can consider the containers and leaf objects in a Microsoft Exchange Server directory, which is displayed and configured using the Exchange Administrator program. This hierarchy is based on the directory recommendations given by X.500.

Directory hierarchy.

The Exchange directory hierarchy begins at a root object called the Organization container, and then branches down into sites, servers, connectors, recipients, and other objects. To configure any object in the directory hierarchy, you use its property sheet.

Objects in the directory hierarchy of Exchange come in two types: container objects, which can contain other objects, and leaf objects, which cannot.

See Also directory ,Exchange Server ,X.500

directory import

The process of importing information into a directory from another application.

Overview

An example of directory import can be seen in Microsoft Exchange. Here the information to be imported must be in a format that the importing system can understand, usually a delimited text file such as a .csv file. Microsoft Exchange Server allows recipient information to be imported into its directory database and allows recipients exported from other mail systems to be imported into Exchange. For example, you can use the Exchange Migration Wizard to extract mailbox information from a foreign mail system into a .csv file. You can then import this information into a spreadsheet program such as Microsoft Excel, modify it as needed, and import it into Exchange to create new mailboxes in your organization for users of the foreign mail system who are migrating to Exchange.

See Also directory ,directory export

directory replication (Windows 2000 and Windows .NET Server)

A process that ensures that all directory servers contain identical copies of the directory database.

Overview

In a typical implementation of a directory, the directory database is replicated to a number of servers called directory servers. These servers are located at different points around the network to provide accessible points for clients needing to search the directory for resources on the network.

Implementation

As an example of the directory replication process, consider the Active Directory directory service in Microsoft Windows 2000 or Windows .NET Server, which stores copies of the directory database on machines called domain controllers. In Windows 2000, directory replication is thus the process of replicating Active Directory updates to all the domain controllers on the network. Directory replication ensures that users have access to resources on the network by ensuring that information about users, groups, computers, file shares, printers, and other directory objects is current on all domain controllers in the network.

Directory replication of Active Directory on a Microsoft Windows 2000- or Windows .NET Server-based network takes place in two ways, depending on whether the participating domain controllers are in the same site.

See Also Active Directory

directory replication (Windows NT)

In Microsoft Windows NT, the replication of a tree of folders from one server to another using the Directory Replicator Service.

Overview

The Directory Replicator service is a Windows NT service for replicating files and folders over the network. The Directory Replicator Service simplifies the task of updating key network configuration files needed by all users, such as system policies and logon scripts. You can also use directory replication to load balance between multiple servers when a large number of users need access to specific files or folders.

You can perform directory replication in Windows NT to create and manage identical directory structures on different Windows NT servers and workstations. When a change is made to the master directory structure, such as modification of a file or addition of a directory, that change is replicated to the other computers. One use for directory replication is to provide a means for load balancing file system information across several servers. This allows more clients to efficiently access the data stored in the replicated directory structure because identical data is stored on different machines. For example, you might replicate a database of customer information across several servers in your network to provide easier access. You can also use directory replication to copy logon scripts from a primary domain controller (PDC) to a set of backup domain controllers (BDCs). You can configure replication to occur between different computers in a domain or from one domain to another.

Implementation

The Directory Replicator Service replicates files from an export computer to an import computer. The export computer must be running the Windows NT Server operating system, but the import computer can run Windows NT Server, Windows NT Workstation, or LAN Manager for OS/2 servers.

The export server is the computer that contains the master copy of the directory tree to be replicated. This export server must be a machine running Windows NT Server. The computers that will replicate with the export server are called import servers. Import servers can be machines running Windows NT Server, Windows NT Workstation, or LAN Manager Server.

Prior to configuring directory replication, you must create a new user account as a security context within which the replicator service will run. This account should have a password that never expires and should be a member of the Replicator, Domain Users, and Backup Operators groups. The account should be accessible from both export and import machines. Use the Services utility in Control Panel to configure the Directory Replicator Service to start automatically upon system startup and to use your new account for logging on.

Server Manager is the administrative tool used for configuring replication in Windows NT. You can configure replication to occur either immediately after a change is made to the directory structure or after a stabilization period. When configuring replication, you select one of the following options:

By default, the export directory in which the master copy of the replicated data is contained is located in the path %SystemRoot%\system32\repl\export. The default path to which the directory structure is imported on the import server is %SystemRoot%\system32\repl\import.

Notes

Do not use directory replication as a substitute for a regular program of tape backups. The Directory Replicator Service can create a lot of network traffic and should not be used for backing up data across a network. If the data you are replicating contains large files that change frequently, replication traffic can cause network congestion unless you watch it carefully. Be especially careful when you replicate directory structures over slower wide area network (WAN) links to avoid congestion that interferes with other essential forms of traffic such as logon traffic.

The Directory Replicator Service on Windows NT can export only one directory tree from a given export server. It is a good idea to leave the default export location as it is and move the directory structure and information you want to replicate to this default export location. This allows you to also replicate logon scripts because by default these are located on a PDC in the location %SystemRoot%\system32\repl\export\scripts and on a BDC in the location %SystemRoot%\system32\repl\import.

Since these script directories are located within the default export and import paths, they can be replicated along with other data.

directory service log

A log that contains events written by the Active Directory directory service on machines running Microsoft Windows 2000 or Windows .NET Server.

Overview

The directory service log exists only on domain controllers because these are the only computers that have copies of Active Directory. The directory service log contains events such as informational, warning, and error events concerning operations that have been performed on or by Active Directory. These events reveal the state of Active Directory and can be used for diagnostic and troubleshooting purposes.

Information in the directory service log can be displayed using Event Viewer, a Windows 2000 and Windows .NET Server administrative tool that runs as a snap-in for the Microsoft Management Console (MMC). Event Viewer for Windows 2000 and Windows .NET Server supports a number of different kinds of logs in addition to the three supported in the Windows NT version of Event Viewer, namely, the system log, security log, and application log. The actual types of event logs available in Windows 2000 and Windows .NET Server Event Viewer depend on which optional networking components are installed on the machine.

See Also Active Directory

Directory Service Manager for NetWare (DSMN)

An optional Windows 2000 utility for managing directory information stored on NetWare servers.

Overview

Directory Service Manager for NetWare (DSMN) enables Windows 2000 domain controllers to manage account information on NetWare 2.x, 3.x, and 4.x servers. It does this by copying NetWare account information to the primary domain controller (PDC) and then propagating changes back to the bindery on the NetWare servers. DSMN also synchronizes accounts across all NetWare servers, allowing users to access any NetWare server using a single logon username. DSMN does not come with Windows 2000 but can be ordered separately from your Microsoft value-added reseller (VAR).

Notes

DSMN supports NetWare 4.x servers only when they are running in bindery emulation mode, not in Novell Directory Services (NDS) mode.

See Also Novell Directory Services (NDS)

Directory Service Markup Language (DSML)

A specification based on XML (Extensible Markup Language) that enables different directory applications to share information.

Overview

Directory Service Markup Language (DSML) is a markup language (such as Hypertext Markup Language [HTML]) whose function is to allow directory information to be represented using XML. Its purpose in doing this is to make XML the common language for exchange of information between different directory systems.

DSML is an open specification, and the group backing it includes Microsoft, IBM, Novell, and Oracle. A draft standard is being developed by the Organization for the Advancement of Structured Information Standards (OASIS).

Implementation

DSML is essentially a set of extensions to XML that is expressed as a schema. The DSML schema for XML provides mechanisms for accessing information in directories even if the actual format of the data is unknown. Using a set of tags specific to directory services, DSML allows information in one proprietary directory to be exchanged with another proprietary directory without the need to pay attention to the mechanics happening underneath or how the directories operate.

The original DSML 1 specification was limited to specifying how to use XML to describe a directory's contents. The newer DSML 2 specification adds standard mechanisms for using XML to locate, access, and manipulate information stored in directories. Using DSML 2, developers will be able to build transactional applications that use XML to find and modify objects stored in a directory, regardless of the vendor from which the directory is obtained.
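
As a rough illustration of the underlying idea, the short Python sketch below renders a single directory entry as XML. The element names used here are invented for the example and are not the actual DSML schema.

```python
import xml.etree.ElementTree as ET

# Represent one directory entry as XML. Tag names are illustrative only;
# the real DSML schema defines its own element vocabulary.
entry = ET.Element("entry", dn="cn=Jane Doe,ou=Users,dc=example,dc=com")
for name, value in [("objectClass", "user"),
                    ("cn", "Jane Doe"),
                    ("mail", "jane@example.com")]:
    attr = ET.SubElement(entry, "attr", name=name)
    ET.SubElement(attr, "value").text = value

print(ET.tostring(entry, encoding="unicode"))
```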

Notes

A related initiative is the Directory Interoperability Forum (DIF), whose aim is to establish a standard that will enable use of the Lightweight Directory Access Protocol (LDAP) for performing data queries across multiple directories. DSML and DIF are separate initiatives, but they might eventually merge or be subsumed within a wider objective.

For More Information

You can find out more about DSML at www.dsml.org

See Also directory ,XML

Directory Service Migration Tool (DSMigrate)

A tool for migrating information from Novell NetWare networks to Microsoft Windows 2000.

Overview

Directory Service Migration Tool (DSMigrate) is a tool that you can use to migrate NetWare users, groups, files, and permissions to Active Directory directory service. DSMigrate can migrate both bindery-based NetWare 3.x servers and Novell Directory Services (NDS)-based NetWare 4.x and 5.x servers to Windows 2000.

DSMigrate lets you perform a test migration to assess any difficulties that might occur before you perform your final migration. DSMigrate is one of the NetWare tools included with Windows 2000.

See Also bindery ,Novell Directory Services (NDS)

directory synchronization (Microsoft Mail)

The process by which information stored in Microsoft Mail 3.x mail systems is replicated between postoffices.

Overview

There is only one directory server postoffice in a Microsoft Mail 3.x mail system; other postoffices that participate in directory synchronization are called requestor postoffices. Requestor postoffices send their address list updates to the directory server postoffice, which then sends cumulative changes back to the requestor postoffice. Directory synchronization also ensures that the global address list is updated.

When migrating legacy MS Mail systems to Microsoft Exchange Server, establishing directory synchronization between the old MS Mail directory and the new Exchange directory is a typical step in the process. To make this possible, Microsoft Exchange includes a dirsync component that emulates the MS Mail directory until the old directory information can be entirely migrated to the new system.

See Also Exchange Server

directory synchronization (Windows NT)

The process of directory synchronization within a Microsoft Windows NT domain.

Overview

Directory synchronization is the process whereby the domain directory databases of backup domain controllers (BDCs) in a Microsoft Windows NT domain are synchronized with the master directory database on the primary domain controller (PDC). Accurate and reliable directory synchronization is the foundation for effective operation of Windows NT Directory Services (NTDS).

In Windows 2000 and Windows .NET Server, this process of replicating directory information between domain controllers is called directory replication.

Notes

If directory synchronization must be performed over slow wide area network (WAN) links, you can adjust some registry parameters to make directory synchronization more efficient and prevent it from consuming excessive bandwidth.

See Also directory replication (Windows NT) ,domain controller ,Windows NT Directory Services (NTDS)

Direct Sequence Spread Spectrum (DSSS)

A combination of two transmission technologies (direct sequencing and spread spectrum) used in wireless networking and cellular communications.

Overview

Direct Sequence Spread Spectrum (DSSS) is the most popular transmission scheme used in wireless networking and is widely deployed in cellular communications systems also. DSSS is the basis of the 802.11b wireless local area network (LAN) standard used around the world.

Implementation

DSSS transmits data 1 bit at a time. Each bit is processed by modulating it against a pattern called a chipping sequence. For example, in 802.11b the chipping sequence is an 11-bit binary sequence, 10110111000, called the Barker code. This sequence is chosen because it has mathematical properties that make it suitable for modulating radio waves. The sequence is exclusive-ORed (XORed) with each bit of the data stream, which has the effect of multiplying the number of bits transmitted by a factor of 11 (each 11-bit transmission is called a "chip" and represents only a single bit of actual data). Although this seems wasteful in terms of bandwidth, it has the advantage that if some of the transmitted bits are lost, the original data stream can still be reliably reconstructed by processing the remaining information. This feature, combined with spread spectrum transmission, which sends the data over multiple frequencies simultaneously, makes DSSS strongly resistant to interference from extraneous radio sources, multipath reflection, and atmospheric conditions.
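
The following Python sketch is a toy illustration of why chipping adds robustness: each data bit is XORed against the 11-chip Barker sequence, and the bit can still be recovered by majority vote even if a few chips are corrupted in transit. Real 802.11b radios perform this at the signal level in hardware, using correlation rather than a literal vote; the code shows only the principle.

```python
# Toy illustration of DSSS chipping with the 11-chip Barker code.
BARKER = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

def spread(bits):
    """XOR each data bit against the chipping sequence (1 bit -> 11 chips)."""
    return [bit ^ chip for bit in bits for chip in BARKER]

def despread(chips):
    """Recover each bit by XORing chips back and taking a majority vote."""
    bits = []
    for i in range(0, len(chips), 11):
        votes = [chips[i + j] ^ BARKER[j] for j in range(11)]
        bits.append(1 if sum(votes) > 5 else 0)
    return bits

data = [1, 0, 1, 1]
tx = spread(data)
tx[3] ^= 1             # simulate corruption of a couple of chips in transit
tx[17] ^= 1
print(despread(tx) == data)   # True: the original bits are still recovered
```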

See Also 802.11b , cellular communications , spread spectrum, wireless networking

direct sequencing

A transmission method used for wireless networking.

Overview

Direct sequencing systems transmit data one bit at a time. Each bit of data is then transmitted simultaneously over a range of frequencies.

Direct sequencing. How direct sequencing works.

Direct sequencing is usually used in conjunction with spread spectrum, a transmission method where the data transmitted is spread over multiple frequencies to reduce signal loss due to noise and interference. The transmitter feeds each bit of the data stream into a signal spreader that multiplies the input, creating a wideband signal. The wideband signal is then amplified and broadcast by using an antenna.

Uses

Direct sequencing can be used for both wireless local area network (LAN) connections and as part of a cellular telephone system. Direct sequencing has a faster theoretical maximum data transmission rate than frequency hopping, another wireless transmission method, but in practice both methods provide similar throughput for wireless transmission of data because of protocol overhead in typical wireless communication systems.

See Also Direct Sequence Spread Spectrum (DSSS) ,frequency hopping ,spread spectrum ,wireless networking

disaster recovery

The process of recovering IT (information technology) operations after a failure or disaster.

Overview

Because IT is the lifeblood of today's e-business economy, it is essential to be able to recover IT operations quickly after a disaster. An IT disaster can be something as small as a failed disk drive within a server or something as big as a data center burning to the ground. Other common examples of disasters include power failures, destruction due to viruses, theft, sabotage, accidental commands issued by administrators that result in files being deleted or modified, earthquakes, insurrections, and so on. Being prepared for eventualities such as these is the responsibility of IT management and is critical to the success of modern businesses.

The key to successful recovery from these situations is a comprehensive, tested disaster recovery plan. This is a business plan that involves not just redundant hardware but personnel, procedures, responsibilities, and issues of legal liability.

Implementation

The first step in such a plan involves risk assessment to determine which components are most vulnerable and how the company can function in response to loss of data or critical IT services. It is also important to determine what recovery window is acceptable to management; many businesses today would suffer significant losses if recovering failed systems took more than a day or two.

Disaster recovery plans can be implemented in different ways depending on the size and needs of the company involved. Three common approaches to implementing disaster recovery plans are traditional tape backup, e-vaulting, and mirroring.

The traditional approach of small to mid-sized businesses to disaster recovery is to create full backups to tape daily and archive these tapes off-site in secure locations. Most computer systems in businesses use this method, primarily due to its low cost and its well-known procedures. However, tape backups must be verified, and test restores must be performed periodically to ensure that this system is in fact working. Furthermore, businesses must realize that it typically takes up to 48 hours to restore a failed disk or system from tape backup, a recovery window that may be unacceptable from an e-business standpoint.

E-vaulting (or electronic vaulting or data vaulting) takes a different approach. Instead of archiving data to tape, it is sent over high-speed leased lines or Internet connections to a remote data center for safe storage. E-vaulting makes it possible to do more frequent backups and rapid restores, but the weak link is the wide area network (WAN) connection, because if that connection goes down, a restore might be difficult or impossible.

To make e-vaulting more effective, companies sometimes arrange for backup servers to be running at the remote site and back up data directly to those servers. Then if a failure occurs with the company's main servers, control can be switched over to the backup servers and business can continue without interruption. The problem is that having duplicate systems in place is an expensive proposition and complex to manage, and companies often try to save money by placing older hardware at the remote site. When disaster strikes, this hardware may not be able to perform as hoped, and the business may go down anyway.

Another version of e-vaulting involves using a mobile data center (a network in a moving van) at the remote site. When disaster strikes the primary site, the mobile data center can be brought on location and managed by your staff.

Regardless of which form of e-vaulting is used, the main disadvantage in the eyes of most companies is the cost, which is typically many times that of using traditional tape backup solutions.

Extending the idea of e-vaulting is a method called mirroring, in which identical hardware is placed at a remote site and data is kept synchronized between servers in the primary and remote site. This is a costly solution both from the point of view of the redundant hardware required and the throughput needed for WAN links, but the restore window is typically under an hour when disaster strikes, so it is a good option from the perspective of e-businesses. Financial institutions such as banks and credit unions often employ this solution to ensure maximum availability and reliability for customer access to accounts.

Marketplace

While smaller companies tend to manage their own tape backup and e-vaulting solutions in-house, large enterprises generally outsource their disaster recovery needs, particularly if mainframe systems and AS/400s are involved. Three companies handle the lion's share of disaster recovery business at the enterprise level: Comdisco Continuity Services, SunGard Recovery Services, and IBM's Business Continuity Recovery Services (BCRS).

E-vaulting companies abound and their range of services varies greatly. One example of such a company is Imation, which offers its LiveVault services for small and medium-sized companies.

Businesses that host their services with Web hosting companies often make use of disaster recovery services provided by these hosting providers. An example is Exodus Communications, which provides mirroring services for customers at remote data centers.

See Also backup ,tape format

discrete multitone (DMT)

A line coding technique used in Asymmetric Digital Subscriber Line (ADSL).

Overview

Discrete multitone (DMT) describes a specification for the physical layer transport for ADSL and is an efficient mechanism for transmitting data at high speeds at frequencies above the voice cutoff on ordinary copper phone lines. DMT offers good performance and resistance to crosstalk and electromagnetic interference, and it can be used to dynamically adapt data flow to line conditions. DMT is the most popular ADSL line coding scheme in use today.

History

DMT was developed by Bellcore in 1987 as a line coding scheme for Asymmetric Digital Subscriber Line (ADSL) services. DMT evolved from V.34 modem technology and was first commercially implemented in ADSL modem technology by Amati Corporation. At the time, DMT was one of several line coding schemes that had been proposed by carriers offering ADSL services. Another competing scheme was carrierless amplitude phase modulation (CAP), which is based on quadrature amplitude modulation (QAM). Although CAP was more widely deployed initially in ADSL rollouts, DMT was adopted by standards bodies in the United States and Europe and is supported by the International Telecommunication Union (ITU).

Implementation

DMT employs a band of frequencies from 40 kilohertz (kHz) to 1.1 megahertz (MHz), well above the 4 kHz cutoff for voice transmission over the copper local loop. DMT divides this band into 4 kHz wide channels, with each channel being modulated independently and all of them simultaneously carrying data in parallel. An ADSL modem that uses DMT can thus be thought of as a collection of hundreds of tiny modems operating in parallel.

Each DMT subchannel can carry up to 15 bits/symbol/hertz (Hz) of information, using QAM as a modulation scheme (actual throughput per channel may be less because of electromagnetic interference and other line conditions). Upstream transmission uses 25 channels, giving a maximum throughput of

25 channels x 15 bits/symbol/Hz x 4 kHz = 1.5 Mbps

while downstream uses 249 channels giving a maximum throughput of

249 channels x 15 bits/symbol/Hz x 4 kHz = 14.9 Mbps
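
The arithmetic above can be restated as a short Python sketch; the figures are the idealized maximums given in the text, not measured line rates.

```python
# Sketch of the idealized upstream/downstream maximums described above.
bits_per_symbol = 15        # QAM bits per symbol per Hz per subchannel
channel_width_hz = 4_000    # 4 kHz subchannels

def max_throughput_bps(channels):
    return channels * bits_per_symbol * channel_width_hz

print(max_throughput_bps(25) / 1e6)    # 1.5  Mbps upstream
print(max_throughput_bps(249) / 1e6)   # 14.94 Mbps downstream
```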

See Also Asymmetric Digital Subscriber Line (ADSL) ,line coding ,modulation

discretionary access control list (DACL)

An access control list (ACL) that can be configured by administrators.

Overview

In Microsoft Windows 2000, Windows XP, Windows .NET Server, and Windows NT, a discretionary access control list (DACL) is an internal list attached to a file or folder on a volume formatted using the NTFS file system (NTFS). The administrator can configure the DACL, and it specifies which users and groups can perform different actions on the file or folder. In Windows 2000, Windows XP, and Windows .NET Server, DACLs can also be attached to objects in Active Directory directory service to specify which users and groups can access the object and what kinds of operations they can perform on the object.

Implementation

In Windows 2000 and Windows .NET Server, each object in Active Directory or file on a local NTFS volume has an attribute called the security descriptor that stores the following security information: the owner of the object, the object's DACL, and the object's system access control list (SACL).

The DACL for an object specifies the list of users and groups who are authorized to access the object and also what levels of access they have. The kinds of access that can be assigned to an object depend on the type of object under consideration. For example, a file object can have read access assigned to a user but a printer object cannot. (You cannot read a printer!)

The DACL for an object consists of a list of access control entries (ACEs). A given ACE applies to a class of objects, an object, or an attribute of an object. Each ACE specifies the security identifier (SID) of the security principal to which the ACE applies, as well as the level of access to the object permitted for the security principal. For example, a user or group might have permission to modify all or some of the attributes of the object, or might not even have permission to be aware of the object's existence. In common parlance, DACLs are sometimes simply referred to as access control lists or ACLs, although this is not strictly correct.
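
The following Python sketch models a DACL as an ordered list of ACEs and walks them to answer an access request. The field names and the simple first-deny-wins evaluation are assumptions made for illustration; the actual Windows access-check algorithm is more involved.

```python
# Simplified model of a DACL as an ordered list of ACEs (illustrative only).
from dataclasses import dataclass

@dataclass
class ACE:
    sid: str        # security principal (SID) the entry applies to
    allow: bool     # True = access-allowed ACE, False = access-denied ACE
    rights: set     # e.g., {"read", "write"}

def access_check(dacl, user_sids, requested):
    """Walk the ACEs in order, accumulating granted rights; an explicit
    deny that matches a requested right ends the check immediately."""
    granted = set()
    for ace in dacl:
        if ace.sid not in user_sids:
            continue
        if not ace.allow and ace.rights & requested:
            return False                      # explicit deny wins
        if ace.allow:
            granted |= ace.rights & requested
        if granted == requested:
            return True
    return False

# Hypothetical SIDs: deny write to one user, allow Users read and write.
dacl = [ACE("S-1-5-21-1-1-1-1001", False, {"write"}),
        ACE("S-1-5-32-545", True, {"read", "write"})]
print(access_check(dacl, {"S-1-5-32-545"}, {"read"}))   # True
```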

Notes

The owner of an object always has permission to modify its DACL by granting permissions to other users and groups.

See Also access control ,access control entry (ACE) ,access control list (ACL) ,system access control list (SACL)

disk duplexing

Disk mirroring using separate disk controllers.

Overview

Disk duplexing is a fault-tolerant disk technology that is essentially the same as disk mirroring, except that a separate disk drive controller is used for each mirrored drive. Disk duplexing thus provides two levels of fault tolerance: if one of the mirrored disks fails, the other disk still holds a complete copy of the data, and if one of the controllers fails, the drive attached to the other controller remains accessible.

Advantages and Disadvantages

Besides the additional level of fault tolerance, disk duplexing also provides slightly better read and write performance than disk mirroring because the two controllers provide two separate channels for data transmission. The downside is that disk duplexing is more expensive than disk mirroring since an extra controller card is required.

Microsoft Windows 2000, Windows XP, Windows .NET Server, and Windows NT Server support disk duplexing. You establish disk duplexing on Windows NT systems at the partition level and on Windows 2000, Windows XP, and Windows .NET Server at the volume level. In terms of system recovery and management, there is no difference between disk mirroring and disk duplexing.

See Also disk mirroring ,fault tolerance ,redundant array of independent disks (RAID)

disk imaging

A method for creating an exact duplicate of the software installed on a computer system.

Overview

Disk imaging (also called cloning) is frequently used in enterprise environments as a way of rapidly deploying workstations (and occasionally servers) on a network. The first step in disk imaging is to create the reference system, a computer that has its operating system and applications installed and configured exactly as desired. A snapshot or image of this system is then taken using disk imaging software, and the image is stored on a server on the network. Agent software is then used to boot fresh systems (that is, computers with no operating system installed), connect to the server, download the image, and recreate it bit for bit on the new systems. The resulting systems are exact duplicates of the reference system.

Advantages and Disadvantages

Disk imaging is a much faster way of installing operating systems and applications on machines than the traditional method of running setup programs. A traditional installation of an operating system and suite of applications might take hours, whereas disk imaging often takes only a few minutes.

The downside of the process is that disk imaging is usually successful only when the cloned machines have the exact same hardware configuration as the reference machine. Nevertheless, disk imaging is a speedy and easy way to roll out large numbers of identical workstations and has become widely popular among enterprise administrators.

One further issue that must be considered is licensing: before using disk imaging to roll out a network, make sure that the operating system vendor supports this operation and that it does not violate your licensing agreement.

Marketplace

One of the earliest and still popular disk imaging applications is Ghost from Symantec Corp. In fact, disk imaging is often referred to as "ghosting" in reference to this product's impact on the industry. Other popular tools include ImageCast IC3 from Innovative Software, DriveImage from PowerQuest Corporation, and RapiDeploy from Altiris.

See Also network management ,storage

disk mirroring

A fault-tolerant disk technology that employs two drives containing identical information instead of just one.

Overview

In disk mirroring, both drives are controlled by the same disk controller, and when data is written to the controller, it is copied to both drives. Disk mirroring thus provides a measure of fault tolerance for disk subsystems, since if one drive in the mirror set fails, the other contains an up-to-date copy of all the data and can immediately take over without interruption of services.

Advantages and Disadvantages

In addition to fault tolerance, disk mirroring also provides better write performance than RAID-5 systems. On the other hand, disk mirroring read performance is worse than RAID-5 (though better than for a single disk).

Microsoft Windows 2000, Windows XP, Windows .NET Server, and Windows NT Server support disk mirroring. You establish disk mirroring on Windows NT systems at the partition level and on Windows 2000, Windows XP, and Windows .NET Server at the volume level. A more fault-tolerant form of disk mirroring is disk duplexing, which is discussed in its own article elsewhere in this book.

See Also disk duplexing ,fault tolerance ,redundant array of independent disks (RAID)

Diskperf command

A Microsoft Windows NT, Windows 2000, Windows XP, and Windows .NET Server command for starting and stopping disk performance counters for Performance Monitor (System Monitor in Windows 2000, Windows XP, and Windows .NET Server).

Overview

Counters for the Logical Disk (partition) and Physical Disk (drive) objects are disabled by default because enabling them can impose a performance hit of a few percent on some disk subsystems. You must run the Diskperf command prior to monitoring disk activity with Performance Monitor, and you should disable the counters again when monitoring of the system is completed. You must reboot the system after running Diskperf for the change to take effect.

Diskperf -y sets the system to start all disk performance counters when the computer is restarted, and Diskperf -n sets the system to not use any disk performance counters.

For the full syntax of this command, type diskperf /? on the command line.

See Also Windows commands

disk quotas

A mechanism for managing how much data users can store on disks.

Overview

Disk quotas are a feature of Microsoft Windows 2000, Windows XP, and Windows .NET Server that administrators can use to track and control disk usage on a per-user basis for each NTFS file system (NTFS) volume that the user stores data on. Support for disk quotas is built into the new version of NTFS on Windows 2000, Windows XP, and Windows .NET Server.

Disk quotas are tracked independently for each NTFS volume even if several volumes are on the same physical disk. For purposes of managing disk quotas for users, disk space usage is based on file and folder ownership. Windows 2000, Windows XP, and Windows .NET Server ignore compression when calculating how much disk space a user is utilizing. The unused portion of a user's disk quota is reported as free space to applications that request the amount of available disk space.

Depending on how disk quotas are configured, when a user exceeds the specified disk limit, one of two things can occur: the user can be denied further disk space and have an event logged, or the user can be allowed to continue storing data while an event is logged.

Implementation

To use disk quotas on an NTFS volume, you must first enable this feature when the volume is created and before any users have access to it. Typically, you will begin by setting more restrictive settings for all users, and then relax these settings for users who need more disk space or work with large files.

To enable disk quotas, set quota limits, and specify what happens when users exceed their quotas, use the Quota tab on the property sheet for an NTFS volume. To configure disk quotas for users, you essentially specify two values: the quota limit, which is the maximum amount of disk space a user may consume, and the quota warning threshold, which is the level of usage at which a warning event is logged.

Disk quotas. Configuring disk quotas in Windows 2000.

For example, if the quota limit for a user is set to 10 megabytes (MB) while the quota threshold is specified as 8 MB, an event is logged when the user stores more than 8 MB of data on the volume, and the user is prevented from storing more than 10 MB of data on the volume.
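
The two-value logic in this example can be sketched in a few lines of Python. The names, numbers, and return messages are illustrative only; they model the described behavior rather than the NTFS implementation.

```python
# Sketch of the threshold/limit behavior described above (values in MB).
QUOTA_THRESHOLD_MB = 8    # exceeding this logs a warning event
QUOTA_LIMIT_MB = 10       # exceeding this blocks further writes (if enforced)

def check_quota(current_mb, write_mb, enforce=True):
    new_total = current_mb + write_mb
    if enforce and new_total > QUOTA_LIMIT_MB:
        return "write denied: quota limit exceeded"
    if new_total > QUOTA_THRESHOLD_MB:
        return "write allowed: warning event logged"
    return "write allowed"

print(check_quota(7, 2))   # crosses the 8 MB threshold -> warning logged
print(check_quota(9, 2))   # would exceed the 10 MB limit -> denied
```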

To view the status of disk quotas on an NTFS volume for which this feature has been enabled, open the volume's property sheet and examine the traffic light icon. The light is red when disk quotas are disabled, yellow while Windows rebuilds disk quota information, and green when the disk quota system is active.

If you want to track disk usage by user but do not want to deny users access to a volume, you can enable disk quotas but specify that users can exceed their disk quota limit. Note also that enabling disk quotas incurs slight overhead in file system performance.

See Also NTFS file system (NTFS) ,storage

disk status

Information about whether a disk is healthy or has a problem.

Overview

In Microsoft Windows 2000, Windows XP, and Windows .NET Server, disk status is displayed in the Disk Management portion of the Computer Management console. The Status column shows the status indicators according to the following table. The letter B here indicates that the status indicator can apply to basic disks, and D indicates dynamic disks.

Disk Status

Online (BD): The disk is accessible, and there are no problems.

Online (Errors) (D): I/O errors have been detected. Try reactivating the disk to see whether the errors are transient.

Offline (D): The disk is not accessible and might be corrupted, disconnected, or powered down. Try reactivating the disk, and check the cables and the controller.

Foreign (D): The disk has been moved to this machine from another computer running Windows 2000, Windows XP, or Windows .NET Server. You must import the foreign disk before you can use it.

Unreadable (BD): The disk is not accessible and might be corrupted, have I/O errors, or have failed hardware. Try rescanning the disks or rebooting the system to see whether the disk recovers.

Unrecognized: The disk type is unknown, and the disk is probably from a different operating system, such as UNIX.

No Media: The drive is either a CD-ROM drive or some type of removable drive and has no media in it.

See Also storage

Distance Vector Multicast Routing Protocol (DVMRP)

A multicast routing protocol based on the spanning tree algorithm.

Overview

Distance Vector Multicast Routing Protocol (DVMRP) is one of three routing protocols used for multicast routing, the other two being Protocol Independent Multicast Dense Mode (PIM-DM) and Multicast Open Shortest Path First (MOSPF). DVMRP is used for dense mode multicasting where the hosts receiving the multicast are densely congregated in large pockets of the network. DVMRP assumes that all hosts on connected subnets want to receive the multicast. A typical scenario where DVMRP might be used would be for webcasting sports events, concerts, or corporate presentations.

Implementation

DVMRP floods the network with multicast packets to try to get information out as rapidly as possible to the furthest corners of the network. DVMRP assumes that there is sufficient bandwidth to support this operation. Once a multicast session is established, DVMRP then prunes the routing tree by cutting off multicasts to routers whose connected networks have no hosts receiving them. If a router determines using Internet Group Management Protocol (IGMP) that no downstream hosts or routers require the multicast transmission, it contacts the upstream DVMRP router and asks to be removed from its list for the multicast session.

DVMRP routers check their routing tables frequently to determine the optimal route to nearby DVMRP routers. DVMRP uses broadcasts to keep routing tables up to date, which tends to consume a lot of network bandwidth. As a result, DVMRP should only be implemented on connections that have sufficiently high bandwidth to support it.

See Also dense mode ,Multicast Open Shortest Path First (MOSPF) ,multicasting ,Protocol Independent Multicast-Dense Mode (PIM-DM) ,routing protocol

distance vector routing algorithm

A routing algorithm used by certain types of routing protocols.

Overview

Also called the Bellman-Ford algorithm after its originators, the distance vector routing algorithm is designed to enable routers to maintain up-to-date information in their routing tables. The main alternative to distance vector routing is link state routing, which is discussed in its own article elsewhere in this book.

Implementation

Using the distance vector method, each router on the internetwork maintains a routing table that contains one entry for each possible remote subnet on the network. To do this, each router periodically advertises its routing table information to routers in adjacent subnets. Each routing advertisement contains, for each route in that routing table, the network address of the destination and the cost (typically the number of hops) required to reach it.

These router advertisements are performed independently by all routers (that is, no synchronization exists between advertisements made by different routers). In addition, routers receiving advertisements do not generate acknowledgments, which reduces the overhead of routing protocol traffic.

Routers select the route with the lowest cost to each possible destination and add this to their own routing tables. Routers in adjacent subnets then propagate this information to more distant subnets hop by hop until information from all routers has spread throughout the entire internetwork and convergence (agreement between routing tables on all routers in the internetwork) is attained. The end result is that each router on the network is aware of all remote subnets on the network and has information concerning the shortest path to get to each remote subnet.
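
The following minimal Python sketch shows how a router might fold a neighbor's advertisement into its own table under the distance vector approach: any route through the neighbor that is cheaper than the current entry is adopted. The table layout, router names, and subnets are hypothetical.

```python
# Minimal sketch of merging a neighbor's distance vector advertisement
# into a local routing table (destination -> (cost, next hop)).
def merge_advertisement(table, neighbor, neighbor_table, link_cost=1):
    """Adopt any route through the neighbor that is cheaper than what we have."""
    for dest, cost in neighbor_table.items():
        candidate = cost + link_cost
        if dest not in table or candidate < table[dest][0]:
            table[dest] = (candidate, neighbor)
    return table

local = {"10.1.0.0/16": (1, "direct")}
advert = {"10.1.0.0/16": 0, "10.2.0.0/16": 1, "10.3.0.0/16": 3}
print(merge_advertisement(local, "RouterB", advert))
# {'10.1.0.0/16': (1, 'direct'), '10.2.0.0/16': (2, 'RouterB'),
#  '10.3.0.0/16': (4, 'RouterB')}
```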

Uses

The Routing Information Protocol (RIP), which is supported by Microsoft Windows 2000, Windows XP, and Windows .NET Server, is one example of a dynamic routing protocol that uses the distance vector routing algorithm. Other examples are described in the article "distance vector routing protocol" elsewhere in this chapter.

Advantages and Disadvantages

Distance vector routing protocols (that is, protocols based on the distance vector routing algorithm) are generally simpler to understand and easier to configure than link state routing algorithm protocols. To configure a router that supports distance vector routing, you basically connect the interfaces to the various subnets and turn the router on. The router automatically discovers its neighbors, which add the new router to their routing tables as required.

The main disadvantage of the distance vector routing algorithm is that changes are propagated very slowly throughout a large internetwork because all routing tables must be recalculated. This is called the slow convergence problem. When convergence is slow, routing loops can temporarily form, forwarding packets into black holes on the network and causing information to be lost. Most distance vector routing protocols use a technique such as the split horizon method to ensure that the chances of routing loops forming are extremely small.

Another disadvantage of distance vector routing is that routing tables can become extremely large, making distance vector routing protocols unsuitable for large internetworks, and that route advertising generates a large amount of traffic overhead.

As a result of these issues, distance vector routing is generally best used in internetworks having 50 routers or fewer and in which the largest distance between two subnets is less than 16 hops. Despite these limitations, distance vector routing protocols are more popular than link state routing protocols, mainly because they are easier to set up and maintain (everything is automatic) and because their CPU processing requirements are small, which allows such protocols to be implemented on low-end and mid-end routers.

See Also distance vector routing protocol ,link state routing algorithm ,routing protocol

distance vector routing protocol

Any routing protocol that is based on the distance vector routing algorithm.

Overview

The most popular routing protocols based on the distance vector routing algorithm are the Routing Information Protocol (RIP), which is used on both Internet Protocol (IP) and Internetwork Packet Exchange (IPX) networks, and Cisco's Interior Gateway Routing Protocol (IGRP).

Advantages and Disadvantages

Distance vector routing protocols have some advantages over link state routing protocols: they are easier to configure and manage and their router advertisements are easier to understand and troubleshoot. On the other hand, distance vector routing protocols often have much larger routing tables, greater overhead of router traffic, long convergence times, and they scale poorly. As a result, they are most often used for small to mid-sized internetworks.

See Also distance vector routing algorithm ,Interior Gateway Routing Protocol (IGRP) ,link state routing algorithm ,Routing Information Protocol (RIP) ,routing protocol

distinguished name (DN)

A method for uniquely naming objects within a directory.

Overview

Distinguished names (DNs) are part of the X.500 directory specifications and are used for locating and accessing objects using the Lightweight Directory Access Protocol (LDAP). Directories based on the X.500 or LDAP specifications are hierarchical in structure; for example, in an LDAP directory, the root or top node is a container whose child nodes can be either leaf nodes (end nodes) or containers themselves. An object that is a leaf node has a common name (CN) that identifies it, but this common name is not necessarily unique. By combining the common name with the names of the root node (o node) and any containers and subcontainers (ou nodes) along the branch to the leaf node, a unique name can be constructed for the leaf node. This unique name is called the DN and is unique within the entire directory.

The DN for an object is thus formed by concatenation of the common name or relative distinguished name (RDN) of the object together with the names of each ancestor of the object all the way to the root object of the directory. As a result, if an object is renamed or moved to another container within a directory, its DN changes. Since this is undesirable from the point of view of applications accessing a directory, real-life directories such as Microsoft Corporation's Active Directory directory service use an internal naming scheme for objects that is invariant under object renames and moves (Active Directory uses globally unique identifiers [GUIDs] for internally naming objects).

Examples

DNs are one of the addressing formats for objects within Active Directory in Microsoft Windows 2000. In Active Directory, every object in the directory requires a unique name. Several kinds of names can be used to identify a specific object in Active Directory, including the distinguished name (DN), the relative distinguished name (RDN), the globally unique identifier (GUID), and the user principal name (UPN).

For example, let's consider a User object within Active Directory. A User object is an example of a leaf object because it cannot contain other objects. User objects such as Jeff Smith are identified using CNs. A container is a directory object that can contain other objects. In Active Directory, containers are referred to as organizational units (OUs) because they are used to organize other objects into hierarchies of containers. For example, the user Jeff Smith would typically be contained within the Users container. At the top of the container hierarchy are the containers that represent different components of the domain itself. These components are called domain components (DCs). For example, if user Jeff Smith exists in the microsoft.com domain, the DN for this user is CN=Jeff Smith,OU=Users,DC=microsoft,DC=com.
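
The concatenation described above amounts to joining the relative distinguished names from the leaf object up to the domain root, as the short Python sketch below shows. The names are the hypothetical Jeff Smith example from the text.

```python
# Sketch of building a distinguished name by concatenating RDNs from the
# leaf object up to the root of the directory.
def build_dn(rdns):
    """rdns is ordered from the object itself up to the domain root."""
    return ",".join(rdns)

dn = build_dn(["CN=Jeff Smith", "OU=Users", "DC=microsoft", "DC=com"])
print(dn)   # CN=Jeff Smith,OU=Users,DC=microsoft,DC=com
```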

In Microsoft Exchange Server 5.5, which used its own proprietary directory instead of Active Directory, DNs were also used to identify recipients. Exchange automatically creates a DN for every recipient object in its directory database, including objects such as mailboxes, distribution lists, and public folders. For example, if a user Jeff Smith has a mailbox named JeffS located on an Exchange server in the Redmond site of the organization Microsoft, the DN for this user would be represented internally as /O=Microsoft/OU=Redmond/CN=Recipients/CN=JeffS.

The Message Transfer Agent (MTA) uses a recipient's DN to determine how to route messages to that recipient within an Exchange organization.

See Also Active Directory ,globally unique identifier (GUID) ,Lightweight Directory Access Protocol (LDAP) ,X.500

distributed application

An application consisting of two or more parts that run on different machines but act together seamlessly.

Overview

In the simplest example of a distributed application, the user interface portion runs on a client machine while the processor- or storage-intensive portion runs on a server. This type of distributed application is called a client/server application. Examples of client/server applications include the following:

In a more complex scenario, three tiers are used to create a distributed application. The back end may be an SQL database containing customer or sales information. The front end might be a simple Web browser, used by the employee to request sales information from the database. The middle tier of the distributed application would be the Web application running on an IIS machine.

See Also client/server

Distributed Component Object Model (DCOM)

A Microsoft programming technology for developing distributed applications.

Overview

Distributed Component Object Model (DCOM) is a Microsoft technology developed in 1996 for component- based development of software that is network-aware. Using DCOM, developers can create network-aware applications using Component Object Model (COM) components. DCOM works under various network transports, including Transmission Control Protocol/Internet Protocol (TCP/IP), and is basically a set of extensions to COM. DCOM is the preferred method for building client/server applications in Windows 2000, Windows XP, and Windows .NET Server, and it was formerly known as Network OLE.

Implementation

DCOM functions as a client/server protocol that provides distributed network services to COM, allowing DCOM-enabled software components to communicate over a network in much the same way that COM components communicate with one another on a single machine. DCOM client objects make requests for services from DCOM server objects on different machines on the network using a standard set of interfaces.

Distributed Component Object Model (DCOM). How DCOM works.

The client object cannot call the server object directly. Instead, the operating system intercepts the DCOM request and uses interprocess communication mechanisms such as remote procedure calls (RPCs) to provide a transparent communication mechanism between the client and server objects. The COM run time provides the necessary object-oriented services to the client and server objects. The COM run time also uses the security provider and RPCs to create network frames that conform to the DCOM standard.

In Microsoft Windows 2000, DCOM requests are sent using RPCs. Windows 2000 uses security features such as permissions to enable software components to communicate securely and reliably over the network.

See Also client/server ,Component Object Model (COM)

distributed computing

A programming model that distributes processing across a network.

Overview

The architecture of programming applications has gone through several evolutionary steps over the last few decades. The original mainframe computing model had all the processing done centrally on the mainframe, with dumb terminals used for input and displaying results. The next evolutionary step was client/server computing where the client and server shared different aspects of the processing. Applications had to be specially tailored to take advantage of the client/server processing model. Recently a three-tier model has become dominant, which augments the client/server paradigm with a middle tier that encapsulates business logic in programming constructs.

The next evolutionary step in business computing is the distributed computing model. In distributed computing, there is no single centralized server. Instead, network communications are used to connect servers, clients, handheld devices, and other smart devices into a processing fabric. The different components of this fabric may be scattered widely across a network.

Microsoft Corporation's new .NET platform is designed to maximize processing gains through implementing a distributed computing model for building Web services and applications. Distributed computing is poised to revolutionize business services and how they are developed, and .NET supports peer-to-peer (P2P) computing as one manifestation of its distributed computing model.

See Also client/server ,.NET platform ,peer-to-peer network

Distributed Denial of Service (DDoS)

A type of denial of service (DoS) attack that employs a large number of intermediate hosts to multiply the effect of the attack.

Overview

Although a DoS attack typically uses the attacker's machine or group of machines to launch it, a DDoS attack uses unwitting third-party hosts instead. The result can be devastating if hundreds or even thousands of machines are involved.

Distributed Denial of Service (DDoS). How a DDoS attack works.

To launch a DDoS attack against a specific Web server, an attacker first finds a vulnerable network and installs DDoS tools on hundreds or even thousands of hosts on that network. Common tools used include Tribe FloodNet (TFN), TFN2K, trinoo, Stacheldraht, and many others easily downloaded from locations on the Internet. These compromised hosts become "zombies" under the attacker's control (other terms used to describe them include agents, daemons, or DDoS servers). Linux and Solaris systems have been the most popular systems exploited as zombies so far. A good indicator that an attacker might be trying to compromise your network by installing DDoS tools on hosts is a sudden increase in Rcp (remote copy protocol) traffic, which is often used to install these tools remotely. This kind of activity can be detected by a network intrusion detection system (firewall logs are less help, as attackers usually compromise a single host on the network and then use Rcp to install the tools on other hosts internally on the network from that point on).

Once the intermediate network is compromised and the tools are installed, the attacker launches the attack using a remote management console, and the zombies then attack the target host using a DoS attack method such as SYN flooding, User Datagram Protocol (UDP) flooding, a smurf attack, or Internet Control Message Protocol (ICMP) flooding. The attack is usually overwhelming, typically bringing down the targeted server and often saturating the target's network so that other servers on the network are unable to communicate as well. Address spoofing is used to make it difficult to trace the machines launching the attack, and even if these zombies are identified, it is usually even more difficult to trace the single machine from which the attacker is managing the attack. An even more insidious version of DDoS makes it even harder to track the attacker by employing another layer of intermediate machines called masters, which are controlled by the attacker's management console and which themselves control the zombies. A master is typically the first host compromised on an intermediate network, and a full-scale DDoS attack may employ dozens of intermediate networks, each with its own master and hundreds of zombies.

Prospects

The most famous DDoS attack was launched by Mafiaboy, a Canadian teenager who allegedly brought down a number of major Web sites for hours in February 2000, including Amazon.com, Buy.com, CNN.com, eBay, E-Trade.com, Yahoo!, and ZDNet. The attack was performed using tools that are easily downloadable from a number of sites on the Internet.

There is still no simple solution for protecting sites against DDoS attacks. The main problem is the large number of networks connected to the Internet that are poorly secured and are open to exploitation for using their machines as zombies for launching DDoS attacks on other, better-secured sites. Lack of vigilance on the part of the administrators of these vulnerable networks thus affects the success of everyone's sites on the Internet. Omissions such as allowing directed inbound broadcasts to networks and failing to implement ingress and egress filtering properly on routers on a network's border are typical mistakes by administrators that render corporate networks vulnerable to hijacking for performing DDoS attacks.

One step forward was the establishment in 2001 of the Information Technology Information Sharing and Analysis Center (ITISAC) by the U.S. Department of Commerce. ITISAC is designed to help companies share information about DDoS attacks to better defend against them. Another issue is the impending prospect of organizations being held liable for their poorly secured corporate networks should hackers establish zombies on these networks and use them to launch DDoS attacks against other sites. The Internet Engineering Task Force (IETF) is also involved and has established a working group RFC 2267+ to help strategize on the best way to deal with DDoS attacks. Despite these initiatives, the Internet remains vulnerable to such attacks and will likely continue to be so for the immediate future.

See Also denial of service (DoS) ,hacking ,network security

Distributed file system (Dfs)

A network file system that makes many shares on different file servers look like a single hierarchy of shares on a single file server.

Overview

Distributed file system (Dfs) is a feature of the Microsoft Windows 2000 and Windows .NET Server operating systems and a separately available add-on for the Windows NT operating system. Dfs allows file servers and network shares to be logically organized into a single Dfs directory tree. This simplifies management of network resources and makes it easier for users to locate and access network resources. From the user's perspective, Dfs makes it appear that there is only one server containing a hierarchical tree of resources, while in fact these resources might be distributed across multiple servers in different locations.

Dfs simplifies directory browsing, offers search tools that simplify locating network resources, and offers administrative tools for building and managing Dfs directory trees. It also eliminates the need for Windows 95, Windows 98, or Windows NT Workstation clients to form multiple persistent network connections because users require only one persistent connection to the directory tree.

Implementation

In the Windows 2000 and Windows .NET Server implementation, you first open the Dfs snap-in for Microsoft Management Console (MMC) to create a Dfs root node. You can then create Dfs child nodes under the root node. Each child represents a shared folder that can be located anywhere on the network. When users want to access a resource on the network, they navigate through the Dfs tree and do not need to know the particular server the resource is located on. Users must have Dfs client software installed on their machines. Dfs client software is included with Windows 2000, Windows XP, Windows .NET Server, Windows NT, and Windows 98. An optional Dfs client can be downloaded for Windows 95 from the Microsoft Web site.

You can configure Dfs to operate in two ways: as a stand-alone Dfs root, whose topology is stored on a single server, or as a domain-based (fault-tolerant) Dfs root, whose topology is stored in Active Directory and which can be replicated across multiple servers.

Notes

If a server containing Dfs shares fails, you can simply move the files to another machine, create new shares, and map the existing Dfs child nodes to the new shares. Your users will not even know that anything has changed. If you assign a user permission to access a shared folder, that person automatically has permission to access it through the Dfs tree as well.

See Also file system ,storage

Distributed Management Task Force (DMTF)

An industry consortium that develops specifications for cross-platform management standards.

Overview

The Distributed Management Task Force (DMTF) is a consortium of industry players that are developing specifications and standards for managing disparate systems using a common and ubiquitous tool, the Web browser. The DMTF was formerly known as the Desktop Management Task Force, and it has the support of most major networking and operating system vendors.

The DMTF has been working on several initiatives since its inception, including the Desktop Management Interface (DMI), the Common Information Model (CIM), and Web-Based Enterprise Management (WBEM).

A complementary set of standards for cross-platform network and system administration is being developed by the Internet Engineering Task Force's (IETF) Policy Framework Working Group.

Notes

Windows Management Instrumentation (WMI) is Microsoft Corporation's implementation of the DMTF's WBEM architecture. WMI is a core feature of the Microsoft Windows 2000, Windows XP, and Windows .NET Server operating system platforms.

For More Information

You can find the DMTF at www.dmtf.org

See Also browser (Web browser) , Common Information Model (CIM) , network management, Web-Based Enterprise Management (WBEM), Windows Management Instrumentation (WMI)

distribution box

A fixed or freestanding miniature patch panel in an enclosure.

Overview

Typically, horizontal cable runs are connected to the punchdown blocks within the distribution box, and drop cables are plugged into the RJ-45 ports of the box. You can thus use distribution boxes to provide central cabling points away from walls. Stations can then be plugged and unplugged from an accessible location in the work area instead of from the back of the workstations (after you bend down and crawl behind the machine) or from the LAN drops in the wall (which are often hidden behind desks or other obstacles).

Uses

Use distribution boxes for classrooms and work areas in which computers frequently need to be moved around and rearranged.

See Also patch cable ,patch panel

distribution group

One of two types of groups within Active Directory directory service in Microsoft Windows 2000, the other type being security groups.

Overview

While security groups can be listed in discretionary access control lists (DACLs) to control access to resources and can also be used for sending e-mail to their members, distribution groups can be used only for e-mail purposes. By sending e-mail to a distribution group, you send e-mail to every member of that group.

Distribution groups can be converted to security groups and vice versa as long as the domain is in native mode. You cannot perform conversion if the domain is in mixed mode.

See Also group ,security group

distribution list

A grouping of recipients in Microsoft Exchange Server that you can use to send a single message to multiple users simultaneously.

Overview

When a message is sent to a distribution list, it is sent to all recipients on the list. Distribution lists provide a convenient way of performing mass mailings to users. For example, a marketing department might create several hundred custom recipients for regular customers outside the Exchange organization. These custom recipients can then be included as members within a single distribution list. When the department wants to send e-mail to its customers announcing new products or services, the e-mail can be sent to the distribution list. A computer running Exchange Server (configured to expand distribution lists) makes sure that each custom recipient receives a copy of the message.

distribution server

A server that contains the source files for a software product and that is used to perform remote installations.

Overview

As an example, if you want to perform remote installations of Microsoft Windows 2000 Professional on client machines, you can copy the I386 folder from the CD onto a folder called NTWKS on your file server, and then share this folder. Windows NT Workstation clients can then connect to this share and run the Setup program to upgrade their systems to Windows 2000 Professional.

Notes

When you copy files from the CD to a folder on the server, use the Xcopy command or, if you are using Windows Explorer, be sure to first choose Options from the View menu and select Show All Files. Otherwise, some hidden files will not be copied, and installation will fail.

DLC

Generally refers to the services that the data-link layer of the Open Systems Interconnection (OSI) reference model provides to adjacent layers of the OSI protocol stack. Specifically, Data Link Control (DLC) is a specialized network protocol.

See Also Data Link Control (DLC)

DLL

Stands for dynamic-link library, a file containing executable routines that can be loaded on demand by an application.

See Also dynamic-link library (DLL)

DLT

Stands for Digital Linear Tape, a tape backup technology.

See Also Digital Linear Tape (DLT)

DMI

Stands for Desktop Management Interface, a standard for managing desktop systems developed by the Desktop Management Task Force (DMTF), now the Distributed Management Task Force.

See Also Desktop Management Interface (DMI)

DML

Stands for Data Manipulation Language, a subset of Structured Query Language (SQL).

See Also Data Manipulation Language (DML)

DMS

Stands for Defense Messaging System, a global messaging system for the U.S. Department of Defense (DoD).

See Also Defense Messaging System (DMS)

DMT

Stands for discrete multitone, a line coding technique used in Asymmetric Digital Subscriber Line (ADSL).

See Also discrete multitone (DMT)

DMTF

Stands for Distributed Management Task Force, an industry consortium that develops specifications for cross-platform management standards.

See Also Distributed Management Task Force (DMTF)

DMZ

Stands for demilitarized zone, a former name for perimeter network, a security network at the boundary between a corporate local area network (LAN) and the Internet.

See Also perimeter network

DN

Stands for distinguished name, a method for uniquely naming objects within a directory.

See Also distinguished name (DN)

DNA

Short for Windows DNA, an application development model from Microsoft Corporation for highly adaptable business solutions that use Microsoft's digital nervous system paradigm.

See Also Windows Distributed interNet Applications Architecture (Windows DNA)

DNS

Stands for Domain Name System, a hierarchical naming system for identifying Transmission Control Protocol/Internet Protocol (TCP/IP) hosts on the Internet.

See Also Domain Name System (DNS)

DNS client

A client machine configured to send name resolution queries to a name server.

Overview

A Domain Name System (DNS) client is also called a resolver because it uses name servers to resolve a remote host's name into its Internet Protocol (IP) address. To accomplish this, it sends such a request to a name server (a DNS server), which then returns the IP address of the remote host.

DNS client software is built into operating systems that support TCP/IP to enable client machines to issue DNS queries to name servers. For example, on Microsoft Windows platforms, the DNS client software makes possible the use of DNS names for browsing the Internet using Microsoft Internet Explorer.

In Windows operating systems, you must configure the IP address of the DNS server in the client's TCP/IP property sheet in order for the DNS client software to work properly. With dial-up networking connections to the Internet, this information can be communicated to the client machine during negotiation of the Point-to-Point Protocol (PPP) connection with the Internet service provider (ISP).
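
A resolver lookup can be sketched in a few lines of Python using the standard library; the host name below is illustrative, and the query goes to whichever name server the operating system's resolver is configured to use.

```python
# Sketch of a DNS client (resolver) lookup using Python's standard library.
import socket

hostname = "www.example.com"          # illustrative host name
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")
```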

See Also Domain Name System (DNS) ,name server

DNS console

A snap-in for the Microsoft Management Console (MMC) in Microsoft Windows 2000 and Windows .NET Server that enables administrators to manage Windows 2000 Servers and Windows .NET Server running as Domain Name System (DNS) servers.

Overview

You can use the DNS console to create and configure forward and reverse lookup zones, add and modify resource records, configure how the DNS server handles and forwards queries, and monitor and test the server.

Notes

Windows 2000 Server and Windows .NET Server include a command-line utility, Dnscmd, which can be used for managing certain aspects of DNS servers. This utility can be run from the command prompt or scripted into batch files to automate certain aspects of DNS administration. To use this command, you must install the Windows 2000 Support Tools from the \Support\Tools folder on the Windows 2000 product CD (on Windows .NET Server, Dnscmd is part of the Windows .NET Server Support Tools). Type dnscmd /? to see the syntax for this command.

See Also Domain Name System (DNS) ,DNS server ,Microsoft Management Console (MMC)

DNS database

The collection of database files, or zone files, and associated files that contain resource records for a domain.

Overview

Zone files are stored on a name server and are used to provide name resolution in response to name lookup requests. On Berkeley Internet Name Domain (BIND) name servers and Microsoft Windows NT Domain Name System (DNS) servers, these DNS database files are stored as flat-file database (text) files, that is, simple ASCII files.

On a Windows NT server with the Microsoft DNS Service installed, DNS database files are located in the \System32\DNS directory. The DNS database files in this directory are

On a Windows 2000 or Windows .NET Server DNS server, DNS database information can be either stored in the preceding standard text files or can be integrated into Active Directory directory service, depending on how DNS is installed and configured on the machine. Using Active Directory for storing DNS database information has the benefits of Active Directory's enhanced security features and multimaster replication, providing faster and more efficient replication of DNS zone information than using standard DNS text files.

See Also Active Directory , zone

DNS namespace

The collection of domains, subdomains, and fully qualified domain names (FQDNs) within the Domain Name System (DNS).

Overview

DNS uses a namespace that is hierarchical in structure and is stored as a distributed database on servers called name servers. The term namespace can have two meanings: it can refer to the entire DNS namespace, the complete tree of domain names with the root domain at the top, or to a contiguous portion of that tree, such as the part of the namespace over which a particular name server has authority.

Notes

Active Directory directory service in Microsoft Windows 2000 and Windows .NET Server requires that a DNS namespace be configured in a domain-based implementation of Windows 2000 or Windows .NET Server in an enterprise.

See Also Domain Name System (DNS) ,fully qualified domain name (FQDN) ,name server

DNS query

A request to a name server for name lookup.

Overview

DNS queries can occur in two ways: a DNS client (resolver) can query a name server, or one name server can query another name server.

Queries can be answered by the queried name server from its local Domain Name System (DNS) database, from previously cached query results, or by a referral to another name server.

The three basic kinds of DNS queries are recursive queries, iterative queries, and inverse queries. For more information on these types of DNS queries, see their respective entries in this book.

See Also Domain Name System (DNS) ,host name resolution ,inverse query ,iterative query ,recursive query

DNSSEC

Stands for DNS Security Extensions, an enhancement of the Domain Name System (DNS) that supports cryptographic authentication of domain names.

See Also DNS Security Extensions (DNSSEC)

DNS Security Extensions (DNSSEC)

An enhancement of the Domain Name System (DNS) that supports cryptographic authentication of domain names.

Overview

DNS Security Extensions (DNSSEC) uses public-key encryption and digital signatures to enable name servers to verify the identity of domain names. DNSSEC is a set of enhancements to existing name servers that will allow users to be certain that the Web site they are seeing in response to looking up a URL is in fact the one they desired and not some other "spoofed" site.

History

DNS has long been viewed in many quarters as the weak link in the Internet's chain of protocols. DNS's lack of security makes spoofing domain names easy, allowing malicious users to hijack Web traffic and redirect it to sites different from the ones users intended to access.

The most famous incident of Web site spoofing occurred in 1997. An IT (information technology) consultant in Washington State named Eugene Kashpureff set up servers to hijack Web traffic on the Internet and redirect it away from Network Solutions' InterNIC name servers to his own name servers, dubbed AlterNIC. As a result, a large segment of legitimate traffic on the Internet was disrupted for about a week, with significant financial loss to many involved.

Many other Web site spoofing incidents have occurred since then, highlighting the fundamental weakness in the security of DNS and driving forward the search for solutions that culminated in RFC 2535, which defines DNSSEC.

Implementation

DNSSEC works by enabling name servers to verify the information they exchange with each other using public key cryptography and digital signatures. Typically, a user enters a URL into his or her Web browser to access a site on the Internet. Before the browser can send its Hypertext Transfer Protocol (HTTP) request, the domain name in the URL must be resolved, so a DNS query is sent to the local name server, which is either a corporate name server in a large enterprise or a name server at the Internet service provider (ISP) of a small or medium-sized company. This local name server then contacts a root name server to locate a name server authoritative for the top-level domain in the URL, which directs the local name server to the name server authoritative for the second-level domain in the URL, and so on, until the Internet Protocol (IP) address of the desired Web site can be looked up and communications established. The role of DNSSEC is that while these exchanges are going on between name servers, digital signatures are issued and checked at each stage to confirm that the responses are being received from the actual name servers rather than from spoofing name servers.

DNS Security Extensions (DNSSEC). How DNSSEC works.

DNSSEC works at both the zone and resource record levels to secure DNS information. As defined in RFC 2535, DNSSEC requires several new resource records in name server databases, including the KEY record, which holds a zone's public key; the SIG record, which holds a digital signature over a set of resource records; and the NXT record, which provides authenticated denial of existence for names that do not exist in a zone.

Marketplace

The first DNS name server software to support DNSSEC is also the most popular in the Internet community: the Berkeley Internet Name Domain (BIND). DNSSEC is included in BIND 9, the latest release of the name server software, which is being packaged with most major UNIX operating systems, including those from Sun Microsystems and Hewlett-Packard, as well as with Red Hat Linux. The funding for developing DNSSEC for BIND 9 was provided by the U.S. Defense Information Systems Agency (DISA), and the work was done by NAI Labs and the Internet Software Consortium (ISC).

Although larger enterprises will likely move quickly to upgrade their name servers to support DNSSEC, smaller businesses may balk at the expense involved, particularly in the management of DNSSEC name servers. To meet this need, companies such as Nominum and UltraDNS Corporation are planning to offer outsourced DNSSEC services to such businesses.

Issues

A number of issues may delay the full-scale deployment of DNSSEC on name servers across the Internet (DNSSEC must be widely deployed for it to work reliably), including the extra processing load that signing and verifying records places on name servers, the larger zone files that result from adding keys and signatures, and the administrative complexity of generating, distributing, and periodically replacing keys.

Despite all of these issues, there is still tremendous pressure for DNSSEC to go ahead, and it is likely to be fully deployed across the Internet in the next few years.

Prospects

DNSSEC is seen by many as an essential upgrade for the Internet's DNS infrastructure. Governments, B2B exchanges, financial services firms, and others are likely to be early adopters of this technology. The U.S. military is already taking steps to implement DNSSEC by enhancing existing BIND 8 name servers with additional software to make the .mil top-level domain more secure.

See Also Berkeley Internet Name Domain (BIND) , name server

DNS server

A server that is used to resolve host names or fully qualified domain names (FQDNs) into Internet Protocol (IP) addresses on a TCP/IP network.

Overview

A Domain Name System (DNS) server, which is also called a name server, accomplishes name resolution by accepting DNS queries from DNS clients (called resolvers) and by performing DNS queries among other DNS servers, depending on how the servers have been configured.

The most common type of name server in use on the Internet is the Berkeley Internet Name Domain (BIND) name server, which runs on the UNIX operating system. For Windows 2000 and Windows .NET Server deployments using Active Directory directory service, Microsoft Windows 2000 Server and Windows .NET Server can function as DNS servers and are managed using an administrative tool, the DNS console, which is a snap-in for the Microsoft Management Console (MMC). Windows 2000 and Windows .NET Server DNS servers include advanced capabilities such as dynamic update, which allows DNS records to be updated automatically, for example by Dynamic Host Configuration Protocol (DHCP) clients and servers registering names and addresses as they are assigned. Another feature of Windows 2000 and Windows .NET Server is tight integration of DNS and Active Directory. For example, domain controllers use the NetLogon service to register SRV resource records in the local DNS namespace, and a Windows 2000, Windows XP, or Windows .NET Server client queries for these SRV records when it needs to locate a Windows 2000 or Windows .NET Server domain controller.

Notes

DNS servers can provide a simple means of load balancing connections to heavily used servers, such as Web servers running Internet Information Services (IIS). The method is called Round Robin DNS, and it works as its name implies. Say you have three Web servers hosting identical content and you want to load balance incoming Hypertext Transfer Protocol (HTTP) requests across these servers. You can create three A records in the DNS zone file, each with the same host name but a different IP address, one for each Web server, as shown in this example:

www.northwind.microsoft.com        172.16.8.33
www.northwind.microsoft.com        172.16.8.34
www.northwind.microsoft.com        172.16.8.35

When a DNS client requests resolution of the name www.northwind.microsoft.com into its IP address, the DNS server returns all three IP addresses (.33, .34, .35), and the client chooses the first address (.33) and sends the HTTP request to the Web server associated with this address. The next time the DNS server receives the same name resolution request, it rotates the IP addresses in round-robin fashion (.34, .35, .33) and returns them to the client. The client picks the first address, which is now .34. This way, each DNS name resolution returns a different IP address, and the load is balanced between the Web servers.
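The following Python sketch (not actual DNS server code) shows the kind of rotation a Round Robin DNS server performs on the address list from the preceding example:

# Addresses from the zone file example above
addresses = ["172.16.8.33", "172.16.8.34", "172.16.8.35"]

def answer_query(addrs):
    """Return the current ordering, then rotate the list for the next query."""
    response = list(addrs)
    addrs.append(addrs.pop(0))   # the first address moves to the back of the list
    return response

for _ in range(4):
    print(answer_query(addresses)[0])   # the address each successive client will use
# Prints .33, .34, .35, and then .33 again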

The drawback to using Round Robin DNS is that if a server fails, DNS will continue to return the address of the failed server.

See Also Berkeley Internet Name Domain (BIND)

DOCSIS

Stands for Data Over Cable Interface Specification, a specification defining standards for implementation of cable modem systems.

See Also Data Over Cable Interface Specification (DOCSIS)

Document Object Model (DOM)

A set of programming interfaces for developing and managing Extensible Markup Language (XML) documents.

Overview

Document Object Model (DOM) has been a major driving force behind the development and proliferation of XML tools and standards. This is because DOM provides programmers with standard application programming interfaces (APIs) for representing and manipulating XML data. XML parsers based on DOM are widely implemented in Web browsers and other Web client tools, and the evolution of DOM is in large part driving the development of these tools.

DOM is a recommendation of the World Wide Web Consortium (W3C), and in its original version, it lets programmers model the data hierarchy of XML documents as objects that can be manipulated in Java, JavaScript, C++, and other languages. Using these and other languages, programmers can write scripts and applications that can dynamically modify the content, style, and structure of XML documents.
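The following Python sketch uses the standard library's xml.dom.minidom module to illustrate the style of manipulation DOM makes possible; the XML content is purely illustrative:

from xml.dom import minidom

doc = minidom.parseString("<catalog><item>Widget</item></catalog>")

# Read existing content through the object model
item = doc.getElementsByTagName("item")[0]
print(item.firstChild.data)        # Widget

# Dynamically modify the document's structure and content
new_item = doc.createElement("item")
new_item.appendChild(doc.createTextNode("Gadget"))
doc.documentElement.appendChild(new_item)

print(doc.toxml())                 # the catalog now contains both items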

DOM was originally targeted toward small XML documents that could be loaded into memory in their entirety and manipulated using repetitive operations. Because XML documents can be relatively large, a newer version called DOM 2 has been developed that includes a number of features not supported by the original DOM, including support for XML namespaces, cascading style sheets (CSS), an event model, and interfaces for traversing documents and manipulating ranges within them.

The W3C DOM Working Group is currently at work on DOM 3.

Marketplace

Support for DOM is included in products from many vendors of XML products, including Microsoft Corporation, Oracle Corporation, IBM, SoftQuad, and others. Microsoft Internet Explorer 5.5 also supports key aspects of DOM 2, including APIs for custom namespaces and cascading style sheets.

See Also World Wide Web Consortium (W3C) ,XML

document type definition (DTD)

A file that defines the allowed structure of an Extensible Markup Language (XML) document.

Overview

Document type definitions (DTDs) are used in XML to specify the structure of documents. For example, Hypertext Markup Language (HTML) documents are defined by the HTML DTD, which is implemented in Web browsers such as Microsoft Internet Explorer and Web page editing software such as Microsoft FrontPage. The DTD specifies which kinds of tags are mandatory and which are optional, and also the range of allowed values for each type of tag.

DTD originated as part of the Standard Generalized Markup Language (SGML) specification developed by the International Organization for Standardization (ISO) in the 1980s. DTD is now a basic part of the XML specification.

Implementation

DTDs are implemented as plain-text files. They use a highly condensed syntax that can be parsed rapidly and efficiently. Schemas are another feature of XML and provide functionality similar to DTDs. The main difference is that schemas are more verbose than DTDs because they are written in XML itself rather than in the more concise (but less readable) DTD language.
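The following sketch shows the condensed DTD syntax in the form of an internal DTD subset embedded in a small XML document; the element names are purely illustrative, and Python's xml.dom.minidom parses the document but does not validate it against the DTD:

from xml.dom import minidom

xml_text = """<?xml version="1.0"?>
<!DOCTYPE note [
  <!ELEMENT note (to, body)>
  <!ELEMENT to   (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
]>
<note><to>Operations</to><body>Server maintenance at 2200 hours.</body></note>"""

doc = minidom.parseString(xml_text)
print(doc.documentElement.tagName)                           # note
print(doc.getElementsByTagName("body")[0].firstChild.data)   # the body text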

See Also Hypertext Markup Language (HTML) ,XML

DoD model

A networking model developed by the U.S. Department of Defense (DoD).

Overview

The DoD model was developed in the 1970s as the networking model for the Transmission Control Protocol/Internet Protocol (TCP/IP), which was first deployed in the ARPANET project and is now the worldwide de facto standard for the Internet. The DoD model is older than the seven-layer Open Systems Interconnection (OSI) model, which was developed in the 1980s by the International Organization for Standardization (ISO) to provide a more detailed framework for developing network protocols.

Architecture

The DoD model consists of four layers, which map loosely onto the seven-layer OSI model. Describing the DoD layers from the top down, they are the Application (or Process) layer, which corresponds roughly to the Application, Presentation, and Session layers of the OSI model; the Host-to-Host layer, which corresponds to the Transport layer; the Internet layer, which corresponds to the Network layer; and the Network Access layer, which corresponds to the Data-Link and Physical layers.

See Also OSI model ,Transmission Control Protocol/Internet Protocol (TCP/IP)

DOM

Stands for Document Object Model, a set of programming interfaces for developing and managing Extensible Markup Language (XML) documents.

See Also Document Object Model (DOM)

Domain Admins

A built-in group on Microsoft Windows NT, Windows 2000, and Windows .NET Server networks for users who need administrator-level access to systems.

Overview

The Domain Admins group simplifies administration of users on the network. It is a global group and does not have any preassigned system rights. Its initial membership consists of the single built-in user account Administrator. Other user accounts added to this group gain rights and privileges equivalent to those of the Administrator account. All network administrators in a given domain should be members of this group.

Notes

On Windows 2000- and Windows .NET Server-based networks, the Domain Admins group is created by default in the Users organizational unit (OU) within Active Directory directory service.

See Also built-in group

domain blocking

The ability to block traffic from a specific Domain Name System (DNS) domain.

Overview

Domain blocking is a security technology used on Microsoft Internet Information Server (IIS) Web servers to protect them from undesirable traffic. It was first introduced in IIS 4.

Domain blocking allows IIS administrators to grant or deny access to content on the server based on a client's Internet Protocol (IP) address, subnet, or Internet domain name. This is a useful security feature for protecting machines running Internet Information Server from repeated attack by hackers. Domain blocking can be applied at various levels, including the entire Web server, individual Web sites, virtual directories, and individual files.
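The following Python sketch illustrates the kind of grant-or-deny decision domain blocking makes for each incoming request; the addresses and subnet shown are illustrative and are not IIS defaults:

import ipaddress

DENIED_SUBNETS = [ipaddress.ip_network("192.0.2.0/24")]
DENIED_HOSTS = {ipaddress.ip_address("198.51.100.7")}

def is_blocked(client_ip):
    """Deny the request if the client falls within a blocked host or subnet."""
    ip = ipaddress.ip_address(client_ip)
    return ip in DENIED_HOSTS or any(ip in net for net in DENIED_SUBNETS)

print(is_blocked("192.0.2.15"))     # True - inside a denied subnet
print(is_blocked("203.0.113.9"))    # False - request would be allowed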

See Also domain name ,IP address ,subnet

domain controller

A server running Microsoft Windows NT, Windows 2000, or Windows .NET Server that manages security for the domain.

Overview

Users and computers that need to obtain access to network resources within the domain must be authenticated by a domain controller in the domain. Windows NT domain controllers are the foundation of Windows NT Directory Services (NTDS), and Windows 2000 and Windows .NET Server domain controllers are based on Active Directory directory service.

In a Windows NT-based network, the domain controllers form a hierarchy. There are two types of Windows NT domain controller: the primary domain controller (PDC), which holds the master, writeable copy of the domain directory database, and backup domain controllers (BDCs), which hold read-only copies of that database and can authenticate users but cannot accept changes to the database.

A Windows 2000 or Windows .NET Server domain controller is any server running Windows 2000 Server or Windows .NET Server with the optional Active Directory installed. Windows 2000 and Windows .NET Server domain controllers contain a complete, writeable copy of the Active Directory information for the domain in which they are installed; run the Active Directory Installation Wizard to promote any Windows 2000 or Windows .NET Server member server to the role of domain controller. A domain controller manages information in the Active Directory database and enables users to log on to the domain, be authenticated for access to resources in the domain, and search the directory for information about users and network resources.

Unlike in a Windows NT-based network, where domain controllers are in a hierarchy, all domain controllers in a Windows 2000- or Windows .NET Server-based network are equal, and changes to the domain directory database can be made at any domain controller. Replication of directory information between Windows 2000 and Windows .NET Server domain controllers follows a multimaster model. In this configuration, each domain controller acts as a peer to all other domain controllers. In other words, there are no primary or backup domain controllers in Windows 2000 or Windows .NET Server, only domain controllers.

Although most domain controller operations (sending and receiving updates that add, move, copy, and delete objects from the directory) are multimaster in nature, a few operations function on only certain domain controllers. These special functions are called flexible single-master operations (FSMOs) and include modifying the schema, assigning pools of relative identifiers to other domain controllers within a domain, emulating a Windows NT PDC for downlevel compatibility with Windows NT domain controllers, updating security identifiers and distinguished names in cross-domain objects when such an object is moved or deleted, and ensuring that all domain names within a forest are unique.

Uses

An important issue regarding domain controllers in Windows 2000- and Windows .NET Server-based networks is where to place them. After an administrator implements Active Directory and populates its initial information, most Active Directory-related traffic will come from users querying for network resources. The key to optimizing user queries is in how you locate the domain controllers and the global catalog servers on your network. Placing a domain controller at each physical site optimizes query traffic but increases replication traffic between sites. Nevertheless, the best configuration is usually to place at least one domain controller at each site with a significant number of users and computers.

In a pure Windows 2000 networking environment, domains can be switched to run in native mode. If you have a mix of Windows NT 4 and Windows 2000 domain controllers, the domain must remain in mixed mode.

Notes

To upgrade a Windows NT-based network to Windows 2000, upgrade the PDC first. This allows the domain to immediately join a domain tree, and administrators can administer the domain using the administrative tools of Windows 2000 and create and configure objects in Active Directory.

If you need to move a Windows NT domain controller to a new domain, you must reinstall Windows NT. Domain controllers cannot migrate from one domain to another because when you create a domain, a unique security identifier (SID) is created to identify the domain, and domain controllers have this SID hard-coded into their domain directory database.

You can use the administrative tool Active Directory Users and Computers to convert a Windows 2000 domain controller from mixed mode to native mode. However, domain controllers running in native mode cannot be changed to mixed mode. If you create a new domain controller for an existing Windows 2000 domain, this new domain controller is referred to as a replica domain controller. Replica domain controllers are typically created to provide fault tolerance and better support for users who access resources over the network.

See Also Active Directory , flexible single-master operation (FSMO)

domain (DNS)

A collection of related hosts in the hierarchical Domain Name System (DNS).

Overview

Domains are the building blocks of DNS. A domain consists of a group of nodes in the DNS namespace that have the same domain name. Domains are organized hierarchically in the DNS namespace, with the topmost domain called the root domain.

DNS domains can be classified according to their position in the namespace hierarchy: the root domain, top-level domains, second-level domains, and subdomains of second-level domains.

Domain names can include only the characters a-z, A-Z, and 0-9, the dash (-), and the period. A name that completely identifies a host in the DNS namespace is called a fully qualified domain name (FQDN).

See Also Domain Name System (DNS)

domain forest

Also called simply a forest, a logical structure formed by combining two or more Microsoft Windows 2000 or Windows .NET Server domain trees.

Overview

Forests provide a way of administering enterprise networks for a company whose subsidiaries each manage their own network users and resources. For example, a company called Contoso, Ltd. might have a domain tree with the root domain contoso.com, and a subsidiary company called Fabrikam, Inc. might have a domain tree with the root domain fabrikam.com. Note that these two companies do not share a contiguous portion of the DNS namespace; this is typical of trees in a forest. The two companies might want to administer their own users and resources but make those resources available to each other's users. They can combine the two domain trees into a forest by establishing a two-way transitive trust between the root domains of the two trees.

Domain forest. Example of a domain forest.

All trees in a forest must share a common directory schema and global catalog. The global catalog holds information about all objects in all domains of the forest and acts as an index of all users and resources for all domains in the forest. By searching the global catalog, a user in one domain can locate resources anywhere in the forest. The global catalog contains only a subset of the attributes of each object. This ensures fast searches for users trying to locate network resources.

See Also domain (Microsoft Windows) ,domain tree

Domain Guests

A built-in group on Microsoft Windows 2000 and Windows .NET Server networks.

Overview

The Domain Guests group simplifies administration of users on the network. It is a global group and does not have any preassigned system rights. Its initial membership consists of the single built-in user account Guest. Other user accounts added to this group gain rights and privileges equivalent to those of the Guest account. Domain Guests are typically users who are given occasional, temporary access to network resources.

Notes

The Domain Guests group is created by default in the Users organizational unit (OU) within Active Directory directory service. Normally, the only member of this group is the Guest account, but when Internet Information Services (IIS) is installed, additional guest accounts are created for use by IIS.

See Also built-in group

domain local group

A type of group in a Microsoft Windows 2000- or Windows .NET Server-based network.

Overview

Windows 2000 and Windows .NET Server use groups to organize users or computer objects for administrative purposes. Groups can have different scopes, or levels of functionality. The scope of a group can be a single domain, a group of domains connected by trust relationships, or the entire network.

Domain local groups are Windows 2000 and Windows .NET Server groups whose scope is restricted to the specific domain in which they are defined. Domain local groups are used to provide users with access to network resources and to assign permissions to control access to these resources. Domain local groups have open membership, which means that you can add members from any domain to them.

To use a domain local group, you first determine which users have similar job responsibilities in your enterprise. Then you identify a common set of network resources in a domain that these users might need to access. Next, you create a domain local group for the users and assign the group appropriate permissions to the network resources. This procedure is called A-G-DL-P (accounts, global groups, domain local groups, permissions), a variation of the AGLP administration paradigm used in Windows NT-based networks.

Notes

If network resources within a domain are used only within the domain, you can group users in the domain using domain local groups. If your scope of resource usage is several domains linked by trust relationships, use global groups instead. If your network is a pure Windows 2000- or Windows .NET Server-based network and your domain controllers are running in native mode, you can use universal groups as well.

See Also global group ,universal group

domain master browser

A role of a browser computer on a Microsoft Windows-based network.

Overview

Domain master browser is one of the browser roles for the computer browser service on Windows networks. A domain master browser must be a machine running Windows NT, Windows 2000, or Windows .NET Server.

The role of the domain master browser is to collect the master list of available network resources in the domain. The domain master browser then distributes this list to master browsers on each subnet. If the domain has only one subnet, the domain master browser is also the master browser for that subnet.

Notes

A Windows NT domain has only one domain master browser, which is always the primary domain controller (PDC).

See Also Computer Browser service

domain (Microsoft Windows)

A model developed by Microsoft Corporation for grouping computers together for administrative and security purposes.

Overview

Computers on a Microsoft Windows NT, Windows 2000, or Windows .NET Server network that are in the same domain share a common directory database of security information such as user accounts, passwords, and password policies. Domains can span geographical boundaries and networks; an enterprise can have branches in several continents with all machines belonging to a single domain. Alternatively, a single network or location can have multiple domains installed, with or without trust relationships between them.

Domain-based networks offer features such as centralized administration of user accounts and security policy, a single logon that gives users access to resources throughout the domain for which they have permissions, and scalability to networks of virtually any size.

Implementation

Domains can contain different types of computers, that is, computers performing various roles. Typically, the members of a domain include domain controllers, member servers, and client workstations.

A Windows NT or Windows 2000 network can be installed as either a domain or a workgroup. The domain model is preferable because it allows computers to share a common security policy and a common domain directory database. Machines running Windows Millennium Edition (Me), Windows 98, Windows 95, or legacy 16-bit Windows can also participate in domain security on Windows NT and Windows 2000 networks, but they are not considered full members of the domain because they have no computer accounts in the domain directory database.

A Windows NT domain has one and only one primary domain controller (PDC) and can have a number of backup domain controllers (BDCs). Creating the PDC creates a new domain, which Windows NT member servers and workstations can then join.

Domain (Microsoft Windows). This illustration shows a Windows 2000 domain.

Windows 2000 domains use peer domain controllers, which are all equal in status. In Windows 2000, domains are core entities within Active Directory and act as a boundary for network security and for the replication of directory information over the network. If you establish a security policy in one domain, the settings, rights, and discretionary access control lists (DACLs) of that policy are limited to that domain. Domains are also the fundamental containers for all network objects within them. Domains contain users, groups, computers, and other directory objects. These objects can be grouped together using a hierarchy of organizational units (OUs).

See Also Active Directory , workgroup

domain model

A model for building an enterprise-level network using Microsoft Windows NT or Windows 2000 domains.

Overview

The domain models that Windows NT uses for building enterprise networks differ radically from those used by Windows 2000 and Windows .NET Server. In Windows NT networks, four main domain models can be implemented: the single domain model, the single master domain model, the multiple master domain model, and the complete trust model.

Because of their two-way transitive trusts between domains, Windows 2000 and Windows .NET Server are capable of building more flexible domain structures than Windows NT. In addition to the single domain model above, Windows 2000 and Windows .NET Server domains can be linked together hierarchically in domain trees, and domain trees can be joined at their roots to form domain forests.

Windows NT, Windows 2000, and Windows .NET Server can be scaled for implementation in enterprise-level businesses that support thousands of users and cover geographically diverse regions, with Windows 2000 and Windows .NET Server being the more scalable platforms. Choosing the correct domain model for implementing your network can greatly simplify administration of your network.

See Also domain (Microsoft Windows), domain tree, forest

domain modes

A mode of operation for domain controllers in Microsoft Windows 2000- and Windows .NET Server-based networks.

Overview

Windows 2000 and Windows .NET Server domain controllers are computers that contain a writeable copy of Active Directory directory service. You can convert a server running Windows 2000 Server or Windows .NET Server into a domain controller by running the Active Directory Installation Wizard on that machine. You can run Windows 2000 or Windows .NET Server domain controllers in either of two modes (Windows .NET Server also introduces a new mode, called "Windows .NET version 2002," that is exclusively for .NET Server-based networks): mixed mode, which supports a mixture of Windows NT and Windows 2000 or Windows .NET Server domain controllers in the same domain, and native mode, which supports only Windows 2000 and Windows .NET Server domain controllers.

Notes

By default, Active Directory is installed on a Windows 2000 server or Windows .NET Server in mixed mode. You can change a domain controller from mixed mode to native mode, but not vice versa. Use the administrative tool Active Directory Users and Computers to perform the change.

See Also Active Directory , mixed mode, native mode

domain name

A name for a domain within the Domain Name System (DNS).

Overview

Domain names are used by companies and organizations to provide a uniform naming scheme for hosts on their Transmission Control Protocol/Internet Protocol (TCP/IP) internetworks and for providing friendly names for Web servers, mail servers, and other servers that are exposed to the Internet.

Domain names must be registered with a domain name registrar before they can be used. Formerly this meant registering with the Internet Network Information Center (InterNIC) and later with Network Solutions. Today a number of different domain name registrars exist, each of which must be accredited by the Internet Corporation for Assigned Names and Numbers (ICANN) in order to operate. ICANN is the ultimate controlling organization for domain names and determines both generic and country-specific top-level domains (TLDs).

Uses

Owning a domain name is essential in today's business world as the dot-com enterprise becomes the standard model for business. A company's domain name typically reflects the company's trademark name or logo; for example, Microsoft Corporation owns the microsoft.com domain. Because company names can be registered at the state or federal level, companies in different states or countries might want to register identical domain names. Unfortunately, the DNS was not established with these trademark issues in mind, and the courtroom has become a common arena for resolving domain name ownership disputes.

The most popular domain names that companies register are the .com domain names. These were originally intended for commercial enterprises only, but they are also used by individuals, nonprofit organizations, and other entities that want to establish a clear presence on the Internet. About 75 percent of all registered domain names belong to the .com top-level domain. As of 2000, there were about 15 million registered domain names worldwide, and this figure is climbing rapidly.

Prospects

The Internet and its Domain Name System were developed in the United States, and the fact that the Internet is now a worldwide entity has put new pressures on DNS. Countries and regions have pressed for modifications to DNS that would allow domain names to be registered in languages other than English, including languages such as Chinese that do not use Roman alphabet characters. The Internet Engineering Task Force (IETF) has established an Internationalized Domain Name (IDN) working group to develop such a system, which is likely to be based on the Unicode standard for internationalization of alphabets. The goal is to implement non-English DNS with minimal modifications to DNS itself.

ICANN is also in the process of creating additional top-level domains as alternatives for companies that are unable to register suitable .com domain names and for other purposes such as personal home pages. For the latest developments in this regard, see the ICANN Web site at www.icann.org.

Notes

In an interesting development, a commercial company called New.net has created its own new set of TLDs and is selling domain names based on them without ICANN's approval. This is probably an attempt to apply market pressure to what is often perceived as ICANN's slow, politically driven process for creating new TLDs. New.net is collaborating with the commercial DNS provider UltraDNS to provide support for these new TLDs on name servers. To do this, Internet service providers (ISPs) install special software on their own name servers that allows client lookups for these new TLDs to be forwarded directly to UltraDNS's name servers. Users whose ISPs do not have such software installed on their name servers can download a free plug-in from New.net that allows them to access sites using these TLDs.

For More Information

You can find a list of ICANN-accredited domain name registrars at www.ispworld.com/isp/ICANN.htm.

See Also Domain Name System (DNS) ,fully qualified domain name (FQDN) ,top-level domain (TLD)

Domain Name System (DNS)

A hierarchical naming system for identifying Transmission Control Protocol/Internet Protocol (TCP/IP) hosts on the Internet.

Overview

The Domain Name System (DNS) is a distributed, hierarchical system that provides a method for naming TCP/IP hosts using fully qualified domain names (FQDNs) and a mechanism for resolving these names into their Internet Protocol (IP) addresses.

History

Until 1983, computers on the Internet used locally stored Hosts files to perform name resolution. Hosts files are text files that are essentially lists of FQDNs and their associated IP addresses for remote hosts that the local host may need to communicate with. With the rapid growth of the Internet, however, updating these Hosts files became an unmanageable chore: each machine had to have its own local Hosts file that needed to be kept up to date, and whenever a host was added to the Internet or removed from it, every Hosts file needed an appropriate modification.

To solve this problem, the hierarchical DNS naming system was invented in 1984 and name servers were deployed across the Internet. The original RFCs 882 and 883 that defined DNS have since been replaced by RFCs 1034 and 1035, and further RFCs define features that have been added to DNS since then.

Today DNS functions like a backbone for the Internet; without DNS, network communications over the Internet as we know them would not be possible.

Uses

Although the primary use of DNS is as the naming system for the Internet, large private TCP/IP internetworks sometimes used DNS internally with their own private name servers. This practice is rare now, however, as most private networks are now connected to the Internet.

DNS is essential to the operation of Active Directory directory service in Windows 2000 and Windows .NET Server. Active Directory uses the DNS as its naming system for Windows 2000 and Windows .NET Server hosts, and DNS servers provide name resolution services to enable network communications to take place on a Windows 2000- or Windows .NET Server-based network.

Not all name servers support the DNS features that Active Directory requires. In order to implement Active Directory, name servers must support the SRV records described in RFC 2052bis (SRV records enable Windows 2000 and Windows .NET Server clients, including Windows XP, to locate a domain controller for network authentication purposes). In addition, support for dynamic updates as described in RFC 2136 and support for incremental zone transfer (IXFR) as described in RFC 1995 are recommended for name servers that will support Active Directory.

Windows 2000 and Windows .NET Server DNS service supports all of the above features by default. If you want to use Berkeley Internet Name Domain (BIND) name servers to support an Active Directory implementation, you must use at least BIND 8.1.2 and preferably BIND 8.2 or higher. Make sure you also disable name checking so your BIND name servers will ignore the illegal underscore character used by Active Directory's version of DNS.

Note that BIND name servers store zone information in text files. In Windows 2000 and Windows .NET Server, zone information can either be stored in text files or be stored in, and replicated using, Active Directory.

Implementation

For DNS to work, four elements are required: a hierarchical namespace for naming hosts, name servers that store and serve portions of this namespace, resolvers (DNS clients) that query name servers, and registrars through which domain names are obtained. Each of these elements is described in the paragraphs that follow.

The DNS naming system assigns a unique name to each TCP/IP host on the Internet. Here the word host typically refers to servers, workstations, routers, TCP/IP printers, and similar devices. This unique name for a host is called the fully qualified domain name (FQDN) of the host.

The DNS namespace is defined as the collection of all FQDNs for all hosts on the Internet. This DNS namespace is hierarchical in structure, beginning with the root domain, which branches into top-level domains, then second-level domains, and so on down to the individual host name. The FQDN barney.northwind.microsoft.com can be broken down as follows: com is the top-level domain, microsoft is a second-level domain, northwind is a subdomain of microsoft.com, and barney is the host name of an individual computer.

The root domain has a null label and is not expressed in the FQDN.
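The hierarchy is easy to see if you split the example FQDN into its labels, as in this short Python sketch:

fqdn = "barney.northwind.microsoft.com"
print(fqdn.split("."))   # ['barney', 'northwind', 'microsoft', 'com']
# Read from right to left, the labels move from the top-level domain down to the host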

The DNS namespace is stored as a distributed database on name servers located at various points on the Internet. Each name server on the Internet is responsible for a subset of the DNS namespace known as a zone. Each zone can consist of one or more domains and subdomains over which the zone is said to be authoritative.

The most important name servers on the Internet are the dozen or so root name servers, which are responsible for maintaining the infrastructure of the domain name system. These root name servers are maintained mostly by the Internet Network Information Center (InterNIC) and by U.S. military agencies (because the Internet evolved from the ARPANET project of the U.S. Defense Department in the 1970s).

Name servers typically store their DNS information in text files called zone files, which consist of a series of resource records (but Microsoft Windows 2000 and Windows .NET Server can do this differently, as discussed later in this article). There are many types of resource records, but the most common type is the A (address) record, which maps the host name or FQDN of a single host to its IP address.

The main function of a name server is to answer queries from DNS clients called resolvers. A resolver contacts a name server, asking it for the IP address associated with a given host name or FQDN. The name server then replies to the resolver with the IP address of the host or contacts a different name server for this information if it is unable to provide it itself.
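The following Python sketch is a toy model of this process of following referrals from one name server to the next; the dictionaries stand in for real name servers, and the names and address are illustrative:

ROOT = {"com": "COM_SERVER"}
COM_SERVER = {"microsoft.com": "MICROSOFT_SERVER"}
MICROSOFT_SERVER = {"www.microsoft.com": "10.0.0.80"}
SERVERS = {"ROOT": ROOT, "COM_SERVER": COM_SERVER, "MICROSOFT_SERVER": MICROSOFT_SERVER}

def resolve(fqdn):
    """Start at a root name server and follow referrals until an address is found."""
    server = SERVERS["ROOT"]
    while True:
        answer = next(v for k, v in server.items() if fqdn.endswith(k))
        if answer not in SERVERS:      # an actual IP address, not a referral
            return answer
        server = SERVERS[answer]       # follow the referral to the next name server

print(resolve("www.microsoft.com"))    # 10.0.0.80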

All name servers can answer queries from resolvers or forward these queries to other name servers. However, name servers also function in four specific roles within the Domain Name System: as primary (master) name servers, which hold the authoritative, writeable copy of a zone; as secondary (slave) name servers, which hold read-only copies of zone data obtained through zone transfers; as caching-only name servers, which host no zones of their own and simply cache the results of queries they forward; and as forwarders, to which other name servers are configured to send the queries they cannot answer.

Finally, companies and organizations can obtain domain names for their networks from domain name registrars. These registrars are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), which now oversees the domain name registration process that was formerly handled centrally by InterNIC.

Marketplace

Most ISPs run their own name servers (usually BIND) and implement registered DNS domains for a small fee to allow companies to run Web servers and mail servers accessible from the Internet. A new type of player is the pure-play DNS service provider, which runs its own dedicated DNS name servers and provides managed, high-availability DNS services to companies for a monthly service charge. One such company is UltraDNS, which uses a network of redundant DNS servers for failover support and locates these servers near the Internet's backbone for best performance.

Issues

Security is the most important issue regarding DNS. Since DNS was designed to be completely open (any host on the Internet may query any name server), it is inherently insecure and open to attack. The most important type of attack is the denial of service (DoS) attack, which floods a name server with so many false DNS queries that it is unable to respond to real queries. Access to a company's Web server can be completely stopped through DoS attacks on name servers.

Since DNS is critical to the operation of the Internet, it is also viewed sometimes as a prize target by hackers seeking to disrupt the Internet's operation (actually these are more like anarchists than hackers, since a true hacker needs DNS in order to intrude into networks connected to the Internet). BIND name servers, which are the primary form of name server used on the Internet, have had a number of well-publicized vulnerabilities discovered in recent years, and an essential task of Internet service providers (ISPs) is keeping up with patches for their BIND name servers to protect them against attack.

Prospects

DNS will continue to play an essential role for the Internet, but with the transition to the new IPv6 addressing scheme, DNS needs a few modifications. The most important modifications are the inclusion of two new resource record types: the AAAA record, which maps a host name to a 128-bit IPv6 address, and the A6 record, a newer record type designed to better support IPv6 address aggregation and renumbering.

In addition, reverse name resolution using PTR (pointer) records is different in IPv6 DNS. Specifically, while IPv4 DNS uses the in-addr.arpa domain for such reverse name resolution, IPv6 DNS uses a new domain called IP6.INT.

Finally, during the transition from IPv4 to IPv6, both versions of DNS will need to be supported (probably for many years), and to make this possible, the AAAA record was created to enable IPv4 name servers to provide IPv6 addresses to resolvers performing name lookups.

AAAA records are supported by BIND 8.1 and higher, while IPv6 DNS is supported by BIND 9.

Notes

On smaller TCP/IP networks, Hosts files can be used instead of DNS to support name lookups, while on legacy Windows NT-based networks, Windows Internet Name Service (WINS) provided NetBIOS name resolution as an alternative to DNS. WINS is also supported by Windows 2000 and Windows .NET Server but only for downlevel compatibility with Windows NT in networks that have not been fully upgraded from Windows NT to Windows 2000 or Windows .NET Server.

Windows 2000 and Windows .NET Server also use other naming conventions besides DNS names. For example, objects within Active Directory can also be located using user principal names (UPNs) such as user@domain.com, Lightweight Directory Access Protocol (LDAP) distinguished names, and Universal Naming Convention (UNC) names for shared resources.

See Also domain (DNS), domain name, fully qualified domain name (FQDN), hosts file, name server, resource record (RR), root name server, zone

domain tree

A hierarchical grouping of Microsoft Windows 2000 or Windows .NET Server domains.

Overview

Domain trees are created by adding one or more child domains to an existing parent domain. Domain trees are used to make a domain's network resources globally available to users in other domains.

Domain tree. Example of a domain tree.

In a domain tree, all domains share their resources and security information to act as a single administrative unit. A user who logs on anywhere in a domain tree can access file, printer, and other shared resources anywhere in the tree if he or she has appropriate permissions. A domain tree has only one Active Directory, but each domain controller in a tree maintains only the portion of Active Directory directory service that represents the objects in that particular domain.

Domains in a domain tree are joined using two-way transitive trusts. These trusts enable each domain in the tree to trust the authority of every other domain in the tree for user authentication. This means that when a domain joins a domain tree, it automatically trusts every domain in the tree.

For child domains to be part of a domain tree, they must share a contiguous namespace with the parent domain. The namespace of a Windows 2000 or Windows .NET Server domain is based on the Domain Name System (DNS) naming scheme. For example, in the illustration, the child domains proseware.contoso.com and adatum.contoso.com share the same namespace as the parent domain contoso.com. In this example, contoso.com is also the name of the root domain, the highest-level parent domain in the tree. The root domain must be created first in a tree.

All domains in a domain tree have their directory information combined into a single directory: Active Directory. Each domain contributes a portion of its directory information to an index, the global catalog, that is hosted on designated domain controllers. By searching this index, users can locate and access shared resources, applications, and even users anywhere in the domain tree.

Notes

Two or more domain trees that do not share a contiguous namespace can be combined into a domain forest.

See Also Active Directory , namespace

domain user account

One of two types of user accounts available on a Microsoft Windows 2000- or Windows .NET Server-based network.

Overview

User accounts enable users to log on to domains or computers and access any resources in the domain for which they have appropriate permissions. This is in contrast to local user accounts, which are used only for logging on to a specific machine (such as a member server) and accessing resources on that machine.

Domain user accounts are created in Active Directory directory service and stored in organizational units (OUs). Domain user account information is replicated to all domain controllers in a domain using directory replication. This replication enables the user to quickly and easily log on from any part of the domain.

You create domain user accounts using the administrative tool called Active Directory Users and Computers, a snap-in for the Microsoft Management Console (MMC). You can create domain user accounts in the default Users OU or in any other OU that you have created in Active Directory.

Notes

Windows 2000 and Windows .NET Server also include a number of built-in accounts that simplify the task of administering users on a network. The two built-in user accounts are the Administrator and Guest accounts.

Domain Users

A built-in group on Microsoft Windows NT, Windows 2000, and Windows .NET Server networks.

Overview

The Domain Users group simplifies administration of users on the network. It is a global group and does not have any preassigned system rights. Its initial membership is empty until ordinary network users are created for the domain. User accounts that are added to this group gain the rights and privileges that are assigned to ordinary users in the network, such as the right to log on over the network. All ordinary users on the network should be members of this group.

Notes

On Windows 2000- and Windows .NET Server-based networks, the Domain Users group is created by default in the Users organizational unit (OU) within Active Directory.

See Also built-in group

DoS

Stands for Denial of Service, any attack conducted against a system that tries to prevent legitimate users from accessing the system.

See Also denial of service (DoS)

DOS

Stands for Disk Operating System, which is short for Microsoft Disk Operating System (MS-DOS), the venerable operating system created by Microsoft Corporation for the first IBM personal computer in 1981.

See Also MS-DOS

down

The state of a network when some or all network communications are disrupted.

Overview

Common reasons for networks being down include a break or fault in network cabling, a failed hub, switch, or router, a crashed server, a power failure, and misconfigured network settings.

Indications that the network might be down include users being unable to log on or access shared resources, network applications timing out, and diagnostic utilities such as ping failing to reach remote hosts.

See Also network troubleshooting

drain wire

An uninsulated wire included in shielded cabling that runs the length of some coaxial cabling or shielded twisted-pair (STP) cabling.

Overview

The drain wire makes contact with the foil sleeve or mesh along the wire. The externally exposed portion of the drain wire should be connected to a secure ground connection. This ensures that the wire is properly grounded and that the shielding in the wire operates effectively. It also helps to maintain the two ends of the wire at the same voltage with respect to ground. If voltage differences form between the ends of a network cable, they can lead to a sudden voltage surge or discharge that can damage attached networking devices.

See Also coaxial cabling ,shielded twisted-pair (STP) cabling

drop

Another name for a wall plate or some other receptacle for connecting workstations to a local area network (LAN).

Overview

For example, a network administrator might say, "This room has 24 drops, and 6 are still available." This means that there are 24 wall plate connections on the walls of the room, and 18 of them have drop cables attached to them to connect them to computers in the room. The other end of the drops usually terminates at a patch panel in the wiring closet. Another name for a drop is a LAN drop.

See Also premise cabling ,wall plate

drop cable

In Standard Ethernet networks, a cable connecting a computer's network interface card (NIC) to a transceiver attached to a thicknet cable.

Overview

In Standard Ethernet networks, a drop cable is also called a transceiver cable. More generally, a drop cable is any short cable connecting a computer's NIC to a wall plate. Drop cables allow computers to be easily disconnected and reconnected from the network so that you can move them around in the room. Drop cables are generally needed because horizontal cabling connecting patch panels in wiring closets terminates at wall plates in the work areas, but computers in the work areas are distributed throughout the entire room. In a more permanent networking configuration, wall plates might be located on floors and very short drop cables might be used to connect the computers to the network.

See Also Ethernet

Dr. Watson

A Microsoft Windows utility that intercepts software faults and provides the user with information on which software faulted and why the fault occurred.

Overview

In earlier versions of Windows, this information was terse and cryptic, but it was greatly expanded and reorganized in the version of Dr. Watson included with Windows 98, as shown in the screen capture. However, this information is usually not helpful to the person running the software. Dr. Watson is primarily of interest to the providers of the software to determine what caused the software to crash. A piece of software that frequently generates Dr. Watson messages can be considered buggy, and you should contact your software vendor for a fix or a replacement.

Dr. Watson. A report generated by Dr. Watson.

DS

Stands for Differentiated Services, a system for service classification of network traffic.

See Also Differentiated Services (DS)

DS-0

Also known as DS0, stands for Digital Signal Zero, a transmission standard for digital telecommunications.

Overview

DS-0 defines a transmission rate of 64 Kbps and can carry either a single voice channel or data. Telecommunication carriers transmit digital signals in multiples of DS-0 called DS-1, DS-2, and so on. These multiples differ depending on whether you are dealing with the T-carrier system of North America or the E-carrier system of Europe and other continents. The following table lists the common DS-series transmission rates and their T-series or E-series equivalents (when defined). For example, you can see that a T1 data transmission is equivalent to 24 DS-0 transmissions multiplexed together and can transmit data at a rate of 1.544 Mbps.
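A quick calculation (a standard breakdown, not something stated in the table below) shows where the 1.544 Mbps figure for DS-1 comes from:

payload_kbps = 24 * 64              # 24 DS-0 channels of 64 Kbps each
framing_kbps = 8                    # T1 framing overhead
print(payload_kbps + framing_kbps)  # 1544 Kbps, that is, 1.544 Mbps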

DS-Series Transmission Rates

DS Type    Multiple of DS0    Data Rate        T-Series    E-Series
DS0        1                  64 Kbps          N/A         N/A
DS1        24                 1.544 Mbps       T1          N/A
N/A        32                 2.048 Mbps       N/A         E1
DS1C       48                 3.152 Mbps       N/A         N/A
DS2        96                 6.312 Mbps       T2          N/A
N/A        128                8.448 Mbps       N/A         E2
N/A        512                34.368 Mbps      N/A         E3
DS3        672                44.736 Mbps      T3          N/A
N/A        2048               139.264 Mbps     N/A         E4
DS4        4032               274.176 Mbps     N/A         N/A
N/A        8192               565.148 Mbps     N/A         E5

DS-1

Also known as DS1, a transmission standard for digital telecommunications.

Overview

A DS-1 circuit consists of 24 DS-0 channels multiplexed together. The bandwidth of a DS-1 circuit is 1.544 Mbps, and such a circuit is commonly called a T1 line. DS-1 is also a typical signaling rate for frame relay links.

Implementation

DS-1 circuits can be implemented singly or multiplexed together to form fatter pipes for carrying data faster. Multiple DS-1 circuits can be bonded together to create NxDS-1 circuits. Here N can be from 2 up to 8, providing a maximum throughput of 12 Mbps. In this arrangement, the first DS-1 circuit is the one responsible for managing the link, and if this circuit goes down the entire link fails.
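As a rough guide (actual usable throughput is slightly lower because of framing overhead), the line rates for a few values of N work out as follows:

ds1_mbps = 1.544
for n in (2, 4, 8):
    print(n, "x DS-1 =", round(n * ds1_mbps, 3), "Mbps")   # 8 x DS-1 is roughly 12 Mbps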

DS-1. Implementing DS-1 and NxDS-1 on a Frame Relay network.

NxDS-1 is typically used when you do not expect your data requirements to radically increase for several years. If you are expecting much faster growth in data rates, consider moving to DS-3 or fractional DS-3 instead of NxDS-1.

Issues

When you want to upgrade from DS-1 to NxDS-1, you typically need to give up your old DS-1 circuits and obtain all new ones. This is because carriers terminate single and multiplexed DS-1 circuits differently at their end. Furthermore, it is a good idea to upgrade immediately to the highest value of N you might require, because adding DS-1 circuits later can cause problems if these circuits follow different paths and experience different latencies at the multiplexor.

You may also need to upgrade your router if you move from DS-1 to NxDS-1. This is because the V.35 interface on most Cisco routers only supports speeds up to 3 Mbps, which corresponds to 2xDS-1. For higher speeds, obtain a router with High Speed Serial Interface (HSSI).

See Also DS-0, DS-3, frame relay, High-Speed Serial Interface (HSSI), T1, V.35

DS-3

Also known as DS3, a transmission standard for digital telecommunications.

Overview

A DS-3 circuit consists of 672 DS-0 channels multiplexed together. The bandwidth of a DS-3 circuit is 44.736 Mbps, and such a circuit is commonly called a T3 line.

Implementation

DS-3 circuits are typically connected to enterprise local area networks (LANs) using an Asynchronous Transfer Mode (ATM) switch. DS-3 is rarely available for frame relay links and is usually reserved for dedicated leased lines only.

DS-3 is usually available in several forms, including full DS-3, fractional DS-3 (in which only a portion of the full bandwidth is provisioned), and burstable DS-3 (in which billing is based on actual usage).

Notes

Most companies do not have a need for DS-3, and a slower (and cheaper) alternative is NxDS-1, which uses multiplexed DS-1 circuits.

See Also DS-0, DS-1, frame relay

DSL

Stands for Digital Subscriber Line, a group of broadband telecommunications technologies supported over copper local loop connections.

See Also Digital Subscriber Line (DSL)

DSLAM

Stands for Digital Subscriber Line Access Multiplexer, the DSL termination device at a telco central office (CO).

See Also Digital Subscriber Line Access Multiplexer (DSLAM)

DSMigrate

Stands for Directory Service Migration Tool, a tool for migrating information from Novell NetWare networks to Microsoft Windows 2000.

See Also Directory Service Migration Tool (DSMigrate)

DSML

Stands for Directory Service Markup Language, a specification based on Extensible Markup Language (XML) that enables different directory applications to share information.

See Also Directory Service Markup Language (DSML)

DSMN

Stands for Directory Service Manager for NetWare, an optional Microsoft Windows 2000 utility for managing directory information stored on NetWare servers.

See Also Directory Service Manager for NetWare (DSMN)

DSN

Stands for data source name, a unique name used to create a data connection to a database using open database connectivity (ODBC).

See Also data source name (DSN)

DSSS

Stands for Direct Sequence Spread Spectrum, a combination of two transmission technologies, direct sequencing and spread spectrum, used in wireless networking and cellular communications.

See Also Direct Sequence Spread Spectrum (DSSS)

DSTP

Stands for Data Space Transfer Protocol, a new protocol for rapidly transporting large amounts of information.

See Also Data Space Transfer Protocol (DSTP)

DSU

Stands for Data Service Unit, a digital communication device that works with a Channel Service Unit (CSU) to connect a local area network (LAN) to a telecommunications carrier service.

See Also Data Service Unit (DSU)

DTD

Stands for document type definition, a file that defines the allowed structure of an Extensible Markup Language (XML) document.

See Also document type definition (DTD)

DTE

Stands for data terminal equipment, any device that is a source of data transmission over a serial telecommunications link.

See Also data terminal equipment (DTE)

DTMF

Stands for Dual Tone Multiple Frequency, the audio signaling method used by Touch-Tone phones.

See Also Dual Tone Multiple Frequency (DTMF)

DTR

Stands for Dedicated Token Ring, a high-speed Token Ring networking technology.

See Also Dedicated Token Ring (DTR)

dual boot

A computer that can boot one of several operating systems by means of a startup menu.

Overview

An example of a dual boot configuration is a machine on which Windows 98 and then Windows 2000 have been installed. The user can use the Windows NT boot loader menu to choose which operating system to run at startup.

Windows 2000, Windows XP, and Windows .NET Server can be dual booted with other operating systems, but Microsoft Corporation neither recommends nor supports this configuration. Dual boot systems are typically used in hobbyist and test networks in which a variety of operating systems is needed to test different networking functions, or when fewer machines are available than are needed to perform the tasks.

Notes

The Windows 2000, Windows XP, and Windows .NET Server boot loader menus can include up to 10 operating systems.

See Also boot

Dual Tone Multiple Frequency (DTMF)

The audio signaling method used by Touch-Tone phones.

Overview

Each key pressed on a Touch-Tone phone generates two simultaneous audible tones of different frequencies, as shown in the table below. This scheme allows faster and more accurate tone recognition (and hence recognition of which number was dialed) than would be possible with single tones.

Frequencies of DTMF Signals

                697 Hz    770 Hz    852 Hz    941 Hz

    1209 Hz     1         4         7         *

    1336 Hz     2         5         8         0

    1477 Hz     3         6         9         #
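
As a rough illustration of the two-tone scheme shown in the table above, the following Python sketch synthesizes the DTMF signal for a key by summing the two corresponding sine waves. It is only a sketch: the sample rate, duration, and amplitude chosen here are arbitrary illustrative values, not part of any telephony standard.

    import math

    # Row and column frequencies (Hz) for each key, taken from the table above.
    DTMF = {
        "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
        "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
        "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
        "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
    }

    def dtmf_samples(key, duration=0.2, sample_rate=8000):
        """Return audio samples for one key: the sum of its two sine tones."""
        low, high = DTMF[key]
        n = int(duration * sample_rate)
        return [0.5 * math.sin(2 * math.pi * low * t / sample_rate)
                + 0.5 * math.sin(2 * math.pi * high * t / sample_rate)
                for t in range(n)]

    samples = dtmf_samples("5")   # the "5" key mixes 770 Hz and 1336 Hz
    print(len(samples), min(samples), max(samples))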

DTMF was developed by AT&T; the term Touch-Tone was originally an AT&T trademark.

Uses

Microsoft Corporation's Telephony Application Programming Interface (TAPI) can recognize and interpret DTMF signals, allowing Microsoft Windows-based applications to integrate with telephony. Some networking vendors also supply hardware devices called DTMF/ASCII converters, which convert DTMF tones directly into ASCII characters that can then be fed as input into a program that routes telephone calls accordingly.

See Also Telephony Application Programming Interface (TAPI)

duplex

A telecommunications term referring to bidirectional communication.

Overview

In full-duplex communication, both stations can send and receive at the same time, which usually requires two communication channels. You can also achieve full-duplex communication over a single channel by using a multiplexing technique in which signals traveling in different directions are placed into different time slots; the disadvantage of this technique is that it cuts the maximum possible transmission speed in half.

In half-duplex communication, only one station can transmit at any given time while the other station receives the transmission. The opposite of duplex communication is simplex communication, which can occur only in one direction.

DVMRP

Stands for Distance Vector Multicast Routing Protocol, a multicast routing protocol based on the spanning tree algorithm.

See Also Distance Vector Multicast Routing Protocol (DVMRP)

DWDM

Stands for dense wavelength division multiplexing, a multiplexing technology for achieving extremely high data rates over fiber-optic cabling.

See Also dense wavelength division multiplexing (DWDM)

dynamic disk

In Microsoft Windows 2000, Windows XP, and Windows .NET Server, a new kind of disk management technology for hard disks.

Overview

Dynamic disks differ from basic disks, which are disk systems that function like those in earlier versions of Windows (basic disks are also supported by Windows 2000, Windows XP, and Windows .NET Server). Dynamic disks support advanced features such as online management, disk reconfiguration, and fault tolerance.

While basic disks are divided into partitions, a dynamic disk is divided into volumes. A simple volume consists of one or more regions of space on a dynamic disk (these regions need not be contiguous). Although a multidisk system using basic disks has a single partition table describing the data structure of all disks on the system, things are different with dynamic disks. When a system contains multiple dynamic disks, each dynamic disk reserves 1 MB of space at the end of the drive, and this space stores configuration information about all of the disks in the system. This provides a measure of fault tolerance, for if one disk fails, the configuration information about the other disks in the system is not lost.

When changes are made to a dynamic disk, such as extending a volume to add more storage space, no reboot is required for the changes to take effect (with basic disks, the system must be rebooted after changes are made). You can also use dynamic disks in hot-swappable systems to add or remove drives without requiring a reboot.

Uses

Dynamic disks are intended for use on servers where high availability and fault tolerance are essential. Workstations gain little by using dynamic instead of basic disks, and dynamic disks are not supported on laptops, where they would bring no benefit.

Dynamic disks are not supported on removable drives such as Zip and Jaz drives, USB and FireWire drives, SyQuest drives, and so on.

Dynamic disks also cannot be used on dual-boot systems, as earlier versions of Windows cannot recognize them even if they are formatted using FAT or FAT32.

Implementation

You create and manage dynamic disks in three ways:

Note that the only way to convert a dynamic disk back to a basic disk is to back up all your information, reinitialize the disk by creating partitions, and then restore your data.

See Also basic disk, partition (disk), storage, volume

dynamic DNS (DDNS)

A feature of the Domain Name System (DNS) that enables DNS clients to automatically register their DNS names with name servers.

Overview

Ordinary DNS must be administered manually, with administrators typically making changes directly to zone files on name servers. Information in zone files needs to be updated whenever a new host appears on the network or a host leaves, so the more the network changes, the more work this entails for DNS administrators. Once a zone file is updated on a name server, the updated information is then propagated to other name servers by zone transfers, which are typically scheduled to occur periodically.

Dynamic DNS (DDNS) lets changes to zone files be made automatically instead of manually, saving DNS administrators the labor-intensive work of keeping zone information up to date for all hosts on their network. To accomplish this, DDNS uses a process called dynamic update, outlined in RFC 2136.
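
To give a concrete sense of what an RFC 2136 dynamic update looks like on the wire, the following Python sketch uses the third-party dnspython library to register an A record with a name server. The zone name, host name, address, and server IP shown are hypothetical placeholders, and the target server must already be configured to accept dynamic updates from this client (for example, via an authorized address list or TSIG keys).

    import dns.update
    import dns.query

    # Build an RFC 2136 dynamic update message for the (hypothetical) zone example.com.
    update = dns.update.Update("example.com")

    # Register (or overwrite) an A record for host1.example.com with a 300-second TTL.
    update.replace("host1", 300, "A", "192.0.2.25")

    # Send the update to the zone's primary name server (hypothetical address) over TCP
    # and print the response code (0, NOERROR, means the update was accepted).
    response = dns.query.tcp(update, "192.0.2.1", timeout=10)
    print(response.rcode())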

Uses

DDNS in Microsoft Windows 2000, Windows XP, and Windows .NET Server is used in two scenarios:

Implementation

DDNS is supported by the Windows 2000 and Windows .NET Server implementations of DNS and greatly simplifies the administration of DNS zone information in Windows 2000 and Windows .NET Server networks. DDNS is recommended for Windows 2000- and Windows .NET Server-based networks because the Active Directory directory service uses DNS as its locator service for finding hosts on the network, and because DDNS reduces the chance of the errors that occur when DNS is administered manually.

Dynamic update lies at the heart of Active Directory because domain names in Windows 2000 and Windows .NET Server are also DNS names. For example, northwind.microsoft.com can be both a legal DNS name and the name of a Windows 2000 domain. When DNS is integrated with Active Directory and configured for dynamic updates, the root zone and forward lookup zones are created and configured automatically for each domain, but administrators must enable and manage reverse lookup zones.

DDNS is similar to ordinary DNS in that zone update operations occur only on primary or master servers. DDNS, however, allows primary servers to receive updates initiated by a specified list of "authorized servers," which can include secondary zone servers, domain controllers, and other servers that perform name registration services, such as Windows Internet Name Service (WINS) or DHCP servers.

Most name servers on the Internet either do not yet support or are configured not to support DDNS because of the extra security issues involved when using it. DDNS is currently most popular in corporate networks using Windows 2000 and Active Directory.

Notes

You can use the DNS Manager snap-in for the Microsoft Management Console (MMC) to enable Active Directory integration on an existing DNS server. The zone file information will be written into Active Directory.

Note that by default, DDNS on Windows 2000 and Windows .NET Server does not automatically scavenge (purge) old resource records for clients that are no longer on the network; you need to manually enable and schedule scavenging.

See Also Active Directory, name server, zone

Dynamic Host Configuration Protocol (DHCP)

A protocol that enables dynamic configuration of Internet Protocol (IP) address information for hosts on an internetwork.

Overview

Dynamic Host Configuration Protocol (DHCP) is an extension of the bootstrap protocol (BOOTP). DHCP is implemented as a client-server protocol that uses DHCP servers and DHCP clients.

Dynamic Host Configuration Protocol (DHCP). How a DHCP client leases an IP address from a DHCP server.

A DHCP server is a machine that runs a service that can lease out IP addresses and other Transmission Control Protocol/Internet Protocol (TCP/IP) information to any DHCP client that requests them. For example, on Microsoft Windows 2000 servers, you can install the Microsoft DHCP Server service to perform this function. The DHCP server typically has a pool of IP addresses that it is allowed to distribute to clients, and these clients lease an IP address from the pool for a specific period of time, usually several days. Once the lease is ready to expire, the client contacts the server to arrange for renewal.

DHCP clients are client machines that run special DHCP client software enabling them to communicate with DHCP servers. All versions of Windows include DHCP client software, which is installed when the TCP/IP protocol stack is installed on the machine.

DHCP clients obtain a DHCP lease for an IP address, a subnet mask, and various DHCP options from DHCP servers in a four-step process:

  1. DHCPDISCOVER: The client broadcasts a request for a DHCP server.

  2. DHCPOFFER: DHCP servers on the network offer an address to the client.

  3. DHCPREQUEST: The client broadcasts a request to lease an address from one of the offering DHCP servers.

  4. DHCPACK: The DHCP server whose offer the client accepted acknowledges the request, assigns the client any configured DHCP options, and updates its DHCP database. The client then initializes and binds its TCP/IP protocol stack and can begin network communication.

DHCP lease renewal consists only of steps 3 and 4, and renewal requests are made when 50 percent of the DHCP lease time has elapsed.
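
The following minimal Python sketch simulates the four-step lease negotiation and the renewal check at 50 percent of the lease time. It models the messages as simple objects rather than real UDP broadcasts on ports 67 and 68, and the addresses, MAC address, and lease duration shown are hypothetical illustrative values, not defaults of any DHCP implementation.

    # Simplified simulation of the DHCP DISCOVER/OFFER/REQUEST/ACK exchange.
    # This is an illustrative sketch, not a real DHCP implementation.

    class DhcpServer:
        def __init__(self, pool, lease_time=8 * 3600):
            self.pool = list(pool)          # addresses available for lease
            self.leases = {}                # client MAC -> leased address
            self.lease_time = lease_time    # lease duration in seconds

        def offer(self, mac):               # step 2: DHCPOFFER
            return self.leases.get(mac) or self.pool[0]

        def ack(self, mac, addr):           # step 4: DHCPACK
            if addr in self.pool:
                self.pool.remove(addr)
            self.leases[mac] = addr
            return {"address": addr, "subnet_mask": "255.255.255.0",
                    "lease_time": self.lease_time}

    class DhcpClient:
        def __init__(self, mac):
            self.mac = mac
            self.lease = None

        def obtain_lease(self, servers):
            offers = [(s, s.offer(self.mac)) for s in servers]  # steps 1-2
            server, addr = offers[0]                             # step 3: accept one offer
            self.lease = server.ack(self.mac, addr)              # step 4
            return self.lease

        def renewal_due(self, elapsed):
            # Renewal (steps 3 and 4 only) is attempted at 50% of the lease time.
            return elapsed >= self.lease["lease_time"] * 0.5

    server = DhcpServer(pool=["192.168.0.10", "192.168.0.11"])
    client = DhcpClient(mac="00-11-22-33-44-55")
    print(client.obtain_lease([server]))         # leased address plus options
    print(client.renewal_due(elapsed=4 * 3600))  # True at half of an 8-hour lease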

Implementation

When you implement DHCP on a network, you should consider the following:

Marketplace

Although network operating system platforms such as Microsoft Windows 2000 and Novell NetWare include their own DHCP services, enterprise network architects may also want to consider self-contained DHCP applications from third-party vendors. Some of these third-party DHCP products include additional features, such as query and reporting tools, that make them attractive in an enterprise environment. Examples of such products include Shadow IPserver from Network TeleSystems (recently acquired by Efficient Networks), IP AddressWorks from Process Software, NetID from Nortel Networks, Network Registrar from Cisco Systems, and Meta IP from Check Point Software Technologies.

For More Information

Find out more about DHCP at www.dhcp.org

See Also BOOTP, IP address

Dynamic HTML (DHTML)

A proposed World Wide Web Consortium (W3C) standard developed by Microsoft Corporation for creating interactive multimedia Web content.

Overview

Developers can use Dynamic HTML to make Web pages look and behave more like typical desktop applications. Dynamic HTML supports features such as

See Also cascading style sheets (CSS)

dynamic-link library (DLL)

A file containing executable routines that can be loaded on demand by an application.

Overview

Dynamic-link libraries (DLLs) offer the advantage of providing standard services for many different calling applications, and they simplify and modularize application development by providing component-based services. DLLs are loaded into RAM only when needed by the calling application, which reduces the memory requirements of large applications. DLLs are files that have the extension .dll.
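
To illustrate the on-demand loading described above, the short Python sketch below uses the standard ctypes module to load a Windows system DLL only when it is needed and then call a routine exported from it. It assumes it is run on Windows, where kernel32.dll and its GetTickCount function are available; it is a minimal illustration, not a general pattern for DLL use.

    import ctypes

    # Nothing from kernel32.dll is mapped into the process until this call,
    # which loads the library on demand.
    kernel32 = ctypes.WinDLL("kernel32")

    # Call an exported routine from the DLL: GetTickCount returns the number
    # of milliseconds since the system was started.
    ticks = kernel32.GetTickCount()
    print("Milliseconds since boot:", ticks)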

See Also application programming interface (API)

dynamic packet filtering

The process of filtering packets according to real-time criteria.

Overview

Dynamic packet filtering is a feature of firewalls, routers, and proxy servers such as Microsoft Proxy Server. Using dynamic packet filtering, a system can

Implementation

In Microsoft Proxy Server, dynamic packet filtering involves two components:

In a typical scenario, a client running the Winsock Proxy client might attempt to connect to an Internet server using Telnet. The Winsock Proxy client intercepts the Telnet connection request and remotes the request to the Winsock Proxy server, which verifies that the client has the proper Microsoft Windows NT permissions to use Telnet to access servers on the Internet and then opens a local socket. The Winsock Proxy server informs the Packet Filter Manager that an outbound connection request from the socket to a remote Telnet service has been approved, and the Packet Filter Manager orders the Packet Filter Driver to open the socket and the Winsock Proxy server to start a Telnet session on behalf of the client. When the Winsock Proxy server determines that the client has closed the Telnet session, it tells the Packet Filter Manager to close the socket, thus blocking any further packets from the remote system.
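
The following sketch is not Microsoft Proxy Server code; it is a generic, minimal Python illustration of the idea behind dynamic packet filtering, in which a filter entry is opened when a session is approved and closed again when the session ends. All addresses and port numbers shown are hypothetical.

    # Generic sketch of dynamic (stateful) packet filtering: rules are added
    # when a session is approved and removed when the session closes.

    class DynamicPacketFilter:
        def __init__(self):
            self.open_sessions = set()   # (client_ip, server_ip, server_port)

        def approve_session(self, client_ip, server_ip, server_port):
            # Called when the proxy or firewall approves an outbound connection.
            self.open_sessions.add((client_ip, server_ip, server_port))

        def close_session(self, client_ip, server_ip, server_port):
            # Called when the session ends; further packets will be dropped.
            self.open_sessions.discard((client_ip, server_ip, server_port))

        def allow_inbound(self, src_ip, dst_ip, src_port):
            # Inbound traffic is allowed only if it belongs to an open session.
            return (dst_ip, src_ip, src_port) in self.open_sessions

    fw = DynamicPacketFilter()
    fw.approve_session("10.0.0.5", "203.0.113.7", 23)       # client opens Telnet
    print(fw.allow_inbound("203.0.113.7", "10.0.0.5", 23))  # True while session is open
    fw.close_session("10.0.0.5", "203.0.113.7", 23)         # session ends
    print(fw.allow_inbound("203.0.113.7", "10.0.0.5", 23))  # False; packets blocked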

See Also firewall ,packet filtering ,proxy server ,router

dynamic routing

A routing mechanism for dynamically exchanging routing information among routers on an internetwork.

Overview

Dynamic routing operates using a dynamic routing protocol, such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF) Protocol. Routers that use dynamic routing are sometimes called dynamic routers.

For dynamic routing to work, the routing protocol must be installed on each router in the internetwork. The routing table of one router is manually seeded with routing information for the first hop, and then the routing protocol takes over and dynamically builds the routing table for each router. Dynamic routers periodically exchange their routing information so that if the network is reconfigured or a router goes down, the routing tables of each router are automatically modified accordingly.
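
As a rough illustration of how such periodic exchanges build routing tables, the following Python sketch performs a simplified distance-vector update of the kind used, in much more elaborate form, by protocols such as RIP. It is a teaching sketch with hypothetical router names, networks, and hop counts, not an implementation of any particular routing protocol.

    # Simplified distance-vector exchange: a router merges a neighbor's
    # advertised routing table into its own, keeping the lowest hop count.

    def merge_routes(own_table, neighbor_table, cost_to_neighbor, neighbor_name):
        """own_table / neighbor_table map destination -> (hop_count, next_hop)."""
        updated = dict(own_table)
        for destination, (hops, _) in neighbor_table.items():
            candidate = hops + cost_to_neighbor
            if destination not in updated or candidate < updated[destination][0]:
                updated[destination] = (candidate, neighbor_name)
        return updated

    # Router A initially knows only its directly connected networks.
    router_a = {"10.1.0.0/16": (0, "direct"), "10.2.0.0/16": (0, "direct")}
    # Router B advertises its table to A (A reaches B at a cost of 1 hop).
    router_b = {"10.2.0.0/16": (0, "direct"), "10.3.0.0/16": (0, "direct")}

    router_a = merge_routes(router_a, router_b, cost_to_neighbor=1, neighbor_name="B")
    print(router_a)   # 10.3.0.0/16 is now reachable via B; existing routes are unchanged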

Advantages and Disadvantages

Dynamic routers are much simpler to administer than static routers, but they are sometimes less secure because routing protocol information can be spoofed. If the network is reconfigured or a router goes down, it takes a certain period of time for this information to propagate among the various routers on the network; this reconfiguration process is usually referred to as convergence. However, getting a dynamic router up and running is often as simple as connecting the interfaces and turning it on; routes are discovered automatically through communication with other routers on the network. Dynamic routers are also fault tolerant: when a router fails, the other routers soon learn about it and adjust their routing tables accordingly to maintain communications across the network.

Using dynamic routing protocols also creates additional network traffic due to routing table updates and exchanges, and different dynamic routing protocols offer their own advantages and disadvantages in this regard.

Dynamic routers cannot exchange information with static routers. To configure static and dynamic routers to work together on the same internetwork, you must add manual routes to the routing tables of both types of routers.

Notes

You can configure a multihomed Microsoft Windows 2000 or Windows .NET Server server as a dynamic RIP router by selecting Enable IP Forwarding on the Routing tab of the Transmission Control Protocol/Internet Protocol (TCP/IP) property sheet and then using the Services tab of the Network property sheet to add the RIP for Internet Protocol (IP) service to the server. Another example of a dynamic router is a multihomed computer running Windows 2000 Server with Routing and Remote Access Service (RRAS) and either RIP or OSPF configured.

See Also router ,routing table ,static routing

dynamic routing protocol

A protocol that enables dynamic routing to be used to simplify management of a routed network.

Overview

If we focus on Internet Protocol (IP) routing, which is the standard for the Internet and most corporate networks, several kinds of dynamic routing protocols can be deployed. These routing protocols can be classified in a hierarchical scheme as shown in the illustration.

First, dynamic routing protocols can be one of two types:

IGPs can be further classified according to the type of algorithm used to build and distribute routing table information. Specifically, IGPs can employ the

See Also autonomous system (AS) , Border Gateway Protocol (BGP) , Enhanced Interior Gateway Routing Protocol (EIGRP), exterior gateway protocol (EGP), interior gateway protocol (IGP), Interior Gateway Routing Protocol (IGRP), link state routing algorithm, Open Shortest Path First (OSPF), Routing Information Protocol (RIP)

dynamic volume

In Microsoft Windows 2000, Windows XP, or Windows .NET Server, a volume created on a dynamic disk.

Overview

Windows 2000, Windows XP, and Windows .NET Server support several kinds of dynamic volumes including

You can create dynamic volumes using the Disk Management portion of the Computer Management administrative tool. You can create dynamic volumes only on dynamic disks.

See Also basic disk, basic volume, RAID, storage, volume


