Stands for Logical Link Control and Adaptation Layer Protocol, the data-link layer protocol for Bluetooth wireless networking.
See Also Logical Link Control and Adaptation Layer Protocol (L2CAP)
Stands for Layer 2 Forwarding, a media-independent tunneling protocol developed by Cisco Systems.
See Also Layer 2 Forwarding (L2F)
Stands for Layer 2 Tunneling Protocol, a wide area network (WAN) protocol used for virtual private networking (VPN).
See Also Layer 2 Tunneling Protocol (L2TP)
Stands for Layer 2 Tunneling Protocol (L2TP) over Internet Protocol Security (IPsec), which is the normal way of using L2TP for creating secure virtual private networks (VPNs).
See Also Layer 2 Tunneling Protocol (L2TP)
Stands for Local Area Data Channel, a telco service for transmitting data using line drivers.
See Also Local Area Data Channel (LADC)
An emerging optical switching technology.
Overview
Lambda switching is an emerging technology that represents the next stage in the development of high-speed switched optical networks. Lambda switching uses a combination of dense wavelength division multiplexing (DWDM), multiprotocol label switching (MPLS), and Resource Reservation Protocol (RSVP) to automatically connect the endpoints of an optical networking system without the setup and configuration required by DWDM by itself. Light paths between endpoints can be established on an ad hoc basis, providing better mechanisms for managing traffic flows through fiber-optic networks.
Like DWDM, lambda switching is likely to find its place of deployment mainly in long-haul fiber managed by inter-exchange carriers (IXCs). Several vendors are developing lambda switching equipment called optical cross-connects (OXCs), with AT&T being one major player in this arena. Deployment of lambda switching technologies should eventually provide bandwidth on demand for the enterprise wide area network (WAN) and should speed up the provisioning process of OC-48 and higher WAN links.
See Also dense wavelength division multiplexing (DWDM), inter-exchange carrier (IXC), Multiprotocol Label Switching (MPLS), Resource Reservation Protocol (RSVP)
Stands for local area network, typically a group of computers located in the same room, on the same floor, or in the same building, that are connected to form a single network.
See Also local area network (LAN)
Stands for LAN Emulation, a technology that enables local area network (LAN) traffic such as Ethernet frames to be carried over an Asynchronous Transfer Mode (ATM) network.
See Also LAN Emulation (LANE)
A technology that enables local area network (LAN) traffic such as Ethernet frames to be carried over an Asynchronous Transfer Mode (ATM) network.
Overview
LAN Emulation (LANE) was designed to allow connectionless traffic on Ethernet networks to be transported over connection-oriented ATM backbones. LANE accomplishes this by fragmenting and encapsulating variable-length Ethernet frames into fixed-length ATM cells and by configuring mappings between ATM and Ethernet addresses. Using LANE, you can use ATM as a backbone transport for connecting widely separated Ethernet LANs. Since LANE operates at the data-link layer, it can use ATM to connect LANs using any network-layer protocol, such as Internet Protocol (IP) and Internetwork Packet Exchange (IPX). LANE can also be used for connecting Token Ring networks using an ATM backbone.
Implementation
LANE is implemented using Emulated LANs (ELANs), which are essentially subsets of an ATM cloud in which all stations see each other as neighbors as if they were on the same LAN. An ELAN is thus a kind of "virtual" LAN, but do not confuse LANE ELANs with Virtual LANs (VLANs) in Ethernet switching; they are two different technologies. Stations on an ELAN are known in LANE terminology as LAN Emulation Clients (LECs) and can include servers with special ATM network interface cards (NICs), Ethernet switches such as Cisco Catalyst switches, or routers such as Cisco 7000 series routers. Two LECs on the same ELAN can communicate with each other using LANE, but LECs on different ELANs must communicate through a router.
LAN Emulation (LANE). The architecture of a simple LANE implementation.
To implement LANE on a combined ATM and Ethernet network, several special components must be present:
LAN Emulation Server (LES): This component maintains a list of media access control (MAC)-to-ATM address mappings for LECs on the particular ELAN it manages. In other words, an LES maps ATM endpoint addresses (which are 20-byte Network Service Access Point [NSAP] addresses) to non-ATM endpoint Ethernet MAC addresses. The LES is connected to the LECs on the ELAN it manages using a point-to-multipoint ATM virtual circuit (VC).
LAN Emulation Configuration Server (LECS): This component tells new LECs that appear on the network how to find the LES for their particular ELAN. A typical LANE deployment will have one LECS for the entire ATM cloud and as many LESs as the cloud is divided into ELANs.
Broadcast and Unknown Server (BUS): Since ATM is a connection-oriented technology that does not inherently support broadcasts, some way of implementing broadcasts is required for interoperability with Ethernet. The BUS accomplishes this, and there is one BUS for each ELAN to handle broadcasts from LECs on that ELAN.
A basic LANE communication session between two LECs on an ELAN takes place something like this: when LEC#1 wants to communicate with LEC#2, it first contacts the LES for that ELAN and requests the ATM address that corresponds to the MAC address of LEC#2. The LES responds to the request, and LEC#1 then establishes an ATM virtual circuit with LEC#2 and begins to transmit.
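The address resolution step in this exchange can be sketched in a few lines of Python. This is only an illustration of the lookup the LES performs; the class name, method names, and the MAC and NSAP addresses below are all invented:

```python
# Sketch of LANE address resolution. The LES keeps MAC-to-ATM (NSAP)
# mappings for one ELAN; a LEC queries it before opening a virtual
# circuit to another LEC. All names and addresses are illustrative.

class LanEmulationServer:
    """Toy LES: maps Ethernet MAC addresses to 20-byte ATM NSAP addresses."""

    def __init__(self):
        self.mappings = {}  # MAC address -> ATM NSAP address

    def register(self, mac, nsap):
        # Called when a LEC joins the ELAN.
        self.mappings[mac] = nsap

    def resolve(self, mac):
        # Handle a resolution request: return the ATM address, or None.
        return self.mappings.get(mac)

les = LanEmulationServer()
les.register("00:A0:C9:14:C8:29",
             "47.0091.8100.0000.0060.3E64.FD01.0060.3E64.FD01.00")

# LEC#1 resolves LEC#2's MAC before setting up an ATM virtual circuit.
atm_addr = les.resolve("00:A0:C9:14:C8:29")
print(atm_addr is not None)  # True -> LEC#1 can now establish the VC
```

Once the resolution succeeds, the requesting LEC uses the returned ATM address to establish the virtual circuit and begin transmitting.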
Prospects
Just as ATM is a technology that never really established itself in the LAN and has largely been superseded by Fast Ethernet and Gigabit Ethernet (GbE), so also LANE is no longer widely used in enterprise networking. Part of this is due to ATM's complexity, but another reason has been limitations with LANE itself. The main problem is the lack of redundancy in LANE, where a single LECS for the ATM cloud and a single LES and BUS for each ELAN make for single points of failure. To resolve this, a new version of LANE called LANE2 has been developed that supports the following enhanced features:
Multiple LECS per ATM cloud, using a single master LECS that communicates with multiple slave LECS using a new protocol called LANE Network-to-Network Interface (LNNI).
Redundant LESs and BUSs on each ELAN. In LANE2, when a LEC tries to contact a LES or BUS and fails, it contacts the LECS again to find another LES or BUS to service its needs.
Notes
Microsoft Windows 2000 supports LANE and comes with a LEC that is installed automatically if an ATM network adapter is detected during startup.
See Also Asynchronous Transfer Mode (ATM), Ethernet, MAC address
The integration of local area network (LAN)-based networks that use protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP) or Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) with host systems such as IBM mainframes and AS/400 systems that use Systems Network Architecture (SNA).
Overview
SNA-based networks originally developed separately from LANs, and as a result both architectures have unique adapters, cabling, and networking protocols. Today's LANs are typically built around the Ethernet networking architecture, use TCP/IP as their networking protocol, and run over structured cabling using Category 5 (Cat5) unshielded twisted-pair (UTP) cabling. LANs can be joined together into wide area networks (WANs) using routers, which provide access to telco communications networks. By contrast, SNA networks typically employ mainframe host systems with front-end controllers connected over serial links to remote terminals. As a result of these two different architectures, many large companies have developed a two-tier network, consisting of a traditional LAN-based Ethernet network and an entirely separate SNA host-based network. However, because of the cost of maintaining separate networks, many have merged their SNA-only networks with non-SNA networks.
Early attempts at LAN-host integration involved directly connecting PCs to IBM host systems using SNA hardware adapters and SNA protocols across a dedicated SNA network. Each PC was connected to a local IBM control unit such as an IBM 3174 or IBM 5294 using coaxial or twinax cabling. Standards were developed to allow SNA and non-SNA protocols to share the same network, but networking engineers soon found that mixing SNA and TCP/IP was like mixing oil and water, especially with regard to WAN connections, in which Data Link Control (DLC) timeouts and other difficulties made network management complex.
One solution is to install a TCP/IP protocol stack directly on the mainframe host, but this often results in degradation of host performance and additional challenges in terms of IP address administration. A more workable solution is the LAN-to-SNA gateway. The gateway computer lets desktop PCs access applications and data on the mainframe host using traditional LAN protocols. TCP/IP is used to connect the desktop PC and the SNA gateway, and SNA is used to connect the SNA gateway and the mainframe host. This LAN-to-SNA gateway solution has become the de facto standard for providing host access to LAN-based PCs. An example of an SNA gateway application is Microsoft Host Integration Server, which provides LAN-to-SNA gateway services over a variety of network protocols that include NetBIOS Enhanced User Interface (NetBEUI), TCP/IP, IPX/SPX, Banyan VINES, and AppleTalk.
See Also Host Integration Server, Systems Network Architecture (SNA)
A manual switch that can be used to physically disconnect two or more local area network (LAN) segments.
Overview
LAN security switches are typically used in high-security networking environments that must meet the highest government or military security standards. For example, a network supervisor can use a LAN switch at the end of the day to physically disconnect a portion of the network that includes servers that store sensitive data, thus preventing users from accessing the servers during off hours. This is generally more convenient and safer than going into the server room and unplugging connectors from a hub.
LAN security switch. Two ways of using a LAN security switch.
LAN security switches are available for both copper cabling and fiber-optic cabling. A fiber-optic LAN security switch has a small mirror inside that rotates when you manually flip a switch or rotate a dial to open or close the connection.
LAN security switches work by creating a physical break in a circuit, thus preventing the flow of data between connected LAN segments. LAN security switches must be operated manually; you cannot operate them remotely using electronic means.
See Also network security
A physical portion of a local area network (LAN), usually separated from other portions by bridges or routers.
Overview
LANs such as Ethernet networks are often "segmented" using bridges in order to improve network performance. Segmentation improves performance by reducing the number of stations in each segment that must compete with one another for access to the network. Bridges are generally used for segmenting smaller LANs because they are cheaper than routers and require no special configuration. Bridges are smart devices that build media access control (MAC)-level routing tables that forward network traffic on the basis of each frame's destination MAC address. If a frame's destination address is a machine in the local LAN segment, bridges attached to that segment will not allow the frame to pass; this reduces unneeded network traffic in other segments attached to the bridge. In a typical scenario, you would place a bridge between your department or workgroup hub and the main network backbone to improve traffic on your local LAN segment.
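The filtering decision a bridge makes for each frame can be sketched as follows. This is a minimal illustration of the rule described above; the MAC addresses are made up:

```python
# Sketch of a bridge's forwarding decision. The bridge passes a frame
# to the backbone only when the destination is NOT known to be on the
# local segment, which keeps local traffic off the other segments.
# MAC addresses are invented for illustration.

local_segment_macs = {"aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02"}

def bridge_forwards(dest_mac):
    """Return True if the frame should cross the bridge to the backbone."""
    return dest_mac not in local_segment_macs

print(bridge_forwards("aa:bb:cc:00:00:01"))  # False: traffic stays local
print(bridge_forwards("aa:bb:cc:99:99:99"))  # True: forwarded to backbone
```

In a real bridge the `local_segment_macs` table is not configured by hand; it is learned automatically from the source addresses of frames seen on each port.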
LAN segment. Segmenting a LAN using a bridge.
See Also bridge, Ethernet, router
Another name for Ethernet switch, a multiport device based on bridging technologies that is used mainly to segment Ethernet networks.
See Also Ethernet switch
Stands for Link Access Protocol, D-channel, the data-link layer protocol for Integrated Services Digital Network (ISDN).
See Also Link Access Protocol, D-channel (LAPD)
The current configuration information for drivers and services when a user successfully logs on to a Microsoft Windows 2000 or Windows NT system.
Overview
In Windows 2000, the configuration information from the last successful logon is copied to the LastKnownGood control set in the registry. You can use this configuration to recover your system if you later find that you cannot log on to the system. This may occur, for example, if you add or upgrade a driver that is incorrect for your particular hardware configuration. If you modify your system and are unable to log on again, you can restart your system, press F8 at the beginning of the boot process, and follow the prompts to select the Last Known Good configuration to reset the Windows configuration information for your system.
See Also boot, logon
Stands for Local Access and Transport Area, service boundaries for local exchange carriers (LECs).
See Also Local Access and Transport Area (LATA)
A collision on an Ethernet network that is detected late in the transmission of the packet.
Overview
Signals on a network cable do not travel instantaneously from point to point; they travel at a fixed speed, which is near the speed of light on copper cabling. If segments of an Ethernet network are too long, collisions can occur that are not properly detected by the stations on the network. This can result in lost or corrupted data, and it can degrade network performance.
Collisions themselves are natural and inevitable on an Ethernet network and occur when two stations transmit their signals simultaneously or almost simultaneously. When two transmitting stations detect a collision (the concurrent signal from the other transmitting station), they both stop their transmission and wait a random time interval before attempting retransmission. The Ethernet standard, however, specifies that if a station on the network is able to transmit 64 bytes or more before another signal is detected, the first station is considered to be "in control" of the wire and can continue to transmit the remainder of its frame, while the second station must stop transmitting and wait. If the distance between two transmitting stations exceeds Ethernet specifications, the stations might not become aware soon enough that another station already has control of the wire. The resulting collision is called a late collision and results in a data packet that is more than 64 bytes in length (which in itself is allowable) but that contains cyclic redundancy check (CRC) errors. Transmission errors and unreliable communication between the stations are the result.
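The 64-byte rule above can be expressed as a one-line test. The threshold comes from the Ethernet slot time described in the standard; the function name and sample values are invented for illustration:

```python
# Sketch of the late-collision rule: if a station has already put
# 64 bytes (512 bits) on the wire when the colliding signal arrives,
# it believed it was "in control" of the wire, so the collision is late.

SLOT_BYTES = 64  # minimum Ethernet frame size, in bytes

def classify_collision(bytes_sent_before_collision):
    """Classify a collision as 'normal' or 'late' per the 64-byte rule."""
    if bytes_sent_before_collision >= SLOT_BYTES:
        return "late"
    return "normal"

print(classify_collision(30))   # normal: detected within the slot time
print(classify_collision(200))  # late: sender already "owned" the wire
```

A normal collision is handled by the usual backoff-and-retransmit process; a late collision typically surfaces only as CRC errors and retransmissions at higher layers.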
Late collisions can result from defective Ethernet transceivers, having too many repeaters between stations, or exceeding Ethernet specifications for maximum node-to-node distances.
See Also collision, Ethernet
The delay that occurs when a packet or signal is transmitted over a communications system.
Overview
Latency is the amount of time it takes for information to travel between two stations on a network. A network with high latency causes users to experience unpredictable delays in transmission of voice, data, and video signals. This can lead to awkward conversations and time-outs in data transmissions that can cripple network performance. Latency is usually measured in milliseconds (msec) for computer networking and telecommunications systems.
Latency can be a serious issue, especially in voice communications, where delays of more than about 250 msec make conversations awkward by creating pauses that cause parties to interrupt one another when speaking. (The G.114 recommendation from the International Telecommunication Union [ITU] specifies that round-trip latency in a voice communications system should be less than 300 msec.) Similar latency in multimedia transmission can be compensated for by buffering. Latency in data transmission is less serious, although excessive latency can result in Transmission Control Protocol (TCP) connections being closed and retransmissions occurring, which slows down overall network performance.
Types
Latency is an inevitable aspect of any communications system and generally has several possible causes:
Intrinsic latency in a transmission is caused by the finite transmission speed of electrical signals through wires (or of light signals through fiber-optic cabling). Intrinsic latency cannot be eliminated but is quite small in a local area network (LAN), typically less than a microsecond. In a wide area network (WAN), intrinsic latency is also generally small when transcontinental trunk lines or undersea cables are used and is usually between 10 and 100 msec. For satellite WAN links, however, latency can be 500 msec or even higher, and such high latency can sometimes be frustrating for users of satellite-based Internet access systems.
Latency is also introduced into a communications path by devices used to switch or modulate signals. The amount of latency varies greatly with the type of device. For example, the latency for a bridge (the time delay between the moment when the packet enters one port of the bridge and the moment when it leaves another port) is usually between 5 and 50 microseconds (a microsecond is one-thousandth of a millisecond). By contrast, the latency introduced into a network by routers and gateways, which process packets and perform protocol conversion, is usually an order of magnitude higher. Latency for signals passing through analog modems is typically about 150 msec due to signal modulation, compression, and error correction processes. See the table below for a comparison of latency introduced into communications paths by different kinds of networking and communications devices.
Devices that establish a connection can introduce even greater amounts of latency into a communications channel. Integrated Services Digital Network (ISDN) terminal adapters typically take 1 to 2 seconds to establish a connection, but it can take as much as 15 to 30 seconds to establish an analog modem connection. The term latency, however, is sometimes restricted to delay over a preestablished communications channel, in which case these scenarios would not normally be identified as latency.
Device | Typical Latency |
Network interface card | < 5 msec |
Asymmetric Digital Subscriber Line (ADSL) modem | 5-10 msec |
Plain Old Telephone Service (POTS) landline | < 20 msec |
Integrated Services Digital Network (ISDN) terminal adapter | 20-30 msec |
V.90 analog modem | ~ 150 msec |
Global System for Mobile Communications (GSM) cellular | ~ 150 msec |
Voice over IP (VoIP) system | 80-500 msec |
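The intrinsic latency figures described earlier can be checked with a back-of-envelope calculation. This sketch assumes signals propagate at roughly two-thirds the speed of light in copper or fiber cabling, while radio to a geostationary satellite travels at full light speed; distances are illustrative:

```python
# Back-of-envelope intrinsic (propagation) latency.
# Assumption: ~2/3 of light speed in cabling, full light speed for
# the radio path to a geostationary satellite (~35,786 km altitude).

C = 299_792_458        # speed of light in a vacuum, m/s
V = 2 / 3 * C          # approximate propagation speed in cabling

def one_way_ms(distance_m, speed=V):
    """One-way propagation delay in milliseconds."""
    return distance_m / speed * 1000

print(f"100 m LAN run:     {one_way_ms(100):.4f} ms")            # ~0.0005 ms
print(f"5,000 km trunk:    {one_way_ms(5_000_000):.1f} ms")      # ~25 ms
print(f"GEO satellite hop: {one_way_ms(2 * 35_786_000, speed=C):.0f} ms")
```

The results agree with the ranges quoted above: well under a microsecond on a LAN run, tens of milliseconds on a long-haul trunk, and hundreds of milliseconds round-trip via geostationary satellite.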
Notes
Latency for bridges and LAN switches can actually be broken down into two types:
Bit-forwarding devices: Latency here is measured from the moment the first bit of an incoming frame arrives at one port to the moment the first bit of an outgoing frame departs at another port.
Store-and-forward devices: Latency for these devices is measured from the moment the last bit of an incoming frame arrives at one port to the moment the first bit of an outgoing frame departs at another port.
For more information on these types of latency, see RFC 1242 (www.faqs.org/rfcs/rfc1242.html).
See Also bridge, Ethernet switch, jitter, noise, router, signal, wide area network (WAN)
A media-independent tunneling protocol developed by Cisco Systems.
Overview
Layer 2 Forwarding (L2F) can be used to create virtual private networks (VPNs) that tunnel information securely over public networks such as the Internet using wide area network (WAN) data-link protocols such as Point-to-Point Protocol (PPP) or Serial Line Internet Protocol (SLIP). L2F supports such features as Remote Authentication Dial-In User Service (RADIUS), dynamic allocation of addresses, and quality of service (QoS).
L2F has been superseded by the newer Layer 2 Tunneling Protocol (L2TP), an Internet Engineering Task Force (IETF) standard that provides a vendor-neutral tunneling solution for virtual private networking. L2TP is an extension of PPP and supports the best features of the Point-to-Point Tunneling Protocol (PPTP) and the L2F protocol.
Implementation
As an example, when PPP is used with L2F, PPP provides the connection between a dial-up client and the network access server (NAS) that receives the call. A PPP connection initiated by a client terminates at a NAS located at a PPP service provider, usually an Internet service provider (ISP). L2F allows the connection's termination point to be extended beyond the NAS to a remote destination node, so the client's connection appears to be directly to the remote node instead of to the NAS. The function of the NAS in L2F is simply to project or forward PPP frames from the client to the remote node. This remote node is called a home gateway in Cisco's Internetwork Operating System (IOS) networking terminology.
See Also Layer 2 Tunneling Protocol (L2TP), Point-to-Point Protocol (PPP), Point-to-Point Tunneling Protocol (PPTP), virtual private network (VPN)
An Ethernet switch that forwards frames according to Layer-2 addresses.
Overview
Layer 2 switches operate at the data-link layer (Layer 2) of the Open Systems Interconnection (OSI) reference model. Layer 2 switches are essentially multiport bridges that forward frames based on their destination MAC address without any concern for the actual network protocol being used. Layer 2 switches operate near wire speed and have very low latency compared to Layer 3 devices such as routers.
Layer 2 switching originated in the 1980s with vendors such as Kalpana, which was acquired by Cisco Systems. Although originally developed for a variety of local area network (LAN) architectures including Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI), by far the most widespread use of these switches is in switched Ethernet, Fast Ethernet, and Gigabit Ethernet (GbE) networks. Layer 2 switches have displaced bridges, hubs, and routers in much of today's enterprise network.
Uses
There are two main kinds of Layer 2 switches, and each type is optimized for its own particular role in the network. These types are
Segmentation switches: These are used to segment large networks into collections of smaller collision domains to reduce congestion and improve network performance. Routers have traditionally been used for segmenting enterprise networks, but Layer 2 switches are cheaper than routers, easier to deploy and manage, and have lower latency. As a result, most large companies have replaced much of their router infrastructure with Layer 2 switches, relegating routers mainly to the role of wide area network (WAN) access devices. In addition to segmenting large networks, such Layer 2 switches are also used to connect LANs across a campus network and to build collapsed backbone networks together with their more powerful cousins, Layer 3 switches.
Workgroup switches: These are used to provide high-throughput switched connections to servers and high-performance workstations. A typical workgroup switch would have 12 or 24 autosensing 10/100 megabits per second (Mbps) or 100/1000 Mbps ports with one or more gigabit uplink ports for connection to the LAN backbone. Hubs have traditionally been used for concentrating servers and workstations into workgroups, but hubs use a shared-media approach that cannot match switches in throughput and latency. As a result of falling port prices in Layer 2 switches, most large companies have migrated their legacy hub infrastructure to Layer 2 switches.
Implementation
Layer 2 switches can generally be installed transparently into networks with no configuration required unless virtual LANs (VLANs) are needed. When Layer 2 switches first appeared, the mantra "switch when you can, route when you must" was promoted (this originated with Synoptics, which later became Bay Networks and has now been acquired by Nortel Networks).
Once installed, a Layer 2 switch dynamically learns about connected hosts and networks by examining the source addresses of frames it receives. The switch continually builds a cache or database of mappings between each port on the switch and the various MAC addresses of hosts and networks connected to that port. Then, when a frame arrives at a given switch port, the switch reads the destination MAC address and forwards the frame to the switch port to which the destination host or network is connected. If the frame's destination address is unfamiliar or if it is a broadcast frame, the switch forwards the frame to all of its ports except the port through which the frame originally entered.
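This learn-and-forward behavior can be sketched as a small simulation. The class, MAC addresses, and port numbers below are invented for illustration:

```python
# Sketch of Layer 2 switch behavior: learn source MACs per port,
# forward known unicast out one port, and flood unknown unicast and
# broadcast frames out every port except the one they arrived on.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class Layer2Switch:
    def __init__(self, num_ports):
        self.ports = list(range(num_ports))
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        """Return the list of ports the frame is forwarded out of."""
        # Learn: associate the source MAC with the arrival port.
        self.mac_table[src_mac] = in_port
        # Forward: known unicast goes out exactly one port.
        if dst_mac != BROADCAST and dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # Unknown unicast or broadcast: flood to all other ports.
        return [p for p in self.ports if p != in_port]

sw = Layer2Switch(4)
print(sw.receive(0, "aa:00:00:00:00:01", "bb:00:00:00:00:02"))  # [1, 2, 3]
print(sw.receive(1, "bb:00:00:00:00:02", "aa:00:00:00:00:01"))  # [0]
```

The first frame is flooded because the destination is still unknown; the reply is delivered out a single port because the switch has already learned where both hosts are attached.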
If a Layer 2 switch has ports of different media types (for example, Ethernet and FDDI), frame forwarding becomes more complicated. Frames that must be forwarded to destination ports having different media types must first be reformatted at Layer 1 (the PHY or physical layer) before being forwarded to their destination ports.
Layer 2 switches avoid routing loops by implementing a method first used in bridged networks, namely, the spanning tree protocol. This protocol works automatically to ensure that frames are not endlessly switched around in loops, which would make other network communications impossible.
Advantages and Disadvantages
Advantages of Layer 2 switches over traditional hubs and routers include higher throughput, lower latency, cheaper cost per port, and easier management. Disadvantages of such switches include the danger of broadcast storms and greater complexity in troubleshooting network problems (traditional packet sniffers work on shared media LANs where they can monitor traffic simultaneously from large numbers of stations). A broadcast storm occurs when broadcasts become so common that other forms of network communication are prevented. Broadcasts typically occur due to network advertisements from servers, routers, and other devices. Since Layer 2 switches (and bridges) allow broadcasts to pass, a network built entirely of Layer 2 switches represents a single broadcast domain. When the number of stations in a broadcast domain reaches several hundred, broadcast storms are likely to occur. Layer 2 switches by themselves therefore do not represent a scalable solution for building large enterprise networks. You can solve this broadcast problem in several ways:
By combining newer Layer 2 switches with traditional routers in the network infrastructure. This brings some of the benefits of switches but leaves some of the problems relating to routers such as cost, latency, throughput, and so on.
By configuring separate VLANs for ports or groups of ports on each Layer 2 switch. VLANs let you logically segment the network independently from its physical topology. This approach is functionally equivalent to flattening the network into a number of smaller broadcast domains, and it works well with traditional enterprise networks that followed the 80/20 rule of network traffic distribution. But with the ubiquity of Internet-related technologies in today's networks, the pattern of network traffic has shifted to become more like 20/80 (20 percent of traffic is local and 80 percent travels along the backbone), and VLANs do not work well in such a situation, as the optimal network configuration becomes using a single VLAN again, which takes us back to the broadcast storm problem! Another issue with VLANs is that most approaches to creating them are vendor specific, and until the new 802.1Q VLAN standard becomes widely supported, VLANs will be difficult to implement unless all Layer 2 switches are obtained from a single vendor. Finally, VLANs make troubleshooting switched Ethernet networks even harder.
By building collapsed network backbones using a combination of Layer 2 (bridging) switches with Layer 3 (routing) switches or by using switches combining Layer 2/3 functionality (called multilayer switches). This is the most popular solution in today's enterprise; see the article "Layer 3 switch" elsewhere in this book for more information.
See Also bridge, Ethernet switch, Open Systems Interconnection (OSI) reference model
A wide area networking (WAN) protocol used for virtual private networking.
Overview
The Layer 2 Tunneling Protocol (L2TP) was developed as a vendor-neutral tunneling protocol that supersedes proprietary tunneling protocols such as Microsoft Corporation's Point-to-Point Tunneling Protocol (PPTP) and Cisco Systems' Layer 2 Forwarding (L2F) protocol. L2TP can be used to encapsulate Point-to-Point Protocol (PPP) frames for transmission over a variety of network transports, including Transmission Control Protocol/Internet Protocol (TCP/IP), X.25, frame relay, and Asynchronous Transfer Mode (ATM). L2TP is an Internet Engineering Task Force (IETF) standard defined in RFC 2661 and is typically used for creating virtual private networks (VPNs) to securely tunnel network traffic over the Internet and other public data networks.
L2TP supports the same authentication options supported by PPP, including Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), and Microsoft Challenge Handshake Authentication Protocol (MS-CHAP). L2TP is not a secure protocol in itself, however: although it supports secure authentication, it does not provide data encryption. As a result, L2TP is usually combined with Internet Protocol Security (IPsec), a Layer 3 protocol that performs encryption to ensure data integrity during transmission. This combination is sometimes referred to as L2TP over IPsec or L2TP/IPsec, but since this form is almost always used, it is more common to simply refer to the combination as L2TP.
L2TP is supported by both Cisco access routers and by the Routing and Remote Access Service (RRAS) of Microsoft Windows 2000.
Comparison
Both L2TP and PPTP are commonly used tunneling protocols in virtual private networking. L2TP has some advantages over PPTP, however:
Although PPTP can only be used to create IP tunnels, L2TP supports a much wider variety of WAN transports, including X.25, frame relay, and ATM.
Although PPTP supports only one tunnel between two endpoints, L2TP supports multiple tunnels between two points, each of which can have its own quality of service (QoS) level defined.
L2TP has less overhead: its headers are only 4 bytes in length and can be compressed, whereas PPTP uses uncompressed 6-byte headers.
L2TP can also support multilink configurations in which each link terminates at a different L2TP server at the service provider. This provides more flexibility than Multilink PPP (MPPP), in which all the links from the customer premises must terminate at the same MPPP server at the service provider.
The main disadvantage of L2TP compared to PPTP is that it requires more processing power due to its use of compression and IPsec encryption.
L2TP is also a significant improvement over Cisco's earlier L2F tunneling protocol. Some of the differences between L2TP and L2F include
Although L2F has no defined client, L2TP uses a well-defined client.
Although L2F functions in compulsory tunnels only, L2TP can also use voluntary tunnels.
L2TP provides additional features, such as flow control and Attribute Value Pair (AVP) hiding, which are not supported by L2F.
Layer 2 Tunneling Protocol (L2TP). How L2TP is used to encapsulate an IP datagram.
Architecture
As its name suggests, L2TP operates at Layer 2 of the Open Systems Interconnection (OSI) reference model. When used on IP networks, L2TP uses User Datagram Protocol (UDP) datagrams on port 1701 for both the establishment and management of tunnels and for data transmission. In other words, L2TP transmits its control information using in-band signaling through the same tunnel that data is transmitted over. To transmit data, an IP packet is first wrapped in a PPP frame, which is then encapsulated into a UDP datagram. An L2TP header is then added for transmission over the WAN. In a typical VPN, IPsec is then used to add security by encrypting the data and adding an additional header and trailer to the L2TP frame.
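The encapsulation order just described can be sketched with simplified headers. The field layouts below are illustrative only, not the exact formats defined in RFC 2661; only the UDP port number (1701) and the layering order come from the text above:

```python
# Sketch of L2TP data encapsulation: IP packet -> PPP frame ->
# L2TP header -> UDP datagram on port 1701. Header layouts are
# simplified for illustration and are NOT the exact RFC 2661 formats.

import struct

L2TP_UDP_PORT = 1701

def l2tp_encapsulate(ip_packet: bytes) -> bytes:
    # 1. Wrap the IP packet in a PPP frame (0x0021 = IPv4 protocol).
    ppp_frame = struct.pack("!H", 0x0021) + ip_packet
    # 2. Prepend a simplified L2TP data header (flags/version,
    #    tunnel ID, session ID; real headers have optional fields).
    l2tp = struct.pack("!HHH", 0x0002, 1, 1) + ppp_frame
    # 3. Carry the result in a UDP datagram on port 1701
    #    (8-byte UDP header: src port, dst port, length, checksum).
    udp_len = 8 + len(l2tp)
    udp = struct.pack("!HHHH", L2TP_UDP_PORT, L2TP_UDP_PORT, udp_len, 0) + l2tp
    # In a typical VPN, IPsec would then encrypt and wrap this datagram.
    return udp

packet = l2tp_encapsulate(b"\x45" + b"\x00" * 19)  # dummy 20-byte IP header
print(len(packet))  # 8 (UDP) + 6 (L2TP) + 2 (PPP) + 20 = 36
```

Note that both control and data traffic travel through the same UDP port 1701 tunnel, which is what the text means by in-band signaling.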
Implementation
Implementing L2TP requires an L2TP client and an L2TP server that both support IPsec. A VPN constructed using L2TP can be initiated in two ways:
The client can initiate the tunnel in a similar fashion to PPTP tunnels. For example, Windows 2000 clients can initiate L2TP tunnels and connect with routers that support L2TP, such as Cisco routers. In a typical scenario, a dial-up tunnel is initiated by a client who connects with a network access server (NAS) at the client's telco central office (CO) or Internet service provider (ISP). The NAS performs the server-side function of PPP termination and acts as the receiver of incoming connections. In some implementations, the NAS is referred to as an L2TP access concentrator (LAC). The LAC then forwards its L2TP traffic to a remote node called an L2TP network server (LNS).
A NAS can initiate the tunnel, enabling telcos and ISPs to provide corporate customers with complete VPN solutions. In this scenario, the remote client acts as the LNS.
See Also Internet Protocol Security (IPsec) , Multilink Point-to-Point Protocol (MPPP), Point-to-Point Protocol (PPP), Point-to-Point Tunneling Protocol (PPTP), User Datagram Protocol (UDP), virtual private network (VPN)
An Ethernet switch that forwards frames according to Layer 3 addresses.
Overview
Layer 3 switches operate at the network layer (Layer 3) of the Open Systems Interconnection (OSI) reference model. Layer 3 switches have many of the characteristics of traditional routers and forward datagrams based on their network layer addresses. For example, on a Transmission Control Protocol/Internet Protocol (TCP/IP) network, Layer 3 switches forward IP packets based on their destination IP addresses, but on a legacy Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) network, they forward IPX packets based on their destination IPX addresses.
Layer 3 switching originated in the early 1990s and was developed mainly in response to scaling problems with Layer 2 switches and the difficulty traditional routers had with keeping up with backbone traffic flows. 3Com Corporation pioneered the development of Layer 3 switching by incorporating routing functions in some of its Layer 2 switches in 1992. These early switches were mainly software-based and were superseded in the mid-1990s by hardware-based switches such as the popular CoreBuilder 3500 and 9000 series of switches, which employed application specific integrated circuits (ASICs) dedicated to high-speed bridging and routing functions.
Layer 3 switches are currently available from a wide variety of vendors for Ethernet, Fast Ethernet, and Gigabit Ethernet (GbE) networks. Layer 3 switches are also available for other networking architectures, including Token Ring, Fiber Distributed Data Interface (FDDI), and Asynchronous Transfer Mode (ATM), but these are not as common as Ethernet switches.
Uses
Using a combination of Layer 2 and Layer 3 switches, you can easily build and operate highly scalable collapsed backbones for enterprise-level networks. Due to their routing functionality, Layer 3 switches have largely replaced traditional routers in campus backbones and other large networks, relegating the router today mainly to the role of wide area network (WAN) access device.
Layer 3 switch. Building a switched network backbone using Layer 2 and Layer 3 switches.
A typical switched network backbone is built in a hierarchical fashion in several levels using a combination of Layer 2 and Layer 3 switches. At the periphery of the network is the access level, which uses multiple Layer 2 switches to provide connection points for workgroup collections of servers and workstations. These Layer 2 switches represent collision domains but pass broadcasts; consequently, to prevent broadcast storms from occurring they need to be consolidated using Layer 3 switches. These Layer 3 switches represent the distribution level of the network, as they are used to distribute traffic to different access points for workgroup connections. Layer 3 switches represent broadcast domains, and the number of connected devices downstream from each switch must be small enough to prevent broadcast storms from occurring (a good rule of thumb is a maximum of 2000 connected devices). Finally, these Layer 3 switches at the distribution level are connected using high-speed Layer 2 switches, which form the network's core level. The network's hierarchical physical structure is thus equivalent to a logical star network with multiple broadcast domains connected using one or more Layer 2 switches (see diagram).
If you are migrating a legacy routed network to a modern switched one, you can also simply deploy a Layer 3 switch anywhere in your network a traditional router is used.
Implementation
This discussion will focus on Layer 3 switching in TCP/IP running on Ethernet, where the primary Layer 3 protocol is IP. Layer 3 switches essentially do the same thing traditional routers do-route (forward) packets to their destination based on their Layer 3 address (see the table for a comparison of routers and Layer 3 switches). However, traditional routers really perform two different functions:
Route calculation: This is the process of building routing tables and usually takes place dynamically using a routing protocol such as Routing Information Protocol (RIP) or Open Shortest Path First (OSPF).
Packet forwarding: This involves examining a packet's destination IP, looking up this address in the routing table, and determining which port (connected network) to forward the packet to so that it can eventually reach its destination.
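The packet-forwarding step above boils down to a longest-prefix-match lookup: the most specific routing-table entry that contains the destination address wins. The following sketch uses Python's standard `ipaddress` module; the table entries and port names are illustrative assumptions, not part of any real device's configuration.

```python
import ipaddress

# Toy routing table: (destination network, outgoing port)
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "port1"),
    (ipaddress.ip_network("10.1.0.0/16"), "port2"),
    (ipaddress.ip_network("0.0.0.0/0"), "port0"),  # default route
]

def forward(dst: str) -> str:
    """Return the outgoing port for a destination IP address using
    longest-prefix match, as a router or Layer 3 switch would."""
    addr = ipaddress.ip_address(dst)
    matches = [(net.prefixlen, port) for net, port in ROUTES if addr in net]
    return max(matches)[1]  # the most specific (longest) prefix wins
```

For example, `forward("10.1.2.3")` matches all three entries but chooses the /16 route, while an address covered only by the default route falls through to it.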
The difficulty with traditional routers is that they are software-based devices that are slow and unable to keep up with today's gigabit speed networks. Vendors have found different ways around this problem, all of which generally fall under the umbrella name of Layer 3 switching. Some of the common solutions are
Wirespeed routers: This involves replacing traditional software-based routing technology with hardware-based routing by using specialized ASICs developed specifically to perform route calculation and packet forwarding. In other words, a wirespeed router is simply a router that performs its routing functions using preprogrammed hardware instead of installable routing software. Wirespeed routers are typically an order of magnitude or more faster than traditional software routers and can keep up with gigabit data flows and route millions of packets per second. Another name for these devices is Packet-by-Packet Layer 3 (PPL3) switches since they individually examine and forward each packet in a train of packets. The name switch is really a misnomer in this case-these devices are actually routers, not switches.
Cut-through switches: These devices operate on the principle of "Route once, switch many" and work by separating the two functions of traditional routers: calculating routes and forwarding frames to their destination. When a train of packets (a data stream or communications flow between two hosts or networks) reaches a port on a cut-through switch, the switch examines the train's first packet, reads the destination IP address, looks up the best route to the destination in its routing table, and determines which port to forward the packet to. The device then switches the entire series of packets in the train to the outgoing port without examining any further packets in the train. In other words, routing is performed on the first packet of the train, and switching is performed on the remaining packets. Switching means that an internal logical circuit is set up between the incoming and outgoing ports on the switch. Since switching is much faster than routing, cut-through switches perform significantly better than routers while accomplishing the same function: forwarding packets on the basis of their destination IP addresses.
Within the realm of cut-through switches, there are also many differences in operation depending on vendors. For example, some cut-through switches operate similarly to bridges in the sense that they dynamically learn the addresses of attached hosts and networks by "listening" to traffic. The difference is that instead of learning the Layer 2 (MAC) addresses of devices attached to each port, which is what bridges and Layer 2 switches do, these Layer 3 switches learn the Layer 3 (IP) addresses of connected devices instead. Such Layer 3 "learning" switches build their routing tables dynamically by listening to traffic on their ports, but they cannot exchange this information with similar switches the way routers do using routing protocols. The main use for this type of switch is to "front-end" for traditional routers. For example, if a packet arrives at the switch and the switch does not know what to do with it, the switch simply forwards the packet to the router for handling. Traditional routers are also still needed for routing legacy protocols such as DECnet and AppleTalk and for providing access to the WAN.
By contrast, some cut-through switches perform virtually all the functions that a traditional router performs, such as using the packet's checksum to verify its integrity, updating the packet's Time to Live (TTL) information after each hop, processing any option information in the packet's header, and sharing their routing table information with similar switches using a standard routing protocol, such as RIP or OSPF, so that they become aware of the network's overall topology. In short, they are identical to routers except that they route only the first packet of any train and switch the remainder to improve performance.
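The "route once, switch many" principle behind cut-through switching can be illustrated with a small flow cache: the first packet of a train triggers a slow routing lookup, and subsequent packets to the same destination reuse the cached result. This is a conceptual sketch only; real cut-through switches do this in ASIC hardware and key their caches on more than the destination address.

```python
class CutThroughSwitch:
    """Illustrative 'route once, switch many' behavior."""

    def __init__(self, route_lookup):
        self.route_lookup = route_lookup  # slow, router-style lookup
        self.flow_cache = {}              # dst IP -> outgoing port
        self.lookups = 0                  # how often we actually routed

    def handle(self, dst_ip: str) -> str:
        if dst_ip not in self.flow_cache:
            self.lookups += 1  # first packet of the train: routed
            self.flow_cache[dst_ip] = self.route_lookup(dst_ip)
        return self.flow_cache[dst_ip]  # remaining packets: switched
```

A train of packets to one destination then costs a single routing lookup, which is where the performance advantage over per-packet routing comes from.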
A common feature of Layer 3 switches is support for Layer 2 switching or frame forwarding. This combination is called either a Layer 2/3 switch, a multilayer switch, a routing switch, or simply a Layer 3 switch. These devices are becoming so common in the enterprise that many networking professionals simply call them switches, without any further qualifier.
Deploying Layer 3 switches in a network is usually as easy as doing so with Layer 2 switches, which are essentially plug-and-play in their simplicity, and it is much simpler than configuring a router. As a result, Layer 3 switches are rapidly displacing traditional routers in the enterprise, except in the role of WAN access devices where routers still predominate. Layer 3 switches are generally easy to manage also, and they typically support Simple Network Management Protocol (SNMP) and Remote Monitoring (RMON) management protocols.
Feature | Router | Layer 3 Switch |
Local area network (LAN) protocols supported | IP, IPX, AppleTalk | IP, IPX, AppleTalk |
Packet-forwarding method | Software-based | Hardware-based |
Throughput | Lower | Higher |
Definition of subnet | Per port | Per Layer 2 switching domain |
Support for policy-based routing | Less | More |
Relationship with bridges | Peer | Layered |
Cost | Higher | Lower |
Marketplace
Layer 3 switches are typically sold in two configurations: fixed and modular. Fixed switches are simpler and cheaper and usually have 8, 12, or 24 autosensing 10/100 megabits per second (Mbps) or 10/100/1000 Mbps Ethernet ports. Modular switches are more complex and costly and typically consist of a chassis that supports various kinds of modules, each providing one or more ports of Ethernet, FDDI, Token Ring, or ATM connectivity.
Popular Layer 3 switches include offerings from large vendors, such as Cisco's popular Catalyst series of routing switches, and switches from second-tier vendors such as Allied Telesyn International, Asante Technologies, D-Link Systems, Hewlett-Packard Company, Network Peripherals, and many others. For core switching in network backbones, popular Layer 3 switches that are widely used in enterprise networks include the Catalyst 5000 and 6000 series of switches from Cisco, BigIron 4000 switches from Foundry Networks, BlackDiamond from Extreme Networks, Passport 8600 from Nortel Networks, and the SmartSwitch Router from Enterasys Networks.
Prospects
The growth of Layer 3 switches (and all types of LAN switches) is one of the most significant trends in the networking market. While overall sales of 10 Mbps Ethernet switches peaked in 1998 with 23 million ports sold, sales of Fast Ethernet and GbE switches continue to rise steadily. Sales of Fast Ethernet switches amounted to over 88 million ports sold in 2000 and are increasing at more than 33 percent a year. GbE switches sold almost 6 million ports in 2000 but are expected by some analysts to increase at a rate of more than 200 percent for the next few years. Sales of traditional routers would be expected to suffer in this market as enterprises migrate their router-based infrastructures to switched backbones, but router sales actually continue to increase, driven largely by the growth of the Internet and the needs of large Internet service providers (ISPs) for such devices and their enduring use as WAN access devices. Nevertheless, the days of routed networks are fading and the switch has risen to the dominant position in the infrastructure of enterprise networks, largely replacing traditional hubs, bridges, and routers.
See Also Ethernet switch , Open Systems Interconnection (OSI) reference model, router
An Ethernet switch that forwards frames according to Layer 4 header information.
Overview
Layer 4 switches operate at the transport layer (Layer 4) of the Open Systems Interconnection (OSI) reference model. Layer 4 switches are capable of examining frames and reading their Layer 4 header information, such as Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) port numbers, and then forwarding packets based on this information. In Transmission Control Protocol/Internet Protocol (TCP/IP), these port numbers are used to identify different applications-for example, Hypertext Transfer Protocol (HTTP) typically uses port 80 and Simple Mail Transfer Protocol (SMTP) typically uses port 25. By being able to switch packets according to port numbers, Layer 4 switches are thus able to prioritize the flow of IP traffic according to application. For example, to prioritize HTTP traffic, a Layer 4 switch would be configured to allocate greater bandwidth to IP packets where the Layer 4 header indicates TCP port number 80. Alternatively, a Layer 4 switch could be configured to block or restrict traffic over certain ports, fulfilling the role of a firewall. The key difference between Layer 4 switches and switches that operate at Layer 2 or Layer 3 is that Layer 4 switches provide the ability to prioritize traffic according to application.
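The port-based prioritization just described amounts to a simple classification step keyed on the transport-layer port number. The sketch below illustrates the idea; the policy table is an assumption for demonstration, not a real switch configuration.

```python
# Assumed (illustrative) priority policy, keyed by TCP destination port.
# Port 80 (HTTP) is given priority over port 25 (SMTP), as in the text.
PRIORITY = {80: "high", 25: "normal"}
BLOCKED = {23}  # e.g., drop Telnet traffic, playing the firewall role

def classify(dst_port: int) -> str:
    """Assign a forwarding queue the way a Layer 4 switch might, using
    only the transport-layer port number from the packet header."""
    if dst_port in BLOCKED:
        return "drop"
    return PRIORITY.get(dst_port, "best-effort")
```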
Uses
Layer 4 switches are not intended to replace Layer 2 and Layer 3 switches as the basic building blocks of collapsed backbones in enterprise networks. Instead, they can be selectively deployed in the distribution level of the network (see the article "Layer 3 switch" elsewhere in this book) to add traffic prioritization and firewall services for the network when these features are needed or desired.
Implementation
Layer 4 switches typically perform bridging (forwarding by Layer 2 or MAC addresses) and routing (forwarding by Layer 3 or IP addresses) also. In other words, a typical Layer 4 switch is really a multilayer switch or Layer 2/3/4 switch. Alternatively, Layer 3 switches are sometimes marketed as featuring additional Layer 4 switching capabilities. Layer 4 switches thus need to be able to store a large number of mappings for each switch port, namely
MAC addresses for each connected device
IP addresses for each connected device
Multiple port numbers for each connected device
As a result, Layer 4 switches require a great deal of processing power and memory and are more expensive than their Layer 2/3 cousins. Because of their ability to switch according to port numbers, Layer 4 switches enable multiple logical network connections to be established for each path to a device having a Layer 3 address. The combination of a Layer 3 address, such as an IP address and a port number, is usually called a socket, and a train of frames all having the same socket information is referred to as a flow. Most Layer 4 switches can enforce Layer 3 traffic flows on a policy basis, making management of complex traffic flows simple to implement.
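The socket and flow terminology above can be made concrete: packets sharing the same address-plus-port combination belong to one flow. The packet records below are hypothetical sample data used only to illustrate the grouping.

```python
from collections import Counter

def socket_of(packet: dict) -> tuple:
    """A 'socket' in the text's sense: Layer 3 address plus port number."""
    return (packet["dst_ip"], packet["dst_port"])

# Hypothetical train of packets arriving at a Layer 4 switch port
packets = [
    {"dst_ip": "10.0.0.5", "dst_port": 80},
    {"dst_ip": "10.0.0.5", "dst_port": 80},
    {"dst_ip": "10.0.0.5", "dst_port": 25},
]

# Group packets into flows by socket: the first two packets share a
# socket and so form one flow; the third is a separate flow, even
# though all three target the same Layer 3 address.
flows = Counter(socket_of(p) for p in packets)
```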
Marketplace
Vendors of Layer 4 switches include Cisco Systems, with its Catalyst series of switches; Extreme Networks, with its Summit series; 3Com Corporation, with its CoreBuilder series; and companies such as Alteon WebSystems (acquired by Nortel Networks in 2000), F5 Networks, Foundry Networks, and Radware.
See Also bridge , Ethernet switch , Open Systems Interconnection (OSI) reference model, router
An Ethernet switch that forwards frames according to header information found in Layer 4 and higher.
Overview
Layer 4+ switches combine bridging (Layer 2), routing (Layer 3), and port mapping (Layer 4) with functionality derived from layers higher than 4 of the Open Systems Interconnection (OSI) reference model. What this usually means in reality is that they can use Layer 7 (application layer) information found in packet headers and make routing decisions based upon this information. Hence Layer 4+ switches are effectively just Layer 7 switches with a different name. For more information, see the article "Layer 7 switch" below.
See Also bridge , Ethernet switch , Open Systems Interconnection (OSI) reference model, router
Also called a Layer 4+ or Layer 4/7 switch, an Ethernet switch that forwards frames according to application layer (Layer 7) header information.
Overview
Layer 7 switches operate at the application layer (Layer 7) of the Open Systems Interconnection (OSI) reference model. In this respect, they fill a niche in e-business networks that Layer 4 switches cannot. Considering Hypertext Transfer Protocol (HTTP) traffic or Web traffic as an example, Layer 4 switches can use port numbers to distinguish Web traffic from other types of Internet traffic such as Simple Mail Transfer Protocol (SMTP) traffic, but they cannot distinguish static HTTP traffic (which can be cached) from dynamic traffic (which is not easily cached). When Layer 4 switches are used to front-end a Web server farm, the result is that all forms of HTTP traffic get cached, which is wasteful and adds unnecessary latency to dynamic Web applications.
Enter Layer 7 switches, which can examine Layer 7 HTTP headers in detail and route traffic according to request type, Uniform Resource Locators (URLs), cookies, and other information. Layer 7 switches are thus capable of switching Web traffic to different servers according to URLs requested (called URL switching), caching different types of traffic differently (cache switching), and directing traffic to different servers in the farm to distribute the load (load balancing) or to servers hosting different types of content (content distribution). As a result, Layer 7 switches are variously marketed by different vendors under names such as Web switches, Web content switches, URL switches, and load balancers.
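URL switching, as described above, can be pictured as routing requests to different server pools based on the requested path. The rules and pool names in this sketch are hypothetical; a real Web content switch would also inspect cookies, headers, and request types.

```python
# Hypothetical URL-switching rules: route requests by URL prefix to
# different server pools, as a Layer 7 (Web content) switch would.
RULES = [
    ("/images/", "static-cache-servers"),   # cacheable static content
    ("/cart/", "dynamic-app-servers"),      # dynamic e-commerce traffic
]
DEFAULT_POOL = "general-web-servers"

def pick_pool(url_path: str) -> str:
    """Return the server pool for a requested URL path, first match wins."""
    for prefix, pool in RULES:
        if url_path.startswith(prefix):
            return pool
    return DEFAULT_POOL
```

A request for `/images/logo.gif` would thus be switched to the caching pool, while `/cart/checkout` would go straight to the application servers without being cached.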
Uses
Layer 7 switches need be deployed only sparingly in an enterprise. A typical use for such a switch is to act as a front-end for a group of Web servers in a server farm used to support an e-commerce application. The Layer 7 switch provides a single Internet Protocol (IP) address for users to access over the Internet using their browsers, making the server farm appear to the user as a single Web server. The Layer 7 switch typically interfaces with a router that provides connectivity to the Internet (see diagram).
Layer 7 switch. Using a Layer 7 switch to front-end a Web server farm.
Marketplace
Most Layer 7 switches are special purpose devices dedicated to managing HTTP traffic more effectively for e-commerce systems-true Layer 7 switches that are fully configurable for any form of application traffic are rare. A Web content switch should typically support such additional features as application redirection, content-intelligent switching, local and global load balancing, packet filtering, quality of service (QoS) traffic routing, and Secure Sockets Layer (SSL) acceleration.
Popular makers of Layer 7 switches include Alteon WebSystems (acquired by Nortel Networks in 2000) with its Alteon 184 switch, ArrowPoint Communications (acquired by Cisco Systems) with its CS-800 switch, Foundry Networks with its ServerIronXL switch, F5 Networks with its Big-IP series of switches, and Intel Corporation with its NetStructure E-Commerce Director switches. Some vendors also claim that their Layer 2, 3, or 4 switches include limited Layer 7 switching capabilities, so the marketplace can be confusing to the newcomer.
Prospects
An emerging use for Layer 7 switches is to provide QoS for Voice over IP (VoIP) systems and load balancing for IP Private Branch Exchange (PBX) equipment. These switches inspect application layer information in packets to identify H.323 and Session Initiation Protocol (SIP) voice traffic and help to prioritize the forwarding of these packets in such a way as to provide better quality voice communications over IP networks. Two examples of such switches are AppSwitch from Top Layer Networks and a VoIP module for Web Server Director from Radware.
See Also bridge , Ethernet switch , Open Systems Interconnection (OSI) reference model, router, Voice over IP (VoIP)
Stands for Link Control Protocol, the portion of Point-to-Point Protocol (PPP) responsible for link management.
See Also Link Control Protocol (LCP)
Stands for Lightweight Directory Access Protocol, a standard protocol for accessing information in a directory.
See Also Lightweight Directory Access Protocol (LDAP)
A permanent link leased from a telecommunications carrier.
Overview
Leased lines are dedicated point-to-point circuits that are installed between a customer premises and a telco central office (CO). Examples of leased line services include switched 56, T1, fractional T1, and T3 lines. Integrated Services Digital Network (ISDN) is also sometimes considered a leased line service, but ISDN is really a dial-up service unlike always-on T-carrier services. These services are provisioned, installed, and leased from a telecommunications carrier such as a local exchange carrier (LEC) or inter-exchange carrier (IXC).
Uses
Leased lines have been around since the 1980s and have long been the primary means for enterprises to connect their remote offices in a wide area network (WAN). In a traditional WAN the larger branch offices would be connected to headquarters by T1 or fractional T1 lines, sometimes with ISDN lines as backup, and smaller branch offices would get by with dial-up modem connections used for nightly transfers of batch jobs. With the explosion of the Internet in the 1990s, this picture has changed, and the rapid rise of WAN traffic has driven large companies to find solutions other than deploying additional expensive leased lines.
The main alternatives to leased lines in the 1990s included frame relay and Asynchronous Transfer Mode (ATM) circuits, but these technologies are complex and expensive to deploy and have not gained the same popularity in the enterprise as leased lines. Recently, Digital Subscriber Line (DSL) has emerged as a promising alternative to leased lines. By combining a DSL connection to the Internet with virtual private networking (VPN) technology, corporate networks can tunnel through the public Internet to connect securely with remote offices and mobile workers.
In the age of the Internet and e-commerce, leased lines are finding other popular uses:
Providing reliable high-speed Internet access for corporate networks.
Connecting e-businesses with hosting centers that host their e-commerce Web sites.
Advantages and Disadvantages
Leased lines have several advantages over competing WAN technologies:
Security: Leased lines are dedicated point-to-point links between the customer premises and the telco, and are therefore difficult to eavesdrop on. By contrast, a VPN over DSL solution has to employ encryption to ensure data security and integrity, and holes have sometimes been found in encryption schemes-for example, in the Point-to-Point Tunneling Protocol (PPTP).
Availability and Reliability: Leased lines have been around for a long time and are well understood. As a result, they are easy to set up and troubleshoot, and they provide virtually 100 percent uptime. Frame relay and ATM are more complex, and there have been well-publicized failures of the frame relay and ATM backbones of some of the largest carriers in recent years. And since a VPN over DSL solution uses the Internet as its transport, the reliability of a link depends on the reliability of the Internet itself-something that is open to question in the age of massive Distributed Denial of Service (DDoS) attacks and Internet backbone congestion.
Low latency and jitter: Leased lines offer virtually latency-free connections, suitable for transmission of data, voice, and multimedia. By contrast, a VPN over DSL solution employs Internet Protocol (IP), which can have significant latency and jitter that can adversely affect voice and multimedia communications.
By contrast, the main disadvantage of leased lines is their cost, which has remained high over the last decade. For example, a T1 line can cost well over $1,000 a month compared to perhaps $50 to $100 per month for a faster DSL connection. Despite these high costs, leased lines still remain the most popular way for enterprises to connect their remote offices in a WAN, primarily because of their high reliability. For mission-critical or time-sensitive traffic, leased lines still rule; for less critical traffic, however, VPN over DSL is a good solution.
Leased line. How a leased line is provisioned.
Implementation
Provisioning a leased line involves steps at both the customer premises and telco ends:
Customer premises: Special equipment is required to connect the customer's local area network (LAN) to the line termination point at the customer premises. This is typically either a combination of router and channel service unit/data service unit (CSU/DSU) or an access server (also called integrated access device, or IAD) that combines the functionality of a router and a CSU/DSU into a single device.
Telco CO: The telecommunications carrier dedicates certain switches in its switching fabric to set up a permanent circuit between the local and remote customer premises. Data then flows along a single permanent path between the two locations.
Marketplace
Leased lines are generally available from both LECs and IXCs. Provisioning and service quality vary greatly but in general are good for large Regional Bell Operating Companies (RBOCs) such as BellSouth Corporation. IXCs such as AT&T and Sprint are logical choices for large enterprises wanting to deploy leased lines across the United States, but these carriers rely on LECs for the actual provisioning at the local loop level, and this introduces a second tier into the service end, which can sometimes cause delays and difficulties. For enterprises having a global presence in several countries and regions around the world, the largest player in the leased line market is Concert, a joint venture of AT&T and British Telecom (BT).
Charges for a leased line are typically based on the distance of the line and not on the bandwidth used-usually you pay for the available bandwidth even if you do not use it all. Actual provisioning of a leased line from time of order to deployment typically takes four to eight weeks, depending on demand.
See Also Asynchronous Transfer Mode (ATM) , central office (CO) ,circuit-switched services ,customer premises ,Digital Subscriber Line (DSL) ,frame relay ,Integrated Services Digital Network (ISDN) ,inter-exchange carrier (IXC) , T-carrier, virtual private network (VPN), wide area network (WAN)
Stands for local exchange carrier, any telco in the United States that provides telephone and telecommunication services for subscribers within a geographical region.
See Also local exchange carrier (LEC)
A legal authorization to use software in a given networking scenario.
Overview
Purchasing most business software typically involves two steps:
Purchasing media containing the installation files from which the software can be installed on your computers.
Determining how your software will be used and how many users will use it, and then purchasing licenses to use the software legally in the manner you have chosen.
Licensing is a complex issue and varies from vendor to vendor and across product lines, but it is important to take licensing into account in the planning stage before purchasing software, as the cost of licenses can add significantly to the overall deployment cost.
History
Early licensing programs for commercial software usually used a per-server basis-that is, licenses were applied to the server rather than the client. A single per-server license authorized a single client connection to the server for file, print, or whatever server services were being licensed. Per-server licensing meant that administrators had to determine the maximum expected number of users who might want to access the server concurrently (simultaneously) and to purchase sufficient per-server licenses to ensure that licensing requirements were being met. The main problem with per-server licensing was that it did not scale well from a financial perspective. For example, if no more than five users were expected to access a file server at any given time, five per-server licenses would need to be purchased at a cost of x dollars. If two more file servers were added to the company network, 3 x 5 = 15 licenses would be required (five for each server), at a total licensing cost now of 3x dollars.
Per-seat licensing was developed to address the scalability issue of per-server licensing. With per-seat licensing, the client machines are licensed, rather than the servers (although the servers usually require additional licenses of their own). Each client machine requires only a single per-seat license to authorize it to access any server on the network. Per-seat licensing scales better than per-server licensing, and it is simple to calculate-a company requires as many per-seat licenses as it has client machines or users.
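The scaling difference between the two schemes is simple arithmetic, and can be sketched as follows. The price value is an assumed illustrative figure standing in for the "x dollars" of the example above.

```python
def per_server_cost(servers: int, concurrent_users: int, price: float) -> float:
    """Per-server licensing: every server needs one license for each
    expected concurrent connection."""
    return servers * concurrent_users * price

def per_seat_cost(clients: int, price: float) -> float:
    """Per-seat licensing: each client machine needs one license,
    which is valid for any server on the network."""
    return clients * price

# The text's example with an assumed price of 100 per license: one
# server with 5 concurrent users needs 5 licenses; adding two more
# servers triples the per-server cost, while the per-seat cost for
# the same 5 client machines stays flat no matter how many servers
# are added.
```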
Prospects
With the widespread penetration into business networks of Internet technologies such as the World Wide Web, existing licensing schemes have begun to show signs of strain. Client machines are no longer simply desktop computers used by employees-they might also include users who connect to your company Web site over the Internet to access resources on your network. Both the per-server and per-seat licensing schemes break down here-licensing Internet applications would logically mean either buying billions of per-server licenses for each of your servers or buying a per-seat license for every individual on the planet!
One way around this issue is to offer Internet services freely to clients, with no licensing costs involved. While this may work with simple Web servers that serve static content, it fails to solve the problem that arises when Web servers host applications that access back-end databases. To address the issue of how Internet and e-commerce technologies are affecting traditional licensing systems, new models are starting to evolve:
Usage-based licensing: Administrators pay monthly licensing fees instead of one-time fees. These monthly fees are based either on the number of users or on the number of transactions performed against the vendor's applications.
Leasing applications: Instead of buying shrink-wrapped software and installing it on company servers, companies lease the e-commerce applications and services they need from application service providers (ASPs) and pay a flat monthly fee along with a surcharge based on usage.
Value-based licensing: This scenario envisions licensing fees calculated on actual business results related to using the software-for example, based on number of units sold and number of solid sales leads generated.
The advantage of these new licensing paradigms is that licensing costs can be drastically reduced by basing costs upon actual rather than expected maximum usage-in other words, you pay only for what you use. The downside is that these schemes are more difficult to plan from an accounting perspective and make the IT (information technology) budgeting process more complex.
For More Information
For current information about Microsoft licensing practices, visit www.microsoft.com/licensing.
A standard protocol for accessing information in a directory.
Overview
The Lightweight Directory Access Protocol (LDAP) defines processes by which a client can connect to an X.500-compliant or LDAP-compliant directory service to add, delete, modify, or search for information, provided the client has sufficient access rights to the directory. For example, a user could use an LDAP client to query a directory server on her network for information about specific users, computers, departments, or any other information stored in the directory.
The term LDAP can mean three different things, depending on the context:
LDAP data format: This defines the manner in which information is stored and recalled in an LDAP-compliant directory. This is called the LDAP Data Interchange Format (LDIF).
LDAP protocol: This defines the processes involved when an LDAP client and LDAP server interact with each other-for example, when a client queries a server for some information.
LDAP API: This is a set of application programming interfaces (APIs) that defines how applications can programmatically interact with LDAP servers.
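As a brief illustration of the LDIF format mentioned above, the following hypothetical entry describes a user. The object class inetOrgPerson comes from standard LDAP schemas, but all the values here are invented:

```
dn: cn=Jeff Smith,ou=Users,dc=microsoft,dc=com
objectClass: inetOrgPerson
cn: Jeff Smith
sn: Smith
mail: jeffs@microsoft.com
```

The dn line gives the entry's distinguished name; each remaining line is a single attribute name/value pair, which is how LDIF represents directory information as plain text.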
LDAP was developed by the Internet Engineering Task Force (IETF) and its current version, LDAPv3, is defined in RFC 2251. LDAP is designed to run over Transmission Control Protocol/Internet Protocol (TCP/IP) and is a subset of Directory Access Protocol (DAP), part of the X.500 recommendations from the International Telecommunication Union (ITU).
History
Directories store information in a hierarchical fashion. The first attempt at defining a directory standard that could scale to global proportions was X.500, which was developed by the ITU in a series of recommendations spanning 1984 to 1994. Unfortunately, these recommendations were so complex that they were seen as too difficult to implement on most computing systems of that era. For example, the Directory Access Protocol (DAP), a part of X.500 that defined how X.500 clients would communicate with X.500 directory services, was so complex that its footprint would be too large and too slow to implement on a standard PC workstation. As a result of these problems with X.500, a process was launched to develop a simpler directory standard that would be easy to implement on PCs while remaining fully backward-compatible with X.500. The result of this development process was an open standard called LDAP.
Initial work on the LDAP standard and on the first LDAP-compliant server, called SLAPD, was carried out by researchers at the University of Michigan in conjunction with PSINet and the ISODE Consortium. LDAP began as a simple replacement for DAP that was intended to work with X.500 directory services-it was not intended to define a separate standard for directory services themselves. Version 2 of LDAP, defined in RFC 1777, took things a step further by divorcing LDAP from the X.500 standard proper, and it allowed for the development of stand-alone LDAP directory servers to replace the more complex directory service agents (DSAs) of a full X.500 directory service. LDAPv2 directory servers were stand-alone in the sense that they could not perform referrals. For example, if a client queried an LDAPv2 server for information that the server did not possess, the server would return a negative response to the client and would not be able to refer the client to other LDAPv2 servers that might have the information. This limitation severely affected the scalability of LDAPv2 directory service systems.
To overcome the scalability issues and other limitations of LDAPv2, a new version, LDAPv3, was developed by the Internet Engineering Task Force (IETF) in 1997 and was standardized as RFC 2251. LDAPv3 is the version of LDAP widely used in today's directory products, including Microsoft's Active Directory directory service, and it is a superset of LDAPv2 that remains backward-compatible with DAP and X.500. LDAPv3 has the following enhancements over earlier versions:
Internationalization: Support for Unicode instead of the American National Standards Institute (ANSI) character set used in previous versions.
Referrals: An LDAP server that cannot answer a client's query can refer the client to a different LDAP server that knows the answer. This makes LDAPv3 a highly scalable standard that can be used in large enterprises.
Security: LDAPv3 supports both Transport Layer Security (TLS) and Kerberos security protocols.
Extended operations: LDAPv3 is extensible and supports extended searching operations.
Advantages and Disadvantages
Some of the qualities of LDAP that have enabled it to gain widespread popularity include
Open: LDAP is an open standard running on TCP/IP that specifies a protocol for communication between LDAP clients and servers but leaves implementation details for servers up to the vendors themselves, who develop various LDAP products.
Secure: LDAP supports various kinds of security to preserve the integrity of directory data on a network.
Extensible: The LDAP schema can be extended by defining new classes of objects and attributes to support applications that need these.
Programmable: LDAP employs a standard set of APIs written in C/C++ and defined by RFC 1823. These APIs specify how LDAP client applications can programmatically query and obtain information from LDAP servers.
Scalable: LDAPv3 supports referrals that allow LDAP directory services to scale easily to millions of objects for enterprise deployments.
Architecture
To understand how LDAP works, you need to know its terminology first. A good way of understanding LDAP terminology is to compare it with terminology for relational databases. Despite the similarities between these two systems, LDAP is not intended to replace relational databases because it lacks features such as locking, transactional processing, reporting, and efficient storage of binary large objects (BLOBs). The following are the basic LDAP terms and concepts:
Object: Also called entries, objects are anything about which you want to store data in the directory. Objects are typically users, groups, computers, printers, organizational units, and domains. Objects are for LDAP what records are for a relational database.
Attribute: Information about an object. For example, a user object might have attributes such as last name, first name, address, phone number, and e-mail address. An attribute can sometimes have more than one value-for example, a user might have several e-mail addresses. Attributes are for LDAP what fields are for relational databases. An LDAP server stores the values of an object's attributes in name/value pairs.
Classes: Define what attributes go with what objects. Classes are for LDAP what tables are for relational databases-but unlike tables, classes are extensible and allow additional attributes to be defined for objects as required.
Schema: The collection of classes and attributes used for a particular implementation of an LDAP directory service. LDAP schemas are extensible and allow new classes and attributes to be defined as required. Schemas have no counterpart in relational database terminology.
Directory information tree (DIT): The tree of objects stored within an LDAP directory.
Directory service agent (DSA): An LDAP directory server.
Distinguished name (DN): A unique name identifying an object within an LDAP directory.
Namespace: The collection of possible names for objects within a specific LDAP directory.
Organizational unit (OU): Containers within a namespace. OUs can contain leaf objects and other OUs. OUs are typically departments within a company, regions within a geographical area, and so on.
Domain: Top-level names within a namespace. Domains can contain OUs and leaf objects. Domains are typically names of things such as companies and countries.
Leaf object: Any LDAP object that is not a domain or an OU. Examples of leaf objects include individual users, computers, and printers.
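The object/attribute/class analogy above can be made concrete with a short Python sketch that models a directory entry as attribute name/value pairs in which an attribute may hold several values. The class and attribute names here are invented for illustration:

```python
# A directory entry modeled as a mapping of attribute name -> list of values.
# LDAP attributes can be multivalued, so every value is stored in a list.
entry = {
    "objectClass": ["user"],
    "cn": ["Jeff Smith"],
    "sn": ["Smith"],
    "mail": ["jeffs@example.com", "jsmith@example.com"],  # two e-mail addresses
}

# A class defines which attributes objects of that class must or may carry.
user_class = {"must": ["cn", "sn"], "may": ["mail", "telephoneNumber"]}

def conforms(entry, cls):
    """Check that the entry carries every mandatory attribute of its class."""
    return all(attr in entry for attr in cls["must"])

print(conforms(entry, user_class))  # True: cn and sn are both present
```

Where a relational table forces every row to have the same columns, the "may" list hints at how an LDAP class leaves optional attributes open for extension.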
LDAP operates as a client/server protocol in which an LDAP client connects to an LDAP server over TCP port 389, issues a query, receives a response, and disconnects from the server. LDAP clients can perform six operations on LDAP servers:
Binding (authenticating)
Searching (querying)
Adding an entry
Modifying an entry
Removing an entry
Comparing entries
LDAP servers can perform three additional operations among themselves:
Referral
Replication
Encryption
Examples
To illustrate LDAP naming conventions, consider the object representing user Jeff Smith within Active Directory (Microsoft's implementation of an LDAP server). Its distinguished name (DN) might be:
CN=Jeff Smith,OU=Users,DC=Microsoft,DC=com
You can read this, from the most specific component to the least, as
Common Name (CN) of user-type leaf object = Jeff Smith
Organizational unit (OU) = Users container
Domain = Microsoft
Domain = com
LDAP Uniform Resource Locators (URLs) are another naming convention that can be used to allow LDAP clients to access objects in an LDAP directory. An LDAP URL is formed by appending the distinguished name of the directory object to the fully qualified DNS domain name (FQDN) of the server containing the LDAP directory. The LDAP URL for referencing the above object would thus be
LDAP://Server7.Microsoft.com/CN=Jeff Smith/OU=Users/DC=Microsoft/DC=com
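As a rough sketch of how these naming conventions fit together, the following Python snippet splits a distinguished name into its components and rebuilds the corresponding LDAP URL. The server name Server7.Microsoft.com is the hypothetical host from the example above, and the naive comma split ignores escaped commas that real DNs can contain:

```python
def parse_dn(dn):
    """Split a distinguished name into (type, value) pairs.
    Simplified: does not handle commas escaped inside attribute values."""
    return [tuple(part.split("=", 1)) for part in dn.split(",")]

dn = "CN=Jeff Smith,OU=Users,DC=Microsoft,DC=com"
components = parse_dn(dn)
# [('CN', 'Jeff Smith'), ('OU', 'Users'), ('DC', 'Microsoft'), ('DC', 'com')]

# Build the LDAP URL by appending the DN components, joined with slashes,
# to the fully qualified domain name of the directory server.
url = "LDAP://Server7.Microsoft.com/" + "/".join(f"{t}={v}" for t, v in components)
print(url)
# LDAP://Server7.Microsoft.com/CN=Jeff Smith/OU=Users/DC=Microsoft/DC=com
```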
Marketplace
According to analysts, as of 2000, there were about 1.6 million LDAP servers deployed in enterprises worldwide. This figure is expected to grow to more than 4 million by 2003. The more popular LDAP products on the market include
Active Directory, Microsoft's LDAP-compliant server that is part of the Microsoft Windows 2000 operating system platform
iPlanet, formerly Netscape's Directory Server
eDirectory, Novell's latest version of Novell Directory Services (NDS)
eTrust, an X.500 directory with LDAP interface from Computer Associates
Oracle Internet Directory, an LDAP directory built upon the Oracle database platform
IDDS and Directory Portal, from Innosoft International and Sun Microsystems
DirX, from Siemens
Global Directory Server, from Critical Path
There is also an open source LDAP directory called OpenLDAP that is based on the University of Michigan's SLAPD server.
Notes
In Windows 2000, the standard C/C++ LDAP APIs of RFC 1823 are implemented in a library called Wldap32.dll. Applications can programmatically query Active Directory using either Active Directory Services Interface (ADSI), which is implemented as a Component Object Model (COM) interface that is layered on top of the LDAP library, or by employing the LDAP APIs directly.
See Also Active Directory ,directory ,Directory Access Protocol (DAP) ,distinguished name (DN) ,Uniform Resource Locator (URL) ,X.500
A high-capacity tape backup technology.
Overview
Linear Tape Open (LTO) was developed in 1998 by a consortium that includes Hewlett-Packard, IBM, and Seagate Technology. LTO is intended to be an open, multivendor tape architecture comparable in speed and capacity to the more proprietary SuperDLT architecture developed by Quantum Corporation. LTO is intended for heterogeneous enterprise networking environments and has been implemented in both stand-alone tape drives and robotic tape libraries.
Architecture
LTO has a native data capacity of up to 100 gigabytes (GB) per tape uncompressed or 200 GB compressed with a standard 2:1 compression ratio. Using a standard single-reel drive architecture, LTO supports data transfer rates of 10-20 megabytes (MB)/sec uncompressed or 20-40 MB/sec compressed. LTO can also be implemented as a dual-reel system that offers the advantage of fast restores-required data can usually be retrieved from tape in less than 10 seconds.
Marketplace
More than 30 different storage vendors have licensed the LTO tape format for implementation in their tape backup products. The first vendor to ship an LTO tape library was Exabyte Corporation with their 110L tape library, which has since been replaced by the 221L tape library.
See Also backup ,tape format
A device for connecting peripherals to computers when distances are longer than cabling normally supports.
Overview
Line boosters work by regenerating or boosting the signal strength so you can use longer cables than specifications usually allow. Because the signal strength in a serial or parallel transmission line decreases with the length of the cable being used, line boosters are sometimes needed to connect peripherals to computers when the distances involved exceed normal cabling capabilities.
Line booster. Using a line booster to connect a computer to a printer.
For example, a typical line booster used with a serial RS-232 interface can generally double the distance over which an attached peripheral can transmit, increasing the limit from 49 feet (15 meters) to 98 feet (30 meters). For parallel printers, line boosters can typically increase allowed distances from 19.5 feet (6 meters) to 40 feet (12 meters).
Notes
When deploying line boosters, be sure to install them at the midway point between the computer and the peripheral, not at one end of the connection.
See Also RS-232 ,serial transmission
Any algorithm for transforming binary information into discrete (digital) signals.
Overview
A line coding mechanism specifies a mathematical relationship between the binary information in a bitstream of data and the square-wave signal variations on the medium the signals are transmitted over. For copper wires, these signals are expressed as time-varying voltages, but on fiber-optic cabling, they are transmitted as discrete light pulses.
Types
Many types of line coding mechanisms are used in computer networking and telecommunication technologies. Some of these schemes are simple to encode and decode but unreliable in their transmission, and others are reliable even in the presence of external noise but require CPU-intensive processing to encode. Selecting the right line coding scheme for a particular technology requires a good understanding of information theory, electromagnetic theory, and engineering.
Encoding schemes are classified into two basic categories: digital signal codes and block codes. Digital signal codes specify the details of how the voltages (or light intensities) vary with time in copper (or fiber-optic) cabling. Common digital signal codes include
NRZ: This stands for Non-Return to Zero and simply means that the discrete electrical signals (or light pulses) use two different voltages (or intensities) but not zero voltage (or intensity). NRZ is employed in serial interfaces such as RS-232 and by many of the block coding schemes described below, including those for Integrated Services Digital Network (ISDN), Fast Ethernet, and Gigabit Ethernet (GbE). NRZ requires that signal pulses all have equal duration with no gaps between them and that the transmitter and receiver be synchronized in order to communicate with each other.
NRZI: This stands for Non-Return to Zero Inverted and uses a transition (either low-high or high-low) to indicate a binary 1, while a lack of transition indicates a zero. This scheme is more reliable than NRZ and requires less power to transmit.
Bipolar-AMI: In this scheme, binary 1s are represented by pulses that alternate between positive and negative polarity, and no pulse indicates a zero. Unlike NRZ and NRZI, there is no DC component to the signal. Bipolar-AMI is easy to synchronize between the transmitter and receiver, and a lost pulse can be easily recovered. On the other hand, it requires more power to transmit than NRZ/NRZI.
Pseudoternary: This is just the opposite of Bipolar-AMI and has the same advantages and disadvantages.
Manchester encoding: This scheme is the one used by standard 10 megabits per second (Mbps) Ethernet. In Manchester encoding, a low-high transition in the middle of a pulse interval represents binary 1 and a high-low transition represents binary zero. This kind of digital signal encoding scheme is known as a biphase scheme and has the advantage that no prior synchronization between transmitter and receiver is required, unlike NRZ/NRZI. Manchester encoding is an inefficient scheme that encodes one bit of information into two baud or code bits, but this "wastefulness" actually makes Manchester simple to encode and decode and thus easy to implement in electronic devices. (An encoding scheme that is more efficient in converting bits to baud is said to be "rich," but the downside is that the richer the encoding scheme, the more complex and processor-intensive the actual encoding mechanism becomes.)
Differential Manchester encoding: This is an offshoot of Manchester encoding in which either a high-low or low-high transition at the start of a pulse indicates binary 1 and no transition indicates binary 0.
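The transition rules described above can be sketched in a few lines of Python. This toy encoder follows the conventions in the text: Manchester emits a low-high pair of half-bit levels for binary 1 and high-low for binary 0, while NRZI toggles the line level on each 1. It is an illustration only, not production signal processing code:

```python
def manchester(bits):
    """Encode each bit as two half-bit levels: 1 -> low,high; 0 -> high,low."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

def nrzi(bits, level=0):
    """NRZI: a transition (level toggle) encodes 1; no transition encodes 0."""
    out = []
    for b in bits:
        if b:
            level ^= 1  # a transition marks a binary 1
        out.append(level)
    return out

print(manchester([1, 0, 1]))  # [0, 1, 1, 0, 0, 1] - two code bits per data bit
print(nrzi([1, 1, 0, 1]))     # [1, 0, 0, 1]
```

Note how the Manchester output is twice as long as its input, which is exactly the 100 percent "wastefulness" discussed above, while NRZI stays at one code bit per data bit.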
By contrast to the digital signal codes above, the following line coding schemes are sometimes called block codes. This is because their purpose is to transform a block of data (a collection of bits from a bitstream) into a block of electrical or light pulses. Some common examples of these encoding schemes include
4B/5B: This scheme is used by the 100BaseX form of Fast Ethernet. The name 4B/5B stands for "4 binary, 5 baud" and indicates in shorthand that four bits of data require five baud or code bits for transmission. The 4B/5B scheme is thus (5-4)/4 = 25% wasteful in terms of bandwidth compared to the (2-1)/1 = 100% wastefulness of Manchester encoding. Another way of describing it is to say that 4B/5B has a coding efficiency of 5/4 = 1.25 baud/bit. The 4B/5B scheme is also used by the Fiber Distributed Data Interface (FDDI) network architecture.
5B/6B: This scheme is used by 100VG-AnyLAN and has a coding efficiency of 1.2 baud/bit.
8B/6T: This scheme is used by the 100BaseT4 form of Fast Ethernet. The name 8B/6T stands for "8 binary, 6 ternary" and indicates that eight bits of data require six ternary or three-level signals for transmission. By contrast, 4B/5B uses two-level signals in which there are only two voltages, not three, for each signal pulse. The 8B/6T scheme has a coding efficiency of 0.75 baud/bit, which means that more than one bit is crammed into each signal pulse.
PAM 5x5: This is used by 100BaseT2 Fast Ethernet and encodes four bits of data into a two-dimensional 5x5 = 25 code point space. This scheme has a coding efficiency of 0.50 baud/bit.
8B/10B: This scheme is used by GbE and Fibre Channel and is patented by IBM. The coding efficiency is 1.25 baud/bit.
2B1Q: This stands for "2 binary, 1 quaternary" and is used by the U interface of the Basic Rate Interface ISDN (BRI-ISDN) flavor of ISDN. The U interface is located at the line termination point at the customer premises, where a two-wire metallic cable terminates with an RJ-11 jack. The 2B1Q encoding scheme is actually used only in the United States, as European ISDN uses a different scheme, 4 binary 3 ternary (4B3T), for BRI-ISDN.
B8ZS: This stands for Bipolar with 8 Zero Substitution and is used by the U interface of the Primary Rate Interface ISDN (PRI-ISDN) flavor of ISDN. The B8ZS encoding scheme is actually used only in the United States, as European ISDN uses a different scheme, High Density Bipolar 3 (HDB3), for PRI-ISDN.
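The efficiency figures quoted for these block codes all follow from a single ratio: code bits (baud) per group divided by data bits per group. A quick Python sketch using the numbers given in the text:

```python
# (scheme, data bits per group, code bits [baud] per group)
schemes = [
    ("Manchester", 1, 2),
    ("4B/5B",      4, 5),
    ("5B/6B",      5, 6),
    ("8B/10B",     8, 10),
]

results = {}
for name, data_bits, code_bits in schemes:
    efficiency = code_bits / data_bits              # baud per data bit
    overhead = (code_bits - data_bits) / data_bits  # extra signaling used
    results[name] = (efficiency, overhead)
    print(f"{name}: {efficiency:.2f} baud/bit, {overhead:.0%} overhead")
```

Running this reproduces the figures above: Manchester at 2.00 baud/bit (100 percent overhead), 4B/5B and 8B/10B at 1.25 baud/bit (25 percent), and 5B/6B at 1.20 baud/bit.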
Line coding. How the 2B1Q line coding scheme works.
Other common line coding schemes include Discrete Multitone (DMT), Carrierless Amplitude/Phase (CAP) modulation, and Quadrature Amplitude Modulation (QAM), all of which are used in various implementations of Asymmetric Digital Subscriber Line (ADSL) communications.
Examples
This example briefly considers one line coding scheme in more detail, namely BRI-ISDN's 2B1Q scheme.
In this scheme, a block of two binary bits can represent four different values: 00, 01, 10, and 11. These four values are mapped to one quaternary value, which is encoded using four different voltages. The first bit represents a positive or negative voltage, and the second bit represents either 1-volt or 3-volt line potential. The following table lists the four possible combinations. The result of using 2B1Q line coding for BRI-ISDN is that a single electrical pulse represents two binary bits instead of one binary bit. This effectively doubles the possible bandwidth of the communication channel, as shown in the illustration.
Binary Data Represented | Voltage of Electrical Pulse |
00 | -3 |
01 | -1 |
10 | +3 |
11 | +1 |
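The table above translates directly into code. This minimal sketch encodes a bit string two bits at a time into quaternary voltage pulses, showing how 2B1Q halves the number of pulses needed on the line:

```python
# 2B1Q: each pair of bits maps to one of four voltage levels (from the table).
LEVELS = {"00": -3, "01": -1, "10": +3, "11": +1}

def encode_2b1q(bits):
    """Encode a bit string of even length into quaternary voltage pulses."""
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

pulses = encode_2b1q("00011011")
print(pulses)       # [-3, -1, 3, 1]
print(len(pulses))  # 4 pulses carry 8 bits, so bandwidth effectively doubles
```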
See Also 100VG-AnyLAN ,Asymmetric Digital Subscriber Line (ADSL) ,Ethernet ,Fast Ethernet ,Fiber Distributed Data Interface (FDDI) ,Fibre Channel ,Gigabit Ethernet (GbE) ,Integrated Services Digital Network (ISDN) ,Manchester coding ,signal
Any device that is used to prevent undesirable electrical signals from damaging computer, networking, or telecommunication equipment and to guard against data loss due to electrical noise, sags, and surges.
Overview
Line conditioners contain circuitry that enables them to filter out noise caused by electromagnetic interference (EMI) and other sources. They typically contain isolation transformers that electrically isolate the circuitry from unwanted DC voltages, impedance-matching circuitry for reducing unwanted signal reflections, and surge suppressors to guard against high-voltage surges (6000 volts or more) caused by lightning strikes and power failures. Line conditioners can also correct sags (drops) in voltages caused by momentary brownouts, but they are not meant to replace or supply power during a power loss. They often include fault indicators and audible alarms.
Line conditioners also ensure that a transmission's signal parameters remain within specifications for the medium or interface being used, even over excessively long or noisy transmission lines. By maintaining signal integrity, line conditioners can thus allow communication devices to function at higher throughput rates than would normally be supported and ensure data integrity over noisy lines. Another common name for line conditioners is line shapers.
Uses
You typically use line conditioners in the following places:
In power supplies and in uninterruptible power supply (UPS) systems (power conditioners) to protect computers and networking devices from AC surges coming through power lines.
In local area networks (LANs) to protect hubs, routers, and other networking equipment from EMI and unwanted noise coming through the networking cables and to maintain the integrity of network data signals.
In offices to protect modems, telephones, fax machines, and other equipment by filtering out electrical surges in phone lines and to reduce noise so that the devices can operate at their nominal throughput speeds.
In wide area network (WAN) links for protecting Channel Service Unit/Data Service Units (CSU/DSUs) and access servers connected to Integrated Services Digital Network (ISDN), T1, and other copper telecommunication lines against EMI surges. T1 lines must have line conditioners at regular intervals to ensure the integrity of the signal transmitted over the line.
With high-speed analog modems to enable them to function at their maximum transmission speeds over noisy telephone lines in the local loop.
See Also electromagnetic interference (EMI) , Integrated Services Digital Network (ISDN) , modem, T-carrier, uninterruptible power supply (UPS)
Any device used to extend the distance over which a signal may be transmitted over a copper or fiber-optic cable.
Overview
A line driver is essentially a combination of a signal converter and an amplifier. The signal converter performs line conditioning, and the amplifier increases the signal strength.
Line drivers allow signals to be carried over a longer distance than the media or transmission interface normally allows. Line drivers are typically used to extend the maximum distance of serial communication protocols such as RS-232, V.35, X.21, and G.703 and can provide either synchronous or asynchronous communication in various vendor implementations.
Uses
A common type of line driver often used in mainframe computing environments is the RS-232 line driver. This device is used to extend the distance over which dumb terminals can be connected to mainframe computers located in different parts of a building or in different buildings on a campus. RS-232 line drivers support synchronous transmission of data over installed four-wire telephone cabling or fiber-optic cabling and are typically deployed on existing twisted-pair phone lines within a building or on custom-installed fiber-optic lines laid between buildings. These line drivers can extend the maximum distance of RS-232 serial transmission from 49 feet (15 meters) to several miles or more.
Another common use for line drivers is in Asymmetric Digital Subscriber Line (ADSL) and T-1 circuits between a customer and a telco central office (CO).
Line driver. Using a line driver to connect a remote terminal to a server.
Implementation
Line drivers are always used in pairs. One line driver is placed at the local site and is connected to the client or terminal, and the other is located at the remote site and is connected to the server or mainframe. For intrabuilding connections using line drivers, copper unshielded twisted-pair cabling or the installed telephone lines are typically used. For interbuilding connections, fiber-optic cabling is preferred.
Line drivers are available for almost every kind of communication mode, from 19.2-kilobit-per-second (Kbps) RS-232 serial line drivers operating over 3.5 miles (6 kilometers) to 2-megabit-per-second (Mbps) single-mode fiber-optic line drivers operating over 11 miles (18 kilometers). Line drivers can also be used to extend parallel transmission of data from about 20 feet (6 meters) to several miles.
When you use line drivers, be aware that your maximum bandwidth and transmission distance are inversely related-that is, the longer the line, the less bandwidth you have. Considerations for purchasing a line driver include whether it supports full-duplex or half-duplex communication, 2-wire or 4-wire cabling options, and what kinds of connectors are used. Line drivers for customer premises generally are cheaper and have a smaller footprint than those used by service providers such as telcos at the COs.
Notes
For connecting data terminal equipment (DTE) such as two computers, you should use a modem eliminator instead.
See Also Asymmetric Digital Subscriber Line (ADSL) ,modem eliminator ,RS-232 ,T-carrier
A device used to suppress noise in a transmission line or cable, caused by electromagnetic interference (EMI).
Overview
EMI is produced by nearby power lines, motors, generators, and other sources. EMI can introduce noise into a transmission line or cable that can degrade a signal's quality or even make communication impossible. By inserting a line filter at the appropriate point, you can suppress the noise and potentially improve transmission speeds.
Line filter. Using a line filter to screen out EMI.
Line filters are sometimes needed in homes or small businesses that use modems to connect to the Internet through a dial-up connection over the local loop. High-speed V.90 modems sometimes have difficulty attaining their top data transfer speeds because of ambient line noise caused by nearby sources of EMI. By placing a line filter at the customer premises between the modem and the Plain Old Telephone Service (POTS) connection, you can filter out noise, which could improve modem speeds.
Notes
Before installing a line filter, you should use a radio frequency (RF) spectral analyzer to determine the general frequency of the source of EMI so that you can choose an appropriate line filter. Line filters typically filter out one of the following frequency ranges: low frequency (LF), high frequency (HF), very high frequency (VHF), or ultra high frequency (UHF) signals.
See Also electromagnetic interference (EMI) , modem, Plain Old Telephone Service (POTS)
The UNIX daemon (background service) used for spooling print jobs.
Overview
Line Printer Daemon (LPD) is a daemon that resides on UNIX print servers. Its function is to simply wait and receive any print jobs submitted from clients using the Line Printer Remote (LPR) protocol. When LPD receives a job, it temporarily stores the job in the print queue, a file system subdirectory. There the job sits waiting to be serviced by LPD. When a print device becomes available, LPD retrieves the job from the queue and sends it to the device for printing.
UNIX sends print jobs to printers as raw data streams-for example, as a PostScript file-and does not use print drivers the way Microsoft Windows does. However, sometimes print job formatting is required-for example, when you want to send a plain text file to a PostScript printer. In this case, a printer filter is used to properly format the job so the output will not look garbled. Printer filters are specified in a UNIX system file called Printcap.
To view the status of jobs currently waiting in the queue, use the Line Printer Queue (LPQ) utility.
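The queue-then-spool behavior described above can be sketched as a simple FIFO in Python. This is a toy model of the daemon's flow for illustration, not the actual LPD implementation:

```python
from collections import deque

print_queue = deque()  # the spool directory, modeled as a first-in first-out queue

def lpd_receive(job):
    """LPD receives a job from an LPR client and stores it in the queue."""
    print_queue.append(job)

def lpd_spool():
    """When the print device becomes available, send the oldest job to it."""
    if print_queue:
        job = print_queue.popleft()
        print(f"printing {job}")
        return job
    return None  # queue is empty; nothing to spool

lpd_receive("readme.txt")
lpd_receive("report.ps")
lpd_spool()  # jobs leave the queue in arrival order, so readme.txt prints first
```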
Implementation
Both Windows NT and Windows 2000 support UNIX LPD printing, although it is implemented differently on each platform.
Windows NT Server has an optional LPD service that you configure by installing the Microsoft TCP/IP Printing service. This service enables computers running UNIX to send print jobs to the Windows NT server by using the standard UNIX LPR command. Windows NT servers can also use LPR to submit print jobs to either a Windows NT server running LPD or a UNIX LPD print server. The Microsoft TCP/IP Printing service thus provides printing interoperability between the UNIX and Windows platforms for heterogeneous network environments.
Line Printer Daemon (LPD). How UNIX-to-Windows and Windows-to-UNIX printing work.
To support UNIX printing services, Windows 2000 Server and Windows .NET Server employ Microsoft Print Services for UNIX, which provides both LPD and LPR services through two services:
LPDSVC: Runs on the Windows 2000 and Windows .NET Server print servers and receives print jobs from native LPR utilities running on UNIX workstations
LPRMON: Runs on the Windows 2000 and Windows .NET Server print servers and forwards print jobs to native LPD processes running on UNIX computers with attached printers
The startup configuration for the LPD service on Windows 2000 and Windows .NET Server is set to Manual by default and should be changed to Automatic if this feature is used.
See Also daemon , printing terminology, UNIX
A UNIX command used for querying the status of a print queue.
Overview
On the UNIX platform, print jobs are submitted to print servers using the Line Printer Remote (LPR) command, which sends them to the print queue to wait for spooling to an available print device. UNIX print servers employ a daemon called Line Printer Daemon (LPD), which waits in the background for print jobs to be submitted to the queue from LPR clients. It is often useful to be able to examine the queue to determine which jobs are still waiting for spooling-the LPQ command does just that. You can use the LPQ command on a UNIX platform to determine
What jobs are waiting for spooling in the print queue
Who submitted these jobs to the print server
Which print device each job is destined for
Examples
The LPQ command displays a list of files on the server that are waiting to be printed, along with associated information. For example, the lpq -S Foxhound -P Laser12 command might be used to display the status of the print queue called Laser12 on a UNIX print server named Foxhound. Alternatively, Foxhound might be a Microsoft Windows NT server with the Microsoft TCP/IP Printing service installed, or a Windows 2000 server with Microsoft Print Services for UNIX installed. Both of these Windows platforms support UNIX LPD printing as an optional feature.
See Also daemon , printing terminology, UNIX
A UNIX network command used to submit print jobs to print servers.
Overview
Line Printer Remote (LPR) is the standard protocol for submitting print jobs to print servers. On the UNIX platform, print servers use the Line Printer Daemon (LPD), a service that runs in the background waiting for print jobs to be received. When LPD receives a job, it temporarily places it in a queue. When a print device becomes available, the job is spooled or moved from the queue to the print device for printing. Note that files to be printed using LPR must be either text files or files specially formatted for the printer being used (for example, a PostScript file for a PostScript printer).
LPR is a Transmission Control Protocol/Internet Protocol (TCP/IP) protocol defined in RFC 1179 and is implemented on all UNIX platforms and on Linux. Microsoft Windows NT and Windows 2000 can also use LPR by installing optional components to support UNIX printing. In UNIX implementations, outbound LPR connections use only TCP source ports 721 through 731 (the LPD service itself listens on TCP port 515). On Windows 2000 and on Windows NT 4 Service Pack 3 or later, however, LPR may use any source port in the range 512 through 1023.
Examples
To print the file Readme.txt using the print queue called Laser12 on an LPD print server named Lazyboy, you would use the command lpr -S Lazyboy -P Laser12 readme.txt.
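Behind a command like the one above, the client speaks the simple byte-level protocol defined in RFC 1179: a receive-job command for the queue, then a control file describing the job, then the data file, each terminated as the RFC prescribes. The following Python sketch builds (but does not send) such an exchange; the queue, user, and host names are hypothetical, though the command codes and layout follow RFC 1179.

```python
# Sketch of the client-side messages an LPR submission sends to LPD on TCP 515.
def build_lpr_job(queue, user, host, filename, data):
    """Return the control file text and the ordered messages for one print job."""
    job_id = "001"                      # three-digit job number, per RFC 1179
    cf_name = f"cfA{job_id}{host}"      # control file name
    df_name = f"dfA{job_id}{host}"      # data file name
    control = (f"H{host}\n"             # H = originating host
               f"P{user}\n"             # P = user identification
               f"f{df_name}\n"          # f = print file as formatted text
               f"N{filename}\n")        # N = name of source file
    msgs = [
        b"\x02" + queue.encode() + b"\n",                           # receive job
        b"\x02" + str(len(control)).encode() + b" " + cf_name.encode() + b"\n",
        control.encode() + b"\x00",                                 # control file + NUL
        b"\x03" + str(len(data)).encode() + b" " + df_name.encode() + b"\n",
        data + b"\x00",                                             # data file + NUL
    ]
    return control, msgs

control, msgs = build_lpr_job("Laser12", "alice", "lazyboy", "readme.txt", b"hello")
```

In a real client, each message would be written to the socket and a single zero byte read back as an acknowledgment before the next message is sent.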
See Also daemon , printing terminology, UNIX
Any device that allows multiple devices to share the same communications line.
Line sharer. Three varieties of line sharer.
Overview
Many different types of devices function as line sharers. These vary from those used in mainframe computing environments to customer premises equipment (CPE) used in telephony and telecommunications. Some common examples of line sharers include the following:
Host line sharers: Allow multiple terminals or other data terminal equipment (DTE) to be connected to an asynchronous mainframe host over a single, shared serial transmission line using V.35 adapters and cable. You can use host line sharers primarily to broadcast data to the DTEs. Data transmitted by the DTEs can also be buffered in the line sharer until the line is free and the data can be sent to the host. Host line sharers typically use either RS-232 or V.35 serial interface connections.
PSTN line sharers: Allow multiple Public Switched Telephone Network (PSTN) devices, such as phones, fax machines, and modems, to share one or more phone lines by using RJ11 connectors and adapters. You can often program PSTN line sharers to switch between devices based on calling tones, so that you can remotely control data collection equipment over modems in industrial environments. A small line sharer might let you connect four phones or other devices to two shared phone lines for a Small Office/Home Office (SOHO). Other line sharers connect large numbers of phones to a relatively small number of phone lines on a first-come, first-served basis in modem-pooling environments.
Internet line sharers: Typically stand-alone devices that together with RS-232 modem adapters allow several PCs to share one modem for dial-up connection to the Internet. Internet line sharers typically use one Internet Protocol (IP) address to allow multiple users to browse the Internet simultaneously.
See Also data terminal equipment (DTE) ,Internet access ,Public Switched Telephone Network (PSTN) ,RS-232 ,V.35
An emerging telco technology for speeding up Digital Subscriber Line (DSL) provisioning.
Overview
In the aftermath of the Telecommunications Act of 1996, new companies called Competitive Local Exchange Carriers (CLECs) have entered the marketplace. These CLECs frequently focus on provisioning DSL services to residential customers requiring high speed Internet access and to business customers as an alternative wide area network (WAN) technology to traditional (and expensive) leased lines. Because Incumbent Local Exchange Carriers (ILECs) such as Regional Bell Operating Companies (RBOCs) actually own the local loop wiring that enters homes and offices, however, CLECs have been forced to either resell DSL services from ILECs (the ILEC provisions the line for DSL) or request that the ILEC lay down an additional separate line to provision DSL to the customer.
Line sharing is an emerging approach that speeds up the provisioning process without incurring the cost of additional lines or leasing DSL from ILECs. In a line sharing scenario, a splitter is installed at the customer premises end of the existing local loop connection and at the telco central office (CO), separating DSL data transmission from voice traffic. In other words, both voice and DSL are carried on the same existing telephone line, but each is managed by a different carrier: DSL service by the CLEC and voice by the ILEC. No disruption in phone service occurs, and the cost to the ILEC is negligible, while the CLEC saves costs that can be passed on to the consumers.
Prospects
Line sharing is expected to breathe new life into the beleaguered CLEC marketplace and to allow for more aggressive rollouts of DSL to bring broadband Internet access to customers who need it. Several CLECs and ILECs have already formed agreements to implement this technology, and state regulators are holding hearings to set appropriate pricing policies.
See Also Competitive Local Exchange Carrier (CLEC) , Digital Subscriber Line (DSL) , Regional Bell Operating Company (RBOC), wide area network (WAN)
The data-link layer protocol for D channel communications in Integrated Services Digital Network (ISDN).
Overview
ISDN uses two separate channels for communication: a D channel for signaling and control and one or more B channels for data transfer. As shown in the table, these different channels use different protocols at the data-link and network layers of the Open Systems Interconnection (OSI) reference model. Link Access Protocol, D-channel (LAPD) is the data-link protocol for the D channel and is defined by the Q.921 specification from the International Telecommunication Union (ITU).
OSI Layer | B Channel | D Channel |
Physical layer | ITU's I.430 (Basic Rate Interface, BRI) or I.431 (Primary Rate Interface, PRI) | ITU's I.430 (BRI) or I.431 (PRI) |
Data-link layer | High-level Data Link Control (HDLC) or Point-to-Point Protocol (PPP) | LAPD (Q.921) |
Network layer | Internet Protocol/ Internetwork Packet Exchange (IP/IPX) | Q.931 |
Architecture
LAPD provides full-duplex transmission over synchronous serial links and supports both point-to-point and point-to-multipoint communications links. LAPD also supports multiplexing of multiple logical channels over a single physical D channel.
LAPD frames always begin with a standard flag (binary 01111110) to identify the start of the frame. This is followed by a 2-byte address that identifies the ISDN device transmitting the frame, a 1-byte or 2-byte control field, variable-length octet-aligned data payload, and a 2-byte frame check sequence (FCS).
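The 2-byte address field described above packs a 6-bit Service Access Point Identifier (SAPI), a command/response bit, and a 7-bit Terminal Endpoint Identifier (TEI), with low-order "address extension" bits marking whether another address octet follows. The following sketch (not a full Q.921 implementation) shows how those fields can be unpacked; the sample octet values are invented for illustration.

```python
# Unpack the two address octets of a LAPD frame into SAPI, C/R, and TEI.
def parse_lapd_address(octet1: int, octet2: int):
    sapi = octet1 >> 2          # Service Access Point Identifier (6 bits)
    cr = (octet1 >> 1) & 1      # command/response bit
    tei = octet2 >> 1           # Terminal Endpoint Identifier (7 bits)
    ea1 = octet1 & 1            # extension bit: 0 = another address octet follows
    ea2 = octet2 & 1            # extension bit: 1 = final address octet
    assert ea1 == 0 and ea2 == 1, "malformed LAPD address field"
    return sapi, cr, tei

# Example: SAPI 0 (call control signaling), TEI 64 -> octets 0x00, 0x81.
sapi, cr, tei = parse_lapd_address(0x00, 0x81)
```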
See Also data-link layer ,D channel ,full-duplex ,Integrated Services Digital Network (ISDN) ,multiplexing ,Open Systems Interconnection (OSI) reference model ,serial transmission ,synchronous transmission
Any technology for combining multiple physical data links into a single logical link.
Overview
Link aggregation occurs at Layer 2 (the data-link layer) of the Open Systems Interconnection (OSI) reference model. The basic idea is to combine two or more data-link connections into a single fat logical connection. An aggregated link has several advantages over a single physical link:
Scalability: If demand for bandwidth increases, another physical link can simply be added to the logical bundle. Likewise, if demand for bandwidth decreases, a physical link can be deallocated from the aggregation and assigned elsewhere as needed.
Fault tolerance: If one physical link goes down in a logical bundle, there is no interruption in transmission, just a reduction in allowed bandwidth. Link aggregation is also a useful way of implementing redundancy on point-to-point connections.
Load balancing: Links from several sources can be load balanced by combining them into a single logical pipe.
Link aggregation is also known sometimes as port aggregation or trunking in vendor literature.
Uses
Link aggregation has two main uses in the enterprise:
Increasing throughput of switch-switch connections in collapsed backbones.
Combining data streams from multiple network interface cards (NICs) on a multihomed server into a single logical network connection.
Link aggregation. Two uses for link aggregation in the enterprise.
Implementation
Early implementations of link (or port) aggregation were vendor-specific technologies that required solutions to be built and deployed using technologies from a single source. Examples of proprietary Layer 2 link aggregation technologies include
Cisco Systems' proprietary Fast EtherChannel (FEC) and Gigabit EtherChannel (GEC) technologies, which enable up to four full-duplex Fast Ethernet or Gigabit Ethernet (GbE) links to be combined into a single logical link having throughputs of 800 megabits per second (Mbps) or 8 gigabits per second (Gbps), respectively. FEC and GEC employ Cisco's proprietary Port Aggregation Protocol (PAgP) to automate the process of discovering and configuring link aggregation.
Cisco's proprietary Inter-Switch Link (ISL) trunking technology, which allows link aggregation across Cisco Catalyst switches.
Adaptec's proprietary Duralink port aggregation technology.
Prospects
Proprietary link aggregation technologies and interoperability problems are soon to be a thing of the past as vendors implement the new Institute of Electrical and Electronics Engineers (IEEE) 802.3ad standard, which defines a vendor-neutral approach to port aggregation. As more vendors implement this new standard in switching and NIC technologies, enterprise network architects will be able to aggregate links across equipment from different vendors.
See Also 802.3ad ,data-link layer ,Ethernet switch ,Fast Ethernet ,Gigabit Ethernet (GbE)
The portion of Point-to-Point Protocol (PPP) responsible for link management.
Overview
The Link Control Protocol (LCP) operates at the data-link layer (Layer 2) of the Open Systems Interconnection (OSI) reference model and is responsible for opening and closing PPP connections and negotiating their configuration. During session establishment, LCP establishes the link, configures PPP options, and tests the quality of the line connection between the PPP client and PPP server.
Architecture
During the negotiation phase, LCP handles four main functions: authentication, callback, compression, and multilink establishment. After a PPP link has been established at the physical layer and a modulation method selected, the client must then be authenticated. Different authentication protocols are supported for satisfying the security needs of different networking environments. LCP can negotiate the following authentication protocols:
Password Authentication Protocol (PAP): Transmits passwords in clear text using a two-way handshake
Shiva PAP (SPAP): A vendor-specific implementation of PAP
Challenge Handshake Authentication Protocol (CHAP): Passes a password hash using a three-way handshake and is more secure than PAP or SPAP
Microsoft Challenge Handshake Authentication Protocol (MS-CHAP): Microsoft Corporation's implementation of CHAP, which is more secure than regular CHAP
Once an authentication method has been negotiated that is understood by both the PPP client and server and the client has been successfully authenticated by the server, the next phase is callback. This phase is optional and allows the client to request that the server hang up and call it back to ensure greater security or to reverse billing charges. After callback is complete, the next phase is compression. This phase is also optional and a variety of different compression algorithms are supported, including Predictor, Stacker, Microsoft Point-to-Point Compression (MPPC), and Transmission Control Protocol (TCP) header compression. The final phase of LCP is aggregation of multiple PPP links using Multilink PPP (MPPP). This last phase is also optional.
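Of the authentication protocols LCP can negotiate, CHAP's three-way handshake is easy to sketch: per RFC 1994, the server sends a random challenge, the client replies with MD5(identifier + secret + challenge), and the server recomputes the hash over its own copy of the shared secret to verify. The secret and identifier values below are made up for illustration.

```python
import hashlib
import os

# CHAP response computation per RFC 1994: MD5 over id byte, secret, challenge.
def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

challenge = os.urandom(16)                                   # server: issue challenge
response = chap_response(1, b"shared-secret", challenge)     # client: compute response
assert response == chap_response(1, b"shared-secret", challenge)  # server: verify
```

Because only the hash crosses the wire, the password itself is never transmitted, which is why CHAP is considered more secure than PAP's clear-text exchange.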
See Also Challenge Handshake Authentication Protocol (CHAP) ,Multilink Point-to-Point Protocol (MPPP) ,Password Authentication Protocol (PAP) ,Point-to-Point Protocol (PPP)
An algorithm for dynamic routing that was designed to address scalability limitations of distance-vector routing protocols.
Overview
The first dynamic routing protocols were based on the distance-vector routing algorithm, which required that routers periodically advertise their routing tables to neighboring routers. Routing protocols such as Routing Information Protocol (RIP) that are based on the distance-vector algorithm suffer from two main problems: large amounts of routing updates, which consume valuable network traffic, and slow convergence, which results in an inability to scale to large internetworks.
As a result of these problems, the link state routing algorithm was developed. Routing protocols that use link state include
Open Shortest Path First (OSPF): This is the main link state protocol in use today and can be used to build large IP internetworks.
NetWare Link Services Protocol (NLSP): A legacy link state protocol for Internetwork Packet Exchange (IPX) networks that is now little used.
Intermediate System to Intermediate System (IS-IS): An Open Systems Interconnection (OSI) link state protocol used mainly in large service provider backbones.
Architecture
Link state routers advertise changes in network topology to other routers using link state packets (LSPs). The router advertises these LSPs only when changes occur in the network, for example, when a router goes down or a new router is brought up. When a network change occurs, LSPs are sent to all routers everywhere on the network, not just to neighboring routers as in distance-vector routing.
Using the LSPs a router receives from other routers on the network, link state routers use the Shortest Path First (SPF) algorithm to construct a logical tree that represents the topology of the entire network based on the local router as the root of the tree. The router then uses this tree to calculate the optimal paths to different parts of the network and populates routes in its internal routing table with this information. Every link state router on a network thus knows the exact router topology of the entire network.
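The SPF calculation described above is Dijkstra's shortest-path algorithm run over the topology assembled from received LSPs, with the local router as the root. The following compact sketch computes lowest-cost paths over a made-up four-router topology; router names and link costs are invented for illustration.

```python
import heapq

# Shortest Path First: lowest-cost distance from root to every reachable router.
def spf(graph, root):
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

topology = {                              # adjacency list built from received LSPs
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
assert spf(topology, "A") == {"A": 0, "B": 1, "C": 3, "D": 4}
```

A real router would then take the next hop along each computed path and install it in the routing table.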
Link State | Distance Vector |
Fast convergence | Slow convergence |
Highly scalable | Small to mid-sized networks only |
Broadcasts link status information | Broadcasts entire routing table |
Sends updates only when a change occurs to the network | Sends updates periodically |
Views network from perspective of itself | Views network from perspective of neighbors |
Calculates optimal routes based on least cost factors, including segment speeds, traffic flow patterns, and so on | Calculates optimal routes based on lowest metric (typically hop count) |
Advantages and Disadvantages
Link state routing protocols have several advantages over distance-vector protocols:
Faster convergence
Better scalability
Less traffic due to router updates
On the other hand, there are some issues with link state protocols:
They are more processor-intensive and memory-intensive in their operation; hence, routers using link state are usually more expensive.
They are more complex to configure than distance-vector routing protocols.
When several link state routers start up, there can be a large amount of network traffic associated with router discovery, a problem called link state flooding. This can temporarily saturate the network and make other communications impossible.
If LSPs are not synchronized properly, it is possible for routers to acquire wrong or incomplete link state information, causing black holes and other routing problems. Most link state routers now use time stamps and sequence numbers to ensure that convergence occurs properly.
Notes
Link state routing protocols are classless routing protocols, a feature that enhances their scalability by allowing discontiguous subnets and variable length subnet masks (VLSMs) to be employed to reduce the amount of interrouter traffic propagated on the network.
See Also distance vector routing algorithm ,dynamic routing protocol ,NetWare protocols ,Open Shortest Path First (OSPF) ,Routing Information Protocol (RIP) ,routing protocol
An open-source UNIX-like operating system.
Overview
Linux is a Portable Operating System Interface for UNIX (POSIX)-compliant operating system that is freely available from numerous sites on the Internet. Linux is available on a large number of hardware platforms including Intel, Alpha, MIPS, PowerPC, Sparc, and even IBM S/390 mainframe. Linux provides a multiuser multitasking operating system environment that supports symmetric multiprocessing (SMP) and clustering. The platform supports a wide range of file systems and standard architecture features such as interprocess communications (IPCs), remote procedures calls (RPCs), and dynamic and shared libraries.
History
Linux was developed in 1991 by Linus Torvalds, a student at the University of Helsinki in Finland. The platform's name is a contraction of Linus and UNIX and reflects the similarity between Linux and the UNIX operating system. Unlike UNIX, however, Linux was released under the GNU General Public License (GPL) as open source software whose underlying code base is shared openly. Under this license, others can share in the Linux development process, and a whole community has arisen to refine and evolve Linux into a better and more powerful platform. Development of the standard Linux kernel tree, however, is still controlled by Torvalds himself.
The first distribution of Linux to appear was Slackware, which became available in 1993. A distribution is typically a package that contains the Linux kernel and supporting modules; the GNU C/C++ compiler; XFree86 or some other free implementation of the X Window System graphical interface; the Apache Web server; various applications and tools; an installer; and source code for everything. Version 1 of the Linux kernel appeared in 1994. Since that time, the kernel has gone through several revisions and now stands at version 2.4.
Uses
Linux has found a niche in many companies for specific server-based needs, including Web servers, mail gateways, file servers, and Domain Name System (DNS) name servers. Although the fact that Linux is free would seem to make it an appealing solution for companies wanting to build out their server infrastructure while saving money, many companies have been reluctant until recently to embrace Linux in their mission- critical operations. Reasons for this reluctance have included the lack of a single company responsible for the platform's development and support, as well as the fact that trained Linux professionals are in high demand and are thus difficult and expensive to hire. Recently, however, Linux has consolidated its toehold in large companies through developments such as the appearance of easy-to-install Linux distributions such as Red Hat and Caldera, and through system management software that allows large numbers of Linux servers to be remotely managed from a central console. For example, companies such as Google.com have successfully deployed up to 5000 Linux servers to provide Internet services. Although the low cost of Linux distributions would seem an inducement to deploying Linux in a corporate environment, most IT (information technology) departments carefully weigh this against the cost of supporting and administering a Linux environment before choosing to deploy Linux servers.
Linux on the desktop is less visible in the enterprise than Linux on the server end. Reasons for this include the following:
Device drivers: A lack of drivers for desktop hardware, although improved universal serial bus (USB) support is provided in the current 2.4 release of the Linux kernel.
Applications: Lack of support for running widely used business applications such as Microsoft Office, although Win4Lin from NeTraverse allows you to run a Microsoft Windows 98 emulator on a Linux box, and the StarOffice suite from Sun Microsystems is also available for Linux.
Skills: A shortage of implementers and administrators skilled in the Linux platform.
Vendor support: Limited support from commercial vendors, although Red Hat has set an example of how a vendor can base a business on services and support of the Linux platform.
Upgrade cycles: The slow desktop upgrade cycle of large IT departments.
Some analysts estimate that Linux had less than 2 percent of the client operating system market in 2000, though it has passed the Apple Macintosh platform and now holds second place behind the dominant desktop platform, Microsoft Windows. Corel Corporation's departure from the Linux desktop market was probably another setback, and the existence of two competing Linux desktop environments, K Desktop Environment (KDE) and GNU Network Object Model Environment (GNOME), may be yet another.
Another tree of Linux platform development targets the embedded device space, where Linux's small footprint is an advantage. For example, Red Hat has developed a set of Linux APIs called EL/IX that can run Linux and applications in as little as 32 kilobytes (KB) of memory.
Architecture
The core of the Linux operating system is the kernel. Linux uses a monolithic kernel that supports dynamically loadable modules, which enables custom kernels to be built using subsystem modules, device drivers, protocol stacks, and so on. The Linux kernel follows two different development trees:
Odd-numbered versions: Kernel builds having odd numbers such as 2.1, 2.3, and so on are development (experimental) builds. These are used for trying out new features of Linux and are generally not recommended for use in production systems.
Even-numbered versions: These are builds intended for production systems and are tested until they are believed to be stable. Recent examples include versions 2.2 and 2.4.
Features of the most recent Linux kernel (version 2.4) include
Improved symmetric multiprocessing (SMP) that supports up to 32 processors on Intel x86 servers and scales better than previous kernel versions
Up to 64 gigabytes (GB) of addressable physical memory
Support for large file systems up to terabytes of stored data
Kernel-level Web daemon called kHTTPd that provides high-performance delivery of static Web pages
Support for Intel's 64-bit Itanium (IA-64) processor architecture
Support for Network File System (NFS) version 3
Logical Volume Manager (LVM) that improves how disk volumes are created and managed
Improved USB support
Marketplace
Although the core Linux platform is freely available for download from various places on the Internet, Linux is also available as commercial Linux distributions from a number of companies, including Red Hat, Caldera International, SuSE, Turbolinux, Debian, and many others. Some distributions include special tools or other "value-adds" to meet specific needs of enterprise users. For example, Turbolinux includes its TurboCluster tools for building clustering solutions, Red Hat Linux includes Red Hat PowerTools to enable administrators to remotely manage servers, and VA Linux Systems provides vendor-neutral turnkey Linux systems. Both the Red Hat and Helix Code distributions have proved popular for deploying Linux on the desktop.
A thin-client version of Linux is also available from Neoware Systems and is designed for use in devices such as firewall appliances, smart card readers, cash registers, and so on. This platform is called NeoLinux and is a customized version of Red Hat Linux. Korea's LG Electronics has a Linux tablet PC called Digital iPad, and Linux Personal Digital Assistants (PDAs) are also available. Linux also forms the basis for a number of server appliance platforms, such as the 1U rack- mountable Qube2 and RaQ 4r server appliances from Cobalt Networks. Preconfigured Linux servers are available from Penguin Computing and also from large vendors such as Compaq Computer Corporation and Dell Computer Corporation.
Linux on the mainframe is one new development that has given Linux greater credibility as an enterprise- class operating system. IBM's S/390 platform can run thousands of Linux virtual machines (VMs) on a single mainframe and provides significant cost savings over running Linux on thousands of separate PCs instead. A side effect of this is that stodgy old mainframes are now viewed as a cutting-edge platform for application development, giving a boost to the badly sagging mainframe market.
The most popular server-side applications for the Linux platform are probably the Apache Web server (still used by more than half of all public Web sites) and Samba, a program that enables a Linux server to provide file and print services to other platforms such as Windows and Apple Macintosh. Popular open-source network monitoring tools for Linux include NetSaint and Multi-Router Traffic Grapher (MRTG). Caldera International has released a systems management platform called Volution that allows secure remote administration of thousands of Linux servers and desktops. Volution employs a standard Web browser interface; uses a Lightweight Directory Access Protocol (LDAP)-compliant directory service for storing system object information; provides remote installation, inventory, and monitoring services; and supports policy-based and profile-based management similar to Microsoft's Active Directory directory service and Novell's ZENworks platforms.
Prospects
Some of the largest players in the enterprise computing market have embraced Linux to various degrees. Examples include IBM, Oracle Corporation, Sun Microsystems, Hewlett-Packard, and others. IBM has ported its DB2 Universal Database, WebSphere Application Server e-commerce platform, and Lotus Notes groupware to Linux. Hewlett-Packard has done many Linux deployments, including running SAP AG's Enterprise Resource Planning (ERP) software on Linux. Compaq, Dell, and Hewlett-Packard all offer pre-installed Linux systems for customers who request them. And Oracle has ported its Oracle 8i database platform to Linux.
IBM's commitment to supporting the Linux platform is probably the biggest development in the Linux story. IBM has demonstrated a 512-node cluster running Linux at the Albuquerque High Performance Computing Center in New Mexico and has built in support for Linux using the SuSE distribution across its full range of computing platforms, from PC servers to its S/390 and zSeries mainframes.
Linux's most significant impact on the enterprise networking scene will probably be the displacement of most versions of the UNIX operating system by Linux. Industry analysts indicate that Linux is already outselling all versions of UNIX combined. Although Windows has firmly established itself as the market leader in application server platforms and is indisputably king of the desktop, Linux is likely to establish itself and remain the number two player in the server arena due to its entrenched use in specific niches of enterprise networking such as DNS name servers, public Web servers, and a few other popular applications.
Notes
An application that runs on one Linux distribution may not automatically run on another, as there are small differences between where different distributions store key system files and other issues. The Linux Development Platform Specification (LDPS) developed by the Linux Standard Base (LSB) project is expected to reduce these interoperability problems by ensuring that an application developed for one distribution runs on others.
The U.S. government, through the National Security Agency (NSA), is also developing a hardened version of Linux called Security-Enhanced Linux. This version will include mandatory access controls for type enforcement and role-based access.
For More Information
A good general source of Linux information is Linux Online at www.linux.org. Advanced Linux users can visit Linux Journal at www.linuxjournal.com. A popular event for Linux users is LinuxWorld, found at www.linuxworldexpo.com.
See Also Apache ,GNU General Public License (GPL) ,GNU Object Modeling Environment (GNOME) ,K Desktop Environment (KDE) ,open source ,POSIX ,UNIX
A program that allows people to subscribe to an e-mail mailing list.
Overview
Organizations typically set up list servers to facilitate activities such as discussing marketing issues, asking questions of and receiving answers from technical support, announcing new products and services, and disseminating tips and tricks for using software. To use a list, users must first subscribe to the list using a special e-mail command, although nowadays many lists also have Web interfaces for doing things such as subscribing, unsubscribing, posting messages, and receiving help. Once a user successfully subscribes to the list, the user generally receives a copy of every message posted to the list, and every message the user posts is distributed to all members of the list. Other common list options include receiving daily collections of messages compacted into a single message, accessing archives of old messages, and so on.
Marketplace
Two common list server programs are Listserv and Majordomo. Of the two, Listserv is older; it was originally developed for the BITNET/EARN network.
Notes
Do not subscribe to too many mailing lists, because the e-mail traffic might clog up your mailbox.
For More Information
Search CataList, the official catalog of LISTSERV lists at www.lsoft.com/catalist.html.
Stands for logical link control layer, a sublayer of the data-link layer in the Open Systems Interconnection (OSI) reference model.
See Also logical link control (LLC) layer
Stands for Local Multipoint Distribution Service, an emerging broadband wireless service with speeds up to 155 megabits per second (Mbps).
See Also Local Multipoint Distribution Service (LMDS)
A file used to resolve NetBIOS computer names into Internet Protocol (IP) addresses.
Overview
Lmhosts is a text file that provides a local method for resolving the NetBIOS names of remote hosts into their respective IP addresses on a Transmission Control Protocol/Internet Protocol (TCP/IP) network. Using lmhosts files is an alternative to using Windows Internet Name Service (WINS) servers for name resolution on Microsoft Windows-based networks. Using a WINS server is generally preferable because it reduces administrative overhead.
The lmhosts file contains mappings for hosts on remote networks only. Mappings are not required for hosts on local networks because these can be resolved using broadcasts. If you are using lmhosts files to resolve NetBIOS names on a network, each computer on the network should have an lmhosts file.
Examples
Each line in the lmhosts file contains the IP address of a NetBIOS computer on the network, followed by the NetBIOS name of the computer. The computer name can be followed by optional prefixes that identify domains and domain controllers and allow entries to be loaded into the NetBIOS name cache at startup. Comments are prefixed with the pound sign (#). Here is an example taken from the sample lmhosts file included with Windows 98:
102.54.94.97     rhino       #PRE #DOM:networking   #net group's DC
102.54.94.123    popular     #PRE                   #source server
102.54.94.117    localsrv    #PRE                   #needed for the include
You can find the lmhosts file in the %SystemRoot%\system32\drivers\etc directory in Windows NT, Windows 2000, Windows XP, and Windows .NET Server, and in the \Windows directory in Windows 95 and Windows 98.
Notes
Place the NetBIOS names that need to be resolved most frequently near the top of the lmhosts file because the file is parsed linearly from the beginning.
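The linear, first-match parse described above can be sketched as follows. This simplified resolver handles only basic entries, dropping everything from the first # on each line (which discards comments along with keywords such as #PRE); a real implementation would also honor #DOM:, #INCLUDE, and quoted names. The sample entries mirror the example earlier in this entry.

```python
# Scan lmhosts text top to bottom and return the first IP mapped to a name.
def resolve_netbios(lmhosts_text, name):
    for line in lmhosts_text.splitlines():
        entry = line.split("#", 1)[0].split()   # keep only "IP name", drop keywords/comments
        if len(entry) >= 2 and entry[1].lower() == name.lower():
            return entry[0]
    return None                                  # not found; fall back to other methods

sample = """102.54.94.97    rhino     #PRE #DOM:networking  #net group's DC
102.54.94.123   popular   #PRE                  #source server
102.54.94.117   localsrv  #PRE                  #needed for the include"""
assert resolve_netbios(sample, "POPULAR") == "102.54.94.123"
```

Because the file is scanned from the top, placing frequently resolved names first shortens lookups, as the note above recommends.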
See Also hosts file ,Networks file ,protocol file ,services file
The process of distributing client connections across multiple servers.
Overview
Load balancing is a technique used to increase the reliability of server-based computing. In a typical load-balancing scenario, incoming client requests are redirected to different servers on a server farm. The way in which redirection occurs varies with product and implementation, as in the following examples:
Round robin: This method passes each incoming request to the next server in a series. When the final server is reached, the next request is passed to the first server of the series. Each server in a round-robin load balancing scenario receives an equal number of client connections averaged over time.
Weighted: This form of load balancing employs a cost metric to determine how many client connections to direct to each server. For example, if server A has weight 10 and server B has weight 5, twice as many incoming requests will be redirected to server A than server B.
Least connected: In this scenario, incoming requests are forwarded to the server that currently has the fewest number of connections.
Fastest: Here requests are simply directed to the server that responds the quickest.
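The first three redirection methods above can be sketched in a few lines of Python. This is a toy model, not a real load balancer: the server names, weights, and connection counts are all hypothetical, and the "fastest" method is omitted because it requires live response-time measurements.

```python
import itertools
import random

class LoadBalancer:
    """Toy server-selection strategies; a sketch, not production code."""

    def __init__(self, servers, weights=None):
        self.servers = list(servers)
        self.weights = weights or [1] * len(self.servers)
        self._rr = itertools.cycle(self.servers)      # round-robin iterator
        self.active = {s: 0 for s in self.servers}    # open connection counts

    def round_robin(self):
        # Each server receives an equal share of requests over time
        return next(self._rr)

    def weighted(self):
        # A server with weight 10 gets twice the traffic of one with weight 5
        return random.choices(self.servers, weights=self.weights, k=1)[0]

    def least_connected(self):
        # Forward to the server with the fewest open connections
        return min(self.servers, key=lambda s: self.active[s])

lb = LoadBalancer(["A", "B", "C"])
print([lb.round_robin() for _ in range(4)])  # -> ['A', 'B', 'C', 'A']
lb.active.update({"A": 5, "B": 2, "C": 7})
print(lb.least_connected())                  # -> 'B'
```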
Uses
The primary use for load balancing in today's networks is in Web server farms. Load balancing increases the reliability of these farms for e-commerce applications by ensuring that user demand can be accommodated and that the failure of any one server will not affect the application's overall functioning. The following figure illustrates a typical implementation of Web farm load balancing using a hardware load-balancing device such as Cisco LocalDirector (described later in this article).
Load balancing. Using load balancing to increase reliability of a Web server farm.
The biggest issue with Web server load balancing is ensuring that applications that employ persistent connections (a session that spans multiple Hypertext Transfer Protocol [HTTP] requests) work properly. Most modern Web server load balancers accomplish this by using sticky sessions, in which a request that is part of a session already opened is directed to the server in the farm that previously serviced it.
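One simple way to implement stickiness is to hash a session identifier to pick a server, so every request in the session lands on the same machine. The Python sketch below is a hypothetical illustration; real load balancers more often track sessions in a table or cookie, and the server names here are invented.

```python
import hashlib

servers = ["web1", "web2", "web3"]

def sticky_pick(session_id):
    """Pin a session to a server by hashing its ID.

    A simple sketch of 'sticky sessions': the same session ID always
    hashes to the same server, preserving persistent connections.
    """
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same session always lands on the same server across HTTP requests
print(sticky_pick("abc123") == sticky_pick("abc123"))  # -> True
```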
Other newer approaches to Web server load balancing on the market today include
Layer 4 load balancing: Uses Layer 4 switches to ensure persistent HTTP connections are maintained between a client and a server having a specific Internet Protocol (IP) address and port.
Layer 5 (or Layer 4/5 or Layer 4/7) load balancing: Similar to Layer 4 load balancing but includes the capability of differentiating traffic on the basis of Uniform Resource Locators (URLs), not just IP addresses and ports. This approach is sometimes called URL parsing, Web directing, and a dozen other vendor-specific names.
Implementation
The oldest form of load balancing is called Round Robin DNS and takes advantage of a feature of Domain Name System (DNS) name servers that allows them to map multiple IP addresses to a single fully qualified domain name (FQDN). The main problem with this approach is that the name server directs clients to different servers without any regard to the availability of those servers. For example, if a server in the farm suddenly goes down, the name server has no way of knowing this and continues to redirect client requests to that server as it cycles through the round-robin procedure. Because of this weakness, round-robin load balancing is seldom used anymore.
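The rotation behavior, and its blindness to server availability, can be modeled in a few lines. This is a toy simulation of a round-robin name server, not a real DNS implementation; the hostname and addresses are placeholders.

```python
from collections import deque

class RoundRobinDNS:
    """Toy name server that rotates the A-record list on each query.

    Illustrative only; real DNS servers implement rotation internally
    and, as noted above, keep rotating even to servers that are down.
    """
    def __init__(self):
        self.records = {}

    def add(self, fqdn, addresses):
        self.records[fqdn] = deque(addresses)

    def resolve(self, fqdn):
        addrs = self.records[fqdn]
        answer = list(addrs)   # full list, current first choice at the front
        addrs.rotate(-1)       # rotate so the next query leads elsewhere
        return answer

dns = RoundRobinDNS()
dns.add("www.example.com", ["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(dns.resolve("www.example.com")[0])  # -> 10.0.0.1
print(dns.resolve("www.example.com")[0])  # -> 10.0.0.2
```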
Hardware-based load balancers are more popular and are typically routers or switches that use application-specific integrated circuits (ASICs) to distribute load quickly and effectively to hundreds or even thousands of connected servers. In a typical scenario, a hardware load balancer will be assigned a "virtual" IP address while the real IP addresses of servers in the farm are hidden behind the load balancer. To the outside world, the farm of servers appears as a single server with a single IP address, the address of the load balancer. Hardware-based load balancers are expensive but provide the best performance, and analysts estimate that they own over 60 percent of the load balancer market. Some newer hardware load balancers are better classified as network appliances than switches or routers due to their packaged functionality and ease of use.
Software-based load balancers are rising in popularity and consist of a load-balancing application that is installed on a standard PC or UNIX host. Software-based load balancers are slower than hardware-based ones and typically support only a few dozen servers, but they are cheaper and are an attractive solution for many e-businesses.
Marketplace
The first and still most popular hardware-based load balancer is the LocalDirector from Cisco Systems. LocalDirector sits in front of your server farm and bridges incoming traffic between the external network and the local area network (LAN) segment on which the servers all reside. If you need to load balance servers across several different sites or LAN segments, you can use LocalDirector in conjunction with another Cisco product, DistributedDirector, to accomplish this. Other examples of hardware load balancers include ServerIron from Foundry Networks, BIG/ip and Edge-FX Local Cache Cluster from F5 Networks, and products from Alteon WebSystems (acquired by Nortel Networks in 2000), ArrowPoint Communications (now part of Cisco), Coyote Point Systems, and many others.
A popular software-based load balancing solution is Microsoft Corporation's Network Load Balancing Service (NLBS), a feature that comes with Microsoft Windows 2000 Advanced Server and replaces the earlier Windows Load Balancing Service (WLBS) of Windows NT 4 Enterprise Edition. NLBS can load balance up to 32 servers and provides basic load balancing functionality at a price anyone can afford (it is included with the Windows 2000 Advanced Server operating system). Other examples of software load balancers include WebSphere from IBM, WSD Pro from Radware, and Central Dispatch/Global Dispatch from Resonate.
See Also Domain Name System (DNS) , name server, router
Service boundaries for local exchange carriers (LECs).
Overview
When the divestiture of AT&T occurred in 1984, 197 separate LATAs were created to specify the service boundaries for different LECs. These LATAs are identified by three-digit numbers that are different from area codes and do not necessarily match the same geographical areas as these codes. Calls made within a LATA are handled by the LEC administering the LATA and are typically local calls but may be long distance if the LATA spans a large enough region. Calls made between LATAs are always long-distance calls and are handled by inter-exchange carriers (IXCs) such as AT&T, MCI/Worldcom, and Sprint Corporation.
The Telecommunications Act of 1996 made some changes to this landscape by allowing LECs to handle inter-LATA traffic under certain conditions. The act also gives the Federal Communications Commission (FCC) full jurisdiction over the boundaries between LATAs.
Examples
LATAs are used in other parts of North America as well. Some examples of LATAs and their associated numbers include
New York Metro (132)
San Francisco (722)
Los Angeles (730)
Puerto Rico (820)
Mexico (838)
British Columbia, Canada (886)
See Also inter-exchange carrier (IXC)
An Internet Protocol (IP) address on the local subnet.
Overview
A host that is located on the same subnet is said to have a local address (with respect to the particular host under consideration). A host with a local address can be reached without the need to traverse any routers. By contrast, a remote address is an address of a host located on a different subnet. To communicate with a host having a remote address, IP packets must be routed across subnet boundaries by routers or Layer 3 switches.
Examples
As an example of a local address, consider a Transmission Control Protocol/Internet Protocol (TCP/IP) network with the following subnetting scheme:
Network ID = 181.55.0.0
Subnet Mask = 255.255.240.0
Using the above class B network and custom subnet mask, there will be 16 different subnets of 4094 hosts each, specifically
Subnet 1 has hosts 181.55.0.1 through 181.55.15.254.
Subnet 2 has hosts 181.55.16.1 through 181.55.31.254.
Subnet 3 has hosts 181.55.32.1 through 181.55.47.254.
...
Subnet 16 has hosts 181.55.240.1 through 181.55.255.254.
Now consider the following three hosts on the network:
Host A = 181.55.22.147
Host B = 181.55.28.12
Host C = 181.55.43.6
From the point of view of Host A, which is located on Subnet 2:
Host B is located on the local subnet (Subnet 2), so Host B's address is local to Host A.
Host C is located on a remote subnet (Subnet 3), so Host C's address is remote to Host A.
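The same local-versus-remote determination can be checked with Python's standard ipaddress module. This sketch simply reproduces the example above: two hosts are local to each other exactly when applying the subnet mask to their addresses yields the same subnet.

```python
import ipaddress

# The example network above: class B 181.55.0.0 with a 255.255.240.0
# mask (a /20 prefix, giving 16 subnets of 4094 hosts each).
def subnet_of(ip):
    """Return the subnet an address belongs to under the /20 mask."""
    return ipaddress.ip_network(f"{ip}/255.255.240.0", strict=False)

host_a = "181.55.22.147"
host_b = "181.55.28.12"
host_c = "181.55.43.6"

# B is local to A (same subnet); C is remote (different subnet)
print(subnet_of(host_a) == subnet_of(host_b))  # -> True
print(subnet_of(host_a) == subnet_of(host_c))  # -> False
print(subnet_of(host_a))                       # -> 181.55.16.0/20
```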
See Also Internet Protocol (IP) ,IP address ,routing ,subnetting
A telco service for transmitting data using line drivers.
Overview
Also called telco restricted lines, Local Area Data Channel (LADC) conforms to the Bell 43401 standard published by AT&T and basically specifies DC continuity. This means that metallic (copper) conductors must be used, typically the unshielded twisted-pair (UTP) cabling employed for phone lines. The LADC standard also indicates that these lines must be unloaded, that is, without terminators, loading coils, or protection circuitry that can add to the inductance of the line and thus distort signals.
LADC lines are available at distances of up to 3 miles (5 kilometers) from the telco's central office (CO). The longer the line, the lower the bandwidth it can carry.
See Also central office (CO) ,unshielded twisted-pair (UTP) cabling
Typically, a group of computers located in the same room, on the same floor, or in the same building that are connected to form a single network.
Overview
Local area networks (LANs) are the simplest forms of computer networks and enable groups of users to share storage devices, printers, applications, data, and other resources on the network. LANs are typically limited to a single location but can sometimes span several buildings or even a campus. The collection of cabling and networking devices used to build a LAN is known as its infrastructure.
LANs do not contain any telecommunications circuits such as phone lines in their infrastructure; if they do, they are properly called wide area networks (WANs) instead. LANs come in all shapes and sizes, including
Workgroup LANs: Typically a group of workstations deployed in a single room or on a single floor.
Shared LANs: These use legacy hubs and bridges to connect stations.
Switched LANs: These use Ethernet switches to connect stations and have largely displaced shared LANs in the enterprise.
Campus LANs: These LANs span several buildings across a few miles and usually consist of smaller workgroup LANs connected by routers or Layer 3 switches.
Implementation
Building a LAN from a group of stand-alone computers requires the assembly and configuration of a number of components:
Network architecture: The vast majority of today's LANs are of the Ethernet type, usually 10BaseT or Fast Ethernet.
Cabling: This is used to join the computers together so they can communicate with one another. The most common type of cabling used in LANs is Category 5 (Cat5) unshielded twisted-pair (UTP) cabling. This cabling is typically installed in a topology called structured wiring, which is essentially a hierarchical or cascaded star topology employing hubs, switches, and routers.
Network protocol: To communicate on a network, computers must speak a common "language" or protocol, the most popular of which is Transmission Control Protocol/Internet Protocol (TCP/IP), which is necessary for Internet connectivity.
Network-aware operating system: This must be installed on the computers to enable them to share their resources with other computers (thus acting as servers) and access resources on other computers (acting as clients). The choice of operating system depends on whether the network will be a peer-to-peer network or a server-based network. Microsoft Windows 98 or Windows Millennium Edition (Me) is a good choice for small peer-to-peer workgroup LANs, while Windows 2000 offers the security and scalability needed to support a larger server-based network.
Network interface card (NIC): This must be installed in each computer in an available slot on the motherboard, together with a software driver, to control the card's functions. The network cabling is then connected to the NIC in each computer to form the actual network.
See Also bridge ,cabling ,Category 5 (Cat5) cabling ,Ethernet switch ,hub ,infrastructure ,NetBEUI ,network ,protocol ,router ,server ,Transmission Control Protocol/Internet Protocol (TCP/IP) ,unshielded twisted-pair (UTP) cabling ,wide area network (WAN)
A telco in the United States that provides telephone and telecommunication services for subscribers within a geographical region.
Overview
Traditionally, the local exchange part of the term local exchange carrier is another word for the telco's central office (CO), and the carrier part means they are the company that "carries" telephone and data traffic for their customers. In other words, your local exchange carrier (LEC) is simply the company that sends you a telephone bill each month. The LEC owns the local loop wiring between their CO and their subscribers' premises, and these premises are in a geographical area known as the Local Access and Transport Area (LATA). Any calls that take place within a given LATA are considered local calls and are billed accordingly. A single LEC may have control over the local loop in one or more LATAs.
The largest LECs came into existence with the breakup of AT&T in the early 1980s, which led to the formation of a number of independent Regional Bell Operating Companies (RBOCs). However, a number of smaller, independent LECs in the United States, especially in rural areas, were never part of the Bell system.
LECs directly handle traffic, including both local and long distance types, only within their area of jurisdiction. For subscribers of one LEC to communicate with those in a different LEC, long-distance carriers called inter-exchange carriers (IXCs) are used. In the United States, the "Big Three" IXCs are Sprint Corporation, AT&T, and MCI/WorldCom.
Types
Several different types of LECs are in the marketplace today:
Incumbent Local Exchange Carriers (ILECs): These include the RBOCs that came into existence through the AT&T divestiture, plus various independent telcos such as Verizon Communications. They are called "incumbent" because they own the local loop wiring in their service areas.
Competitive Local Exchange Carriers (CLECs): These arose as a result of the Telecommunications Act of 1996 and include mainly resellers of voice and Digital Subscriber Line (DSL) services. Some CLECs provide services regionally, but others, such as Intermedia Communications, have gained a national presence or transformed themselves into other types of service providers.
Building Local Exchange Carriers (BLECs): These are essentially offshoot CLECs that focus on provisioning voice and data services to office towers, hotels, industrial parks, and so on.
Prospects
The Telecommunications Act of 1996 changed the landscape of the telephone system in the United States by allowing LECs to compete in the deregulated long-distance market and by allowing IXCs to provide services directly to customer premises through mergers, acquisitions, and new technologies. Before 1996, each LEC was essentially an incumbent LEC (ILEC) that was the sole provider of telephone services to subscribers in its geographical region. The Telecommunications Act allowed new companies to become competitive LECs (CLECs) that could compete directly with ILECs in their areas of jurisdiction, either by leasing or purchasing local loop and switching services from the ILECs (which the act required the ILECs to offer) or by installing their own separate distribution and switching systems. LECs have an advantage over IXCs in that they already own right of access to customer premises, but IXCs have an advantage in that they are larger, more highly capitalized companies that can afford to invest heavily in new technologies and services or even acquire LECs outright.
The landscape is still changing after six years, but it appears to many analysts that the Telecommunications Act has largely failed to deliver on its promise of opening up more competition in the telephone and telecommunications industry. Many CLECs, particularly those that specialized in offering DSL services, went out of business during the dot-com bust of 2001. Meanwhile, the RBOCs, traditionally viewed as dinosaurs when it comes to implementing technological innovation, have modernized their services and consolidated their positions in the local loop. And the "Big Three" IXCs remain just three, with little expectation of things changing in that arena.
See Also central office (CO) , Competitive Local Exchange Carrier (CLEC) ,Incumbent Local Exchange Carrier (ILEC) ,inter-exchange carrier (IXC) , Regional Bell Operating Company (RBOC), telco
A type of group that exists only on the Microsoft Windows 2000 computer on which it is created.
Overview
Local groups reside within the local security database of the computer on which they are created. Local groups are intended for use only on Windows 2000 computers that are not part of a domain and are used for granting users who are interactively logged on to the computer access to resources on that computer. Local groups can contain only local user accounts from the same machine. You create local groups on a stand-alone Windows 2000 machine using the Local Users and Groups console.
Notes
Another type of Windows 2000 group, the domain local group, has a domain-wide scope and can be used to provide users with access to resources located anywhere in a domain.
See Also AGLP ,built-in group ,global group ,group
The friendly name used in the HOSTS file for the loopback address, a special Internet Protocol (IP) address used to test the protocol stack on a host.
See Also loopback address
The portion of the telephone system that connects a subscriber to the nearest telco central office (CO).
Overview
The wiring used in the local loop is typically unshielded twisted-pair (UTP) four-wire copper cabling terminated at RJ-11 jacks in the customer premises. With traditional Public Switched Telephone Network (PSTN) lines that employ analog transmission, the maximum distance allowed between the customer premises and the CO is about 3 miles (5 kilometers). In many urban and commercial areas, the local loop has been upgraded to Integrated Services Digital Network (ISDN), which employs the same wiring but provides all-digital transmission for better voice and data connections. Local loop wiring can also carry a combination of voice and data at high speeds using various forms of Digital Subscriber Line (DSL) technologies.
Local loop. The local loop wiring between a telco CO and the customer premises.
Prospects
In many respects, the local loop represents the bottleneck in providing subscribers with high speed voice, data, multimedia, and Internet access services. This is because of two issues:
The copper nature of the local loop wiring makes it much less efficient than fiber-optic cabling at carrying large amounts of information.
The local loop wiring is owned by the Incumbent Local Exchange Carrier (ILEC) in your area, typically a Regional Bell Operating Company (RBOC). This legislated monopoly means that the main competition for providing customers with broadband services generally has to come from technologies other than traditional copper local loop wiring. Furthermore, because of their monopoly, RBOCs have generally been slow to bring about innovations such as fiber to the curb (FTTC), which was promised decades ago but never delivered.
For residential customers, the main competition for the traditional copper local loop consists of cable modem and satellite dish systems. For business customers, cable modems are generally not an option due to the lack of cable television infrastructure in industrial parks and office towers. Instead, business customers have other alternatives, such as Local Multipoint Distribution System (LMDS), a fixed wireless broadband service, and optical Ethernet, which involves provisioning fiber to office towers in dense urban areas.
See Also analog transmission , central office (CO) ,Digital Subscriber Line (DSL) ,digital transmission ,fiber to the curb (FTTC) ,Incumbent Local Exchange Carrier (ILEC) ,Integrated Services Digital Network (ISDN) , optical Ethernet, Public Switched Telephone Network (PSTN), Regional Bell Operating Company (RBOC), telco, unshielded twisted-pair (UTP) cabling
An emerging wireless broadband technology operating in the millimeter band of the electromagnetic spectrum.
Overview
Local Multipoint Distribution Service (LMDS) is a new wireless telecommunications technology that can simultaneously carry voice, data, and multimedia at speeds as high as 155 megabits per second (Mbps). LMDS is primarily targeted toward the business market and is designed to help alleviate the bottleneck of the telco local loop. LMDS is simple to deploy and relatively inexpensive, and it provides businesses with speeds higher than those of Digital Subscriber Line (DSL) technologies. It can also be deployed where cable modem infrastructures generally do not exist, such as industrial parks and office towers.
Implementation
LMDS is a cellular communications system in which coverage areas are typically 2.5 to 6 miles (4 to 10 kilometers) in diameter. Each cell is served by one or more LMDS transmitters. A transceiver is deployed with a fixed antenna at the customer premises, usually on a rooftop or other high location (LMDS does not support mobile users and does not support roaming between cells).
LMDS operates in the millimeter range at frequencies between 27.5 and 31.3 gigahertz (GHz). Transmissions in this range require precise line of sight between transmitter and station and are strongly affected by reflections off walls and other surfaces. Because the frequencies used by LMDS are those at which water molecules absorb energy (which is how a microwave oven works), rain and moisture can absorb and scatter LMDS transmissions, causing dropouts. Deploying multiple transmitters per coverage area may lessen this effect, however.
LMDS can operate in two configurations:
Point-to-multipoint (PMP): This is the usual way of deploying LMDS and uses a single base station transmitter or "hub" broadcasting to multiple fixed end stations. Range is typically less than 6 miles (10 kilometers) for this configuration.
Point-to-point (PTP): This is used to connect two LMDS stations and can sustain longer distances, up to about 12 miles (19 kilometers).
Prospects
Trial deployments of LMDS are currently underway. The technology is expected to be deployed mainly in dense urban areas by implementing wireless rings to serve a region of customers. The main issue with LMDS is that communications are easily affected by moisture and bad weather, but efforts are underway to work around these problems. Some companies that have licensed LMDS frequencies include Advanced Radio Telecom Corporation, NextLink Communications, Teligent, and WinStar Communications.
See Also cellular communications ,Digital Subscriber Line (DSL)
The network on which your computer resides, as opposed to a remote network.
Overview
The local network consists of all computers having the same network number as your machine. For example, on a Transmission Control Protocol/Internet Protocol (TCP/IP) network, if your computer were assigned the IP address 208.16.8.25, your network ID would be 208.16.8.0, and your local network would consist of all computers having that same network ID. Each computer on the local network shares the network ID but has a different host ID to distinguish it from the other hosts; in this example, your host ID would be .25, and the host IDs of other hosts on your local network might be .26, .27, and so on. The IP address of a host is thus the combination of its network ID and host ID.
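Python's ipaddress module can split an address into its network and host portions, mirroring the example above. This is an illustrative sketch assuming the classful /24 mask implied by the example.

```python
import ipaddress

# The example from the text: 208.16.8.25 with a /24 (class C-style) mask
iface = ipaddress.ip_interface("208.16.8.25/24")
print(iface.network)  # -> 208.16.8.0/24 (the network ID, i.e. the local network)

# The host ID is what remains after the network portion is removed
host_id = int(iface.ip) - int(iface.network.network_address)
print(host_id)        # -> 25
```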
The term local network is also used in another context to describe hosts that are on the same physical LAN segment, such as all the hosts connected to the same hub in an Ethernet network. The usage of the term is sometimes vague, and you must determine its meaning from the context in which it is used.
See Also IP address ,network ,subnet ,Transmission Control Protocol/Internet Protocol (TCP/IP)
A built-in identity in Microsoft Windows 2000 that provides a security context for running operating system tasks.
Overview
The LocalSystem account is a special identity built into the Windows 2000 operating system. LocalSystem has the attributes of a user account but has special privileges:
It is an implicit member of the Administrators group
It has full permissions on every operating system object
It can take ownership of any object
It has the system right "act as part of the operating system"
LocalSystem is used as a security context for running operating system services. The LocalSystem account can run these services whether or not any user is interactively logged on to the machine.
Notes
The System account on the Windows NT platform was also used as a security context for running services, but it could only do so for individual computers. In other words, if you had five Windows NT computers on a network, there were five System accounts present, one on each computer. The System account on one computer could not be used as a security context for running services on a different computer. This scenario has changed in Windows 2000 and Windows .NET Server, where the LocalSystem account has forest-wide applicability.
See Also built-in identities
A legacy networking protocol for the Apple platform.
Overview
LocalTalk was originally called AppleBus and operated over serial connections at a speed of 230.4 kilobits per second (Kbps). A LocalTalk segment could have a maximum of only 32 nodes, and network addresses were assigned dynamically using AppleTalk Address Resolution Protocol (AARP). LocalTalk was strictly a Layer 2 protocol; higher-layer functions had to be managed by the operating system itself. LocalTalk is a legacy protocol that was later replaced by AppleTalk, initially in a configuration called AppleTalk over LocalTalk and later AppleTalk Phase II.
See Also AppleTalk
A user account that exists only on the local machine on which it is defined.
Overview
In Microsoft Windows NT-based networks, a local user account is a user account that resides in the local security database of a particular Windows NT member server or workstation. When a user has a local account on a computer, the user can log on to the computer interactively. In a Windows NT-based network based on the workgroup security model, all user accounts are local user accounts and are created using the administrative tool called User Manager, the version of User Manager for Domains that is installed on stand-alone Windows NT member servers and workstations. In a Windows NT-based network that is based on the domain security model, new user accounts created using User Manager for Domains are by default global user accounts that are valid everywhere in the domain and are stored in the Security Accounts Manager (SAM) database on domain controllers. However, in a domain, you can also create a local account with User Manager for Domains by clicking the Account button in the New User dialog box and specifying Local Account as the Account Type. This is generally not recommended because local user accounts are not valid throughout the domain and are valid only for logging on interactively to the computer on which they are created.
In a Windows 2000-based or Windows .NET Server-based network, a local user account is one of three types of user accounts, the others being domain user accounts and built-in accounts. Local user accounts enable users to log on interactively to stand-alone Windows 2000 and Windows .NET servers or client computers in a workgroup and access system resources on the machine for which they have suitable permissions. Domain user accounts allow users to log on to a domain and access resources anywhere in the domain. Local user accounts are created using the Local Users and Groups tool, which is implemented as a snap-in for Microsoft Management Console (MMC). Local user accounts are stored in the local security database on the machine on which they are created, but domain user accounts are created in Active Directory directory service and stored in organizational units (OUs).
See Also Active Directory ,built-in account ,domain user account ,global user account
A user profile stored locally on a computer.
Overview
In Microsoft Windows 2000, Windows NT, Windows XP, and Windows .NET Server, a local profile is created for a user the first time the user successfully logs on to his or her computer. If the user does not have a preconfigured roaming user profile at the time of the first logon, Windows 2000 copies the default user profile to the new local user profile folder.
Local profiles are created for all users who interactively log on to computers running Windows 2000 so that they can access their own personal settings on that machine. Each user who logs on to a machine thus has his or her own local profile stored on the machine. Local profiles are stored in the folder Documents and Settings. Each user's profile is stored in a subfolder that is named after the user's username and contains the user's personal settings. The personal settings include both the appearance of the desktop and Start menu and the user's network connections (such as mapped drives). Even if users have a roaming profile that allows them to log on from any machine in the network and obtain their personal settings, each machine also stores a local copy of their profiles in case the network is down when they try to log on.
See Also roaming user profile ,user profile
A Microsoft Windows 2000 administrative tool for managing local user and group accounts on a stand-alone server or workstation.
Overview
Local Users and Groups is an administrative tool available on member servers running Windows 2000 Server and client computers running Windows 2000 Professional that you can use to create and manage local user accounts and local groups on the machine. Local Users and Groups is implemented as a snap-in for Microsoft Management Console (MMC), like other Windows 2000 administrative tools. You can use Local Users and Groups only if a workgroup security model is being used for your network. In a workgroup, each computer manages its own security and maintains its own local security database of account information. If your network uses a domain security model, all user accounts for the domain are stored in the Active Directory database, which contains a distributed domain directory database maintained by domain controllers on your network. You cannot install Local Users and Groups on domain controllers; on these machines, you should use Active Directory Users and Computers for creating domain user accounts.
See Also Active Directory ,Active Directory Users and Computers ,Microsoft Management Console (MMC)
A mechanism that protects data in a database from being overwritten.
Overview
Locking is a mechanism that protects a database against data loss when users simultaneously attempt to modify the same database object. Locking synchronizes users' access to the database and prevents concurrent data manipulation problems, ensuring that data remains consistent and query results are correct.
Locking provides concurrency in a multiuser environment; that is, it enables multiple clients to simultaneously access and modify a database without the danger of the data becoming corrupted. If one user locks a portion of the database to view or modify data, that data cannot be accessed or modified by any other user until the first user's updates have been committed.
Examples
The Microsoft SQL Server relational database platform employs multigranular locking, in which each database resource is locked at a level appropriate for that kind of resource. The following table shows the various database resources that can be locked in SQL Server, in order of increasing granularity. This range of granularity allows a balance between concurrency (the ability of multiple clients to simultaneously access a database) and performance (speed). For example, highly granular locking such as row-level locking allows more concurrency (different users can simultaneously modify different rows in the same database table), but it increases system overhead because the server must manage more locks.
Locked Resource | Description |
DB | Locks the entire database |
Table | Locks an entire database table, including its data and indexes |
Extent | Locks a contiguous group of eight data pages or eight index pages |
Page | Locks individual 8-kilobyte (KB) data pages or index pages |
Key | Locks a row within an index |
RID (row identifier) | Locks individual rows in a table |
SQL Server uses a number of resource lock modes that specify how concurrent transactions can access different database resources. These include the following:
Shared locks: Allow concurrent transactions to read data (for example, by using Transact-SQL SELECT statements). Shared locks allow concurrent reads but lock the resource against modification until the reads are completed. After the data is read, the lock is removed unless a repeatable read is being performed.
Exclusive locks: Lock data so that it can be modified (for example, by using INSERT, DELETE, or UPDATE statements). No other reads or modifications can be performed on the resource while it is exclusively locked.
Other locking modes include update locks, bulk update locks, and intent locks.
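The shared/exclusive distinction described above can be sketched with ordinary threading primitives. This is an illustrative model only, not how SQL Server implements its lock manager; the class and method names are made up for the example.

```python
import threading

class ResourceLock:
    """Illustrative shared/exclusive lock: many concurrent readers,
    or exactly one writer, but never both at the same time."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0      # current holders of shared locks
        self._writer = False   # whether the exclusive lock is held

    def acquire_shared(self):
        with self._cond:
            while self._writer:          # reads wait while data is being modified
                self._cond.wait()
            self._readers += 1

    def release_shared(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # a waiting writer may now proceed

    def acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:  # wait until no one holds the resource
                self._cond.wait()
            self._writer = True

    def release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()

lock = ResourceLock()
lock.acquire_shared()      # two concurrent readers coexist (shared locks)
lock.acquire_shared()
lock.release_shared()
lock.release_shared()
lock.acquire_exclusive()   # a single writer then gets the resource (exclusive lock)
lock.release_exclusive()
```

The example shows why highly granular locking increases overhead: each locked resource needs its own bookkeeping like the counters above.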
See Also database ,Structured Query Language (SQL)
Any file that contains records corresponding to application or operating system events or conditions, usually arranged sequentially by time.
Overview
Log files are usually delimited text files (such as .csv files) in which each line represents a transaction or logged event, with individual data fields separated by delimiting characters such as commas. Delimited text files can be imported into spreadsheet programs such as Microsoft Excel, database programs such as Microsoft Access, and report and analysis tools such as Crystal Reports for further analysis and graphical display of trends and usage patterns. Relogging is the process of taking a log file and sampling it at larger time intervals to reduce the size of the file for archiving purposes while maintaining the overall trend of data within the log.
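Relogging can be sketched as simple downsampling of a delimited log file. The field layout below is hypothetical; real logs from tools such as Performance Monitor have their own column sets.

```python
import csv
import io

# Hypothetical delimited log: one CPU-usage sample per second.
raw_log = """time,cpu
0,21
1,25
2,23
3,40
4,44
5,41
"""

def relog(text, interval):
    """Keep one record per `interval` samples, shrinking the file
    while preserving the overall trend of the data."""
    reader = csv.DictReader(io.StringIO(text))
    rows = list(reader)
    return [rows[i] for i in range(0, len(rows), interval)]

# Resample from 1-second to 3-second intervals before archiving.
sampled = relog(raw_log, 3)
for row in sampled:
    print(row["time"], row["cpu"])
```

Because the log is plain delimited text, the same file could equally be imported into Excel, Access, or a reporting tool for trend analysis.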
Numerous processes within the Microsoft Windows operating systems and the Microsoft BackOffice applications maintain logs. Some log functions include the following:
Keeping track of transactions performed on an information store or database (as in Microsoft SQL Server or Microsoft Exchange Server)
Monitoring server or network performance over time when Performance Monitor is used on a Windows NT-based, Windows 2000-based, Windows XP-based, or Windows .NET Server-based network
Recording details of visitors to Web sites when you use Internet Information Services (IIS)
Recording the details of modem commands or Point-to-Point Protocol (PPP) transmissions when you use Network and Dial-up Connections to connect to an Internet service provider (ISP)
Regular inspection of log files is often an important component of ensuring the security of a software platform or application.
See Also security
The data-link layer protocol for Bluetooth wireless networking.
Overview
The Logical Link Control and Adaptation Layer Protocol (L2CAP) resides above the Baseband layer, which is the physical (PHY) layer mechanism for Bluetooth communications. L2CAP enables logical channels to be established between different Bluetooth devices. L2CAP identifies each logical channel using a channel identifier (CID); certain CIDs are reserved for control information (for example, the L2CAP signaling channel is designated as CID 0x0001).
L2CAP provides both connection-oriented and connectionless services at the data-link layer, provides segmentation and reassembly, and supports protocol multiplexing. L2CAP packages real-time data streams into packets that can be up to 64 kilobytes (KB) in length and employ little-endian byte ordering.
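The little-endian framing can be sketched as follows: the basic L2CAP header carries a 16-bit payload length followed by the 16-bit CID, both low byte first. The payload bytes in the example are made up; real signaling packets have further internal structure.

```python
import struct

def l2cap_header(payload, cid):
    """Prefix a payload with the basic L2CAP header: a 16-bit length
    and a 16-bit channel identifier (CID), both little-endian
    ('<' selects little-endian byte order in struct syntax)."""
    return struct.pack("<HH", len(payload), cid) + payload

SIGNALING_CID = 0x0001          # the L2CAP signaling channel noted above

# Frame a (made-up) 4-byte payload for the signaling channel.
packet = l2cap_header(b"\x0a\x01\x02\x00", SIGNALING_CID)
print(packet.hex())             # length=4 and CID=1 appear low byte first
```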
See Also Bluetooth
A sublayer of the data-link layer in the Open Systems Interconnection (OSI) reference model.
Overview
For local area network (LAN) data-link protocols such as Ethernet, the data-link layer is divided into an upper layer called the logical link control (LLC) layer and a lower layer called the media access control (MAC) layer. The MAC layer coordinates access to the physical layer according to a media access control method, which for standard Ethernet is the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) scheme. The MAC layer thus provides services to the LLC layer so that protocol data units can be transferred to the medium without any concern about the broadcast, framing, addressing, or error-detection schemes used. The LLC uses the MAC services to provide two types of data-link operations to the network layer above it: LLC1 for connectionless and LLC2 for connection-oriented data-link communication services (known as Type 1 and Type 2, respectively). These LLC services are grouped into two classes:
Class 1 services: Connectionless services used by applications that do not require error detection or flow control.
Class 2 services: Either connectionless (Type 1) or balanced-mode connection-oriented (Type 2) data transfer services. The LLC provides the error detection and recovery, flow control, and resequencing services needed for connection-oriented data transfer.
Notes
The LLC protocol is based on the earlier High-level Data Link Control (HDLC) protocol. The term LLC sometimes refers to the IEEE 802.2 protocol itself, which is the most common LAN protocol implemented at the LLC layer.
See Also Carrier Sense Multiple Access with Collision Detection (CSMA/CD) ,Ethernet ,High-level Data Link Control (HDLC) ,media access control (MAC) layer ,Open Systems Interconnection (OSI) reference model
The process by which users notify their network's security authority (for example, domain controllers on a Microsoft Windows 2000 network) that they are terminating their session on the network. Users should always log off their computers when they are finished for the day to prevent unauthorized access to the network through their computers by others who might use the building at night. If, as an administrator, you find that users do not log off their computers, configure logon hours restrictions to forcibly disconnect users after work hours. You can also check whether your password policy is too strict, which might encourage users to stay logged on to avoid having to reenter a complex password each time they return to their stations.
Notes
If you are leaving your desk for only a short time, you can lock a machine running Windows 2000, Windows XP, or Windows .NET Server instead of logging off. You do this by pressing Ctrl+Alt+Delete followed by Lock Computer. You can unlock your workstation by pressing Ctrl+Alt+Delete again and reentering username and password. This approach is faster than logging off and allows applications such as your e-mail program to continue running in the background.
See Also Active Directory
The process by which a user's credentials are verified by a network security authority so that the user can be granted access to resources on the local machine or network.
Overview
There are two basic types of logons:
Interactive logons: Occur when a user sits at the console of his local computer and enters credentials (usually username and password) in the logon dialog box.
Remote logons: Occur when a user who has already logged on interactively to a machine wants to establish a network connection with a remote computer. For example, if the user tries to map a drive letter to a shared folder on the remote computer, a remote logon must take place during the process so that the remote computer can be sure that the user has the right to perform the action.
When a user attempts an interactive logon on a local computer, the user's credentials are verified by a security authority. On Microsoft Windows networks, this security authority may be
The local machine itself: This scenario typically occurs on computers that are configured to belong to a workgroup rather than a domain. In the workgroup security model, each machine maintains its own separate list of valid user accounts in its own local security database. For example, when a user performs an interactive logon to a stand-alone machine running Windows 2000, the machine itself validates the user's credentials.
A designated machine or group of machines on the network: For example, on a Windows 2000-based network that has Active Directory directory service installed, special Windows 2000 servers called domain controllers store and maintain information about valid user accounts for all users on the network. These domain controllers are then used for validating users' logon attempts. When the user tries to log on interactively to his local machine, the machine forwards the user's credentials to a nearby domain controller using a mechanism called pass-through authentication. The domain controller authenticates the credentials and then builds and returns an access token to the user, granting that user suitable levels of access to resources on the network.
Notes
Windows 2000 includes a feature called Run As that lets an administrator temporarily log on to a client machine without requiring that the user first log off from his machine. This feature is useful, for example, if an administrator needs to install an application or troubleshoot some problem while at the user's machine.
See Also logoff
A batch file that automatically runs on a user's client machine every time the user logs on to the network.
Overview
Logon scripts allow administrators to run a special series of commands each time a user logs on to her machine. These commands can perform functions such as
Configuring the desktop working environment for the user
Configuring legacy network clients and launching network connections
Automatically launching certain applications the user will need
Communicating a "message of the day" and other information to users
On the Microsoft Windows 2000 and Windows .NET Server platforms, logon scripts are primarily used for configuring legacy Windows clients and non-Windows clients that belong to a Windows 2000 network. On a pure Windows 2000 network, logon scripts are rarely needed because Windows 2000 employs IntelliMirror technologies for performing tasks such as configuring the user's desktop, synchronizing home folders, and establishing network and printer connections.
Examples
On Windows NT networks, however, logon scripts still perform many useful functions. A typical Windows NT logon script might contain a series of Net.exe commands that synchronize the client computer's clock with a particular server, ensure that mapped network drives are available, restore printer connections, and perform other actions to configure the user's work environment.
The following simple script runs when a Windows client logs on to a Windows NT Primary Domain Controller (PDC). The script synchronizes the workstation's clock with the server, maps the drive letter K to a share on the server, and then exits.
net time \\pdc /set /yes >nul
net use k: \\pdc\home
exit
See Also IntelliMirror ,scripting
A random, malformed frame of data that is sent continuously by failed circuitry in a networking component. Better known as jabber.
See Also jabber
A new technology from Cisco Systems that supports Ethernet over older telephone wiring.
Overview
Provisioning new high-speed data services to multitenant units (MTUs) such as office towers and hotels can be complex and expensive. In-building digital subscriber line (DSL) is one solution, but this approach can be expensive due to costs of installing Digital Subscriber Line Access Multiplexer (DSLAM) equipment at the customer premises. DSL is also complex to provision because it involves Asynchronous Transfer Mode (ATM) technology and generally ties customers in to a single service provider, their Incumbent Local Exchange Carrier (ILEC). Fixed wireless services are another approach, but these services are currently limited to a few dense urban locations.
To open up the last mile marketplace, Cisco has developed a new technology called Long Reach Ethernet (LRE). This system provides customers with the plug-and-play simplicity of Ethernet running over existing in-building telephone wiring. In addition, LRE can be used to simultaneously carry voice, data, and video traffic over a single wiring infrastructure, and it can simultaneously support Plain Old Telephone Service (POTS) analog voice, Integrated Services Digital Network (ISDN), and even asymmetric digital subscriber line (ADSL).
LRE supports data transmission at speeds between 5 and 15 megabits per second (Mbps). It can run over distances as long as 4970 feet (1515 meters), which is considerably greater than the 330 feet (100 meters) that can be achieved using standard 10 Mbps Ethernet. LRE thus allows Ethernet to be deployed in scenarios where previously it could not be used due to distance limitations and nonstandard wiring. LRE also saves money by eliminating the need to deploy a separate networking infrastructure by laying twisted pair or fiber-optic cabling throughout a building.
Implementation
LRE operates over existing telephone wiring and does not require deploying a parallel Category 5 (Cat5) cabling infrastructure or laying fiber. LRE makes use of several Cisco products working together, including
Cisco Catalyst 2900 LRE XL switches: These are available in either 12-port or 24-port configurations.
Cisco 575 LRE Customer Premises Equipment (CPE): These devices have an RJ-11 port for connecting to phone lines and an RJ-45 port for connecting to a computer or network.
Cisco LRE 48 POTS Splitters: These are installed at the customer's private branch exchange (PBX) and allow coexistence of voice and data traffic on the telephone lines.
See Also Asymmetric Digital Subscriber Line (ADSL) ,Asynchronous Transfer Mode (ATM) ,Digital Subscriber Line (DSL) ,Digital Subscriber Line Access Multiplexer (DSLAM) ,Ethernet ,Incumbent Local Exchange Carrier (ILEC) ,Integrated Services Digital Network (ISDN) ,Plain Old Telephone Service (POTS)
In routing terminology, a route that causes packets to be forwarded until they time out.
Overview
A routing loop is an undesirable thing: a packet that enters a loop circles endlessly until its Time to Live (TTL) value decrements to zero and the packet is dropped by a router interface. Routing loops result in dropped packets and retransmissions and waste network bandwidth.
Early routing protocols such as the Exterior Gateway Protocol (EGP) lacked the intelligence to detect and eliminate loops from their routing topologies. For this reason, EGP was eventually replaced with a superior routing protocol called Border Gateway Protocol (BGP), which guaranteed loop-free forwarding of packets between autonomous systems (ASs) in an Internet Protocol (IP) internetwork. All routing protocols today include mechanisms for detecting and eliminating loops, which is essential in a strongly meshed network topology such as that of the Internet.
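The TTL safety valve described above can be sketched with a toy forwarding simulation. The two-node route table is deliberately looped to show the failure mode; it does not model any particular routing protocol.

```python
def forward(packet_ttl, route):
    """Simulate forwarding a packet through a route table.
    Each hop decrements the TTL; when it reaches zero the
    packet is dropped, which is what finally ends a routing loop."""
    hops = []
    node = "A"
    ttl = packet_ttl
    while ttl > 0:
        hops.append(node)
        node = route[node]   # follow the next-hop entry
        ttl -= 1             # each router decrements Time to Live
    return hops, "dropped"   # TTL expired before reaching any destination

# A routing loop: A forwards to B, and B forwards straight back to A.
looped_route = {"A": "B", "B": "A"}
hops, fate = forward(4, looped_route)
print(hops, fate)            # the packet bounces between A and B until TTL hits zero
```

Without the TTL decrement, the while loop (like the packet) would never terminate, which is exactly the bandwidth-wasting behavior loop-free protocols such as BGP are designed to prevent.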
See Also Border Gateway Protocol (BGP) ,Exterior Gateway Protocol (EGP) ,routing
A test circuit that goes from one device to another and back again.
Overview
Loopback tests are used to check line integrity and the proper functioning of customer premises equipment and to diagnose and troubleshoot problems with telecommunications equipment. Loopback tests can be performed by wide area network (WAN) access devices such as Channel Service Unit/Data Service Units (CSU/DSUs) and routers to place calls to themselves over a WAN to test the WAN link's integrity.
Implementation
In loopback testing, a test signal is typically sent from a service provider's central office (CO) to the customer premises and is returned or echoed by the customer premises equipment (CPE) back to the service provider. If the loopback signal fails to return, the WAN link is down and must be repaired. If the loopback signal returns, the device compares the original signal with the returned one; any discrepancies can be used to troubleshoot communication problems.
Loopback. An example of how loopback testing can be used to verify the integrity of a telecommunications link.
Examples
One place where loopback testing is valuable is for testing Integrated Services Digital Network (ISDN) lines. If the Service Profile Identifiers (SPIDs) and ISDN directory numbers have been configured for your ISDN interface, a loopback test will determine whether
You can connect with your provider's ISDN exchange. If not, you might have a cable or interface problem.
Your ISDN numbers are correctly assigned.
You have caller ID or other advanced ISDN services on your line.
Another type of loopback test is called a local loopback test. This is often used with WAN access devices to test networking connectivity with locally attached network devices. You can also implement a local loopback test by having network application software place a call to the WAN access equipment and having the equipment return an echo to the application.
See Also Channel Service Unit/Data Service Unit (CSU/DSU) ,customer premises equipment (CPE) ,Integrated Services Digital Network (ISDN) ,router
A special Internet Protocol (IP) address used to test the protocol stack on a host.
Overview
In Transmission Control Protocol/Internet Protocol (TCP/IP) networking, the loopback address is the special IP address 127.0.0.1. The loopback address can be used to test the protocol stack on a host even when the host is not connected to a network. For example, to test whether TCP/IP is installed and configured correctly on a machine running Microsoft Windows, type ping 127.0.0.1 at the command prompt. Alternatively, you can type the command ping localhost to achieve the same result since the Hosts file on a Windows machine resolves the friendly name localhost into the IP address 127.0.0.1. Finally, you can even ping any legal IP address of the form 127.x.y.z to test your TCP/IP protocol stack. If this test produces an error, either your network interface card (NIC) is incorrectly configured or your protocol stack is corrupt. If the configuration looks correct, try removing and reinstalling TCP/IP on your machine to fix the problem. If that fails, try reinstalling the driver for your NIC or replacing the NIC.
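Beyond ping, any program can exercise the local stack through 127.0.0.1. The sketch below sends a few bytes to itself over a TCP socket bound to the loopback address; no network connection is required for it to succeed.

```python
import socket

def loopback_test(message=b"ping"):
    """Send bytes to ourselves over 127.0.0.1 to exercise the local
    TCP/IP stack; the data never leaves the loopback interface."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]

    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", port))
    conn, _ = server.accept()
    client.sendall(message)
    echoed = conn.recv(len(message))  # read back what we just sent

    for s in (conn, client, server):
        s.close()
    return echoed == message

print(loopback_test())  # True when the local protocol stack is working
```

If this kind of test fails while the configuration looks correct, the same troubleshooting steps apply as for ping: reinstall TCP/IP, then the NIC driver.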
See Also hosts file ,IP address ,ping ,Transmission Control Protocol/Internet Protocol (TCP/IP)
Stands for Line Printer Daemon, the UNIX daemon used for spooling print jobs.
See Also Line Printer Daemon (LPD)
Stands for Line Printer Queue, a UNIX command used for querying the status of a print queue.
See Also Line Printer Queue (LPQ)
Stands for Line Printer Remote, a UNIX network command used to submit print jobs to print servers.
See Also Line Printer Remote (LPR)
Stands for Long Reach Ethernet, a new technology from Cisco Systems that supports Ethernet over older telephone wiring.
See Also Long Reach Ethernet (LRE)
Stands for Linear Tape Open, a high-capacity tape backup technology.
See Also Linear Tape Open (LTO)