

SONET/SDH Principles

SONET/SDH defines a family of transmission standards for compatible operation among equipment manufactured by different vendors or different carrier networks. The standards include a family of interface rates; a frame format; a definition of overhead for operations, administration, and protection; and many other attributes.

Although SONET/SDH is typically associated with operations over single-mode fiber, its format can be transmitted over any serial transmission link that operates at the appropriate rate. For example, implementations of SONET/SDH over wireless (both radio and infrared) links have been deployed.

SONET/SDH has roots in the digital TDM world of the telecommunications industry, and its initial applications involved carrying large numbers of 64-kbps voice circuits. SONET/SDH was carefully designed to be backward compatible with the DS1/DS3 hierarchy used in North America, the E1/E3/E4 hierarchy used internationally, and the J1/J2/J3 hierarchy used in Japan. The initial SONET/SDH standards were completed in the late 1980s, and SONET/SDH technology has been deployed extensively since that time.

Digital Multiplexing and Framing

Framing is the key to understanding a variety of important SONET functions. Framing defines how the bytes of the signal are organized for transmission. Transport of overhead for management purposes, support of subrate channels including DS1 and E1, and the creation of higher-rate signals are all tied to the framing structure.

It's impossible to understand how signals are transported within SONET/SDH without understanding the basic frame structure.

Digital time-division networks operate at a fixed frequency of 8 kHz. The 8-kHz clock rate stems from the requirement to transmit voice signals with 4-kHz fidelity. Nyquist's sampling theorem states that an analog signal must be sampled at a rate of at least twice its highest frequency component to ensure accurate reproduction. Hence, sampling a 4-kHz analog voice signal requires 8000 samples per second. All digital transmission systems operating in today's public carrier networks have been developed to be backward compatible with existing systems and, thus, operate at this fundamental clock rate.
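
The sampling arithmetic can be checked directly. This quick sketch also assumes the standard 8-bit PCM sample size, which is what yields the 64-kbps DS0 channel:

```python
# Nyquist: a signal band-limited to f_max must be sampled at >= 2 * f_max
# samples per second for accurate reproduction.
VOICE_BANDWIDTH_HZ = 4_000            # 4-kHz voice channel
sample_rate = 2 * VOICE_BANDWIDTH_HZ
print(sample_rate)                    # 8000 samples per second

BITS_PER_SAMPLE = 8                   # standard PCM encoding
print(sample_rate * BITS_PER_SAMPLE)  # 64000 bps -> the 64-kbps DS0 channel
```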

In time-division transmission, information is sent in fixed-size blocks. The fixed-size block of information that is sent in one 125-microsecond (1/8000 of a second) sampling interval is called a frame. In time-division networks, channels are delimited by position: first locate the frame boundaries, and then count the required number of bytes to identify individual channel boundaries.

Note

Although the underlying goal is similar, time-division frames have several differences when compared to the link layer frame of layered data communications.


The time-division frame is fixed in size, does not have built-in delimiters (such as flags), and does not contain a frame check sequence. It is merely a fixed-size container that is typically subdivided into individual fixed-rate channels.

All digital network elements operate at the same clock rate. Higher-rate signals are formed by combining lower-rate signals through either bit or byte interleaving. As line rates increase, the frame rate of 8000 frames per second remains the same, to maintain compatibility with all the subrate signals. As a result, the number of bits or bytes in the frame must increase to accommodate the greater bandwidth requirements.

The North American digital hierarchy is often called the DS1 hierarchy; the international digital hierarchy is often called the E1 hierarchy.

Even though both hierarchies were created to transport 64-kbps circuit-switched connections, the rates that were chosen were different because of a variety of factors. In North America, the dominant rates that are actually deployed are DS1 and DS3. Very little DS2 was ever deployed. What DS4 has existed has been replaced by SONET. Internationally, E1, E3, and E4 are most common.

Note

A DS2 is roughly four times the bandwidth of a DS1, a DS3 is roughly seven times the bandwidth of a DS2, and a DS4 is roughly six times the bandwidth of a DS3. But the rates are not integer multiples of one another. The next step up in the hierarchy is always an integer multiple plus some additional bits. Thus, TDM has to be done asynchronously.


Prior to a fully synchronized digital network, digital signals had to be multiplexed asynchronously. Figure 2-6 shows the signal rates in the North American and international digital hierarchies. An example of asynchronous multiplexing can be seen in the process of combining 28 DS1s to form 1 DS3 (North American hierarchy). Because each of the 28 constituent DS1s can have a different source clock, the signals cannot be bit- or byte-interleaved to form a higher-rate signal. First, their rates must be adjusted so that each signal is at exactly the same rate. This is accomplished by bit-stuffing each DS1 to pad it up to a higher common rate. In addition, control bits are inserted to identify where bit stuffing has taken place.

Figure 2-6. Digital Transmission Before SONET/SDH Digital Signal Hierarchy


When all 28 DS1s are operating at the same nominal rate, the DS3 signal is formed by bit-interleaving the 28 signals. The insertion of stuffing and control bits is why a DS3 operates at 44.736 Mbps even though 28 x 1.544 Mbps = 43.232 Mbps.
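
The rate arithmetic can be sketched in a few lines; the stuffing-and-control overhead figure is simply the difference between the two standard rates:

```python
DS1_RATE = 1_544_000   # bps
DS3_RATE = 44_736_000  # bps

payload = 28 * DS1_RATE
print(payload)                 # 43_232_000 -> 43.232 Mbps of DS1 payload

overhead = DS3_RATE - payload
print(overhead)                # 1_504_000 bps of stuffing and control bits
```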

To demultiplex the signals, the control bits and stuffing bits must be removed. Because the control and stuff bits aren't fully visible at the DS3 level, the only way to demultiplex or remove one DS1 from the DS3 stream is to demultiplex the entire DS3 into its 28 constituent DS1s. For example, consider an intermediate network node in which you need to drop one DS1. First, the entire DS3 is demuxed into 28 DS1s, then the target DS1 is dropped, and finally the remaining 27 DS1s (and perhaps a 28th DS1 that is added) are remultiplexed to re-create the DS3 for transport to the next node. Figure 2-7 shows an example of multiplexing before SONET/SDH.

Figure 2-7. Multiplexing Before SONET/SDH


DS1 Frame

The most common time-division frame in the North American hierarchy is the DS1 frame. One DS1 carries twenty-four 64-kbps channels. Figure 2-8 shows an individual frame of a DS1 and indicates that it contains a single 8-bit sample from each of the 24 channels. These twenty-four 8-bit samples (24 x 8 = 192 bits) dominate the DS1 frame.

Figure 2-8. DS-1 Frame


Each frame also contains a single bit called the framing bit. The framing bit is used to identify frame boundaries. A fixed repetitive pattern is sent in the framing bit position. The receiver looks for this fixed framing pattern and locks on it. When the framing bit position is located, all other channels can be located by simply counting from the framing bit. Thus, in a TDM network, there is no requirement for special headers or other types of delimiters. Channels are identified simply by their position in the bit stream.

Each DS1 frame has 193 bits. The frame is repeated every 125 microseconds (8000 times per second), leading to an overall bit rate of 1.544 Mbps. All other digital time-division signals operate in a similar fashion. Some bits or bytes will always uniquely identify frame boundaries. When frame boundaries are established, individual channels can be located by a combination of counting and pre-established position in the multiplex structure.
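
The DS1 frame arithmetic above can be checked as follows:

```python
CHANNELS = 24
BITS_PER_CHANNEL = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000

bits_per_frame = CHANNELS * BITS_PER_CHANNEL + FRAMING_BITS
print(bits_per_frame)                         # 193 bits per frame
print(bits_per_frame * FRAMES_PER_SECOND)     # 1_544_000 -> 1.544 Mbps
```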

STS-1 Frame

The basic building block of a SONET signal is the STS-1 frame. To facilitate ease of display and understanding, SONET and SDH frames are usually described using a matrix structure, in which each element of a row or column in the matrix is 1 byte. The matrix always has nine rows, but the number of columns depends on the overall line rate. In the case of an STS-1, the frame is presented as a 9-row-by-90-column matrix, which results in a total of 810 bytes per frame. The bytes are transmitted from left to right, top to bottom. In Figure 2-9, the first byte transmitted is the one in the upper-left corner. Following that byte are the remaining 89 bytes of the first row, which are followed by the first byte in the second row, and so on until the right-most byte (column 90) of the bottom row is sent. What follows the last byte of the frame? Just as in any other TDM system, the first byte of the next frame.

Figure 2-9. STS-1 Frame


The transmission of 810 bytes at a rate of 8000 times per second results in an overall line rate of 51.84 Mbps.

Thirty-six bytes of the 810 bytes per frame, or roughly 2.3 Mbps, are dedicated to overhead. This results in a net payload rate of 49.536 Mbps.
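
The same style of arithmetic applies to the STS-1 frame. In this sketch, the 36 overhead bytes are the 27 transport-overhead bytes (3 columns x 9 rows) plus the 9 path-overhead bytes:

```python
ROWS, COLS = 9, 90
FRAMES_PER_SECOND = 8000

frame_bytes = ROWS * COLS                 # 810 bytes per frame
line_rate = frame_bytes * 8 * FRAMES_PER_SECOND
print(line_rate)                          # 51_840_000 -> 51.84 Mbps

OVERHEAD_BYTES = 36                       # 27 transport + 9 path overhead
payload_rate = (frame_bytes - OVERHEAD_BYTES) * 8 * FRAMES_PER_SECOND
print(payload_rate)                       # 49_536_000 -> 49.536 Mbps
```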

This might seem like an odd transmission rate. It's not an ideal match to either the DS1 or E1 hierarchy in terms of backward compatibility. However, as you'll see, it's a relatively good match as a single-rate compromise that's compatible with both the E1 and DS1 hierarchies.

If SONET rates were chosen simply for operation in North American networks, a different rate would have been chosen. Similarly, if SDH had been developed strictly for Europe, a rate that allows more efficient multiplexing from the E1 hierarchy would have been chosen. But for SONET and SDH to adopt common rates, compromises were made.

STS-1 Frame and the Synchronous Payload Envelope

Figure 2-10 illustrates the relationship between the STS-1 frame and the Synchronous Payload Envelope (SPE). The SPE is the portion of the STS-1 frame that is used to carry customer traffic. As the figure shows, the position of the SPE is not fixed within the frame. Instead, the SPE is allowed to "float" relative to the frame boundaries.

Figure 2-10. STS-1 SPE Relative to Frame Boundary


This doesn't mean that the SPE varies in size. The STS-1 SPE is always 9 x 87 bytes in length, and the first column of the SPE is always the path overhead. "Floating" means that the location of the SPE, as indicated by the first byte of the path overhead, can be located anywhere within the 783 payload bytes of the frame.

Because the location of the beginning of the SPE is not fixed, a mechanism must be available to identify where it starts. The specifics of the overhead bytes have yet to be presented, but they are handled with a "pointer" in the line overhead. The pointer contains a count in octets from the location of the pointer bytes to the location of the first byte of the path overhead.
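
As a hypothetical sketch (the real H1/H2 encoding, presented later, includes flags and concatenation indications that are omitted here), a pointer offset in the range 0 to 782 can be translated into a position within the 9-row-by-90-column frame:

```python
# Simplified model only: offset 0 is taken as the payload byte immediately
# following the H3 byte (row 4, column 4); real pointer processing is richer.
PAYLOAD_COLS = 87   # columns 4-90 of each row carry payload

def spe_start(pointer_offset):
    """Return (row, column), 1-indexed, of the first path-overhead byte."""
    if not 0 <= pointer_offset <= 782:
        raise ValueError("pointer offset must be 0-782")
    row = 4 + pointer_offset // PAYLOAD_COLS
    col = 4 + pointer_offset % PAYLOAD_COLS
    if row > 9:
        row -= 9            # the SPE spills into the following frame
    return row, col

print(spe_start(0))    # (4, 4)
print(spe_start(87))   # (5, 4) -- one full payload row later
```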

Several benefits are gained from this floating relationship. First, because the payload and the frame do not need to be aligned, the payload does not need to be buffered at the end nodes or at intermediate multiplexing locations to accomplish the alignment. The SPE can be immediately transmitted without frame buffering.

A second benefit, related to the first, occurs when creating higher-rate signals by combining multiple STS-1s to form an STS-N, such as an STS-12, at a network node.

The 12 incoming STS-1s might all derive timing from the same reference so that they are synchronized, but they might originate from different locations so that each signal has a different transit delay. As a result of the different transit delays, the signals arrive at the multiplex location out of phase. If the SPE had to be fixed in the STS-1 frame, each of the 12 STS-1s would need to be buffered by a varying amount so that all 12 signals could be phase-aligned with the STS-12 frame. This would introduce extra complexity and additional transit delay to each signal. In addition, this phase alignment would be required at every network node at which signals were processed at the Synchronous Transport Signal (STS) level. By allowing each of the STS-1s to float independently within their STS-1 frame, the phase differences can be accommodated and, thus, the associated complexity can be reduced. Reducing the requirement for buffering also reduces the transit delay across the network.

A final advantage of the floating SPE is that small variations in frequency between the clock that generated the SPE and the SONET clock can be handled by making pointer adjustments.

The details of pointer adjustments are beyond the scope of this chapter, but this basically involves occasionally shifting the location of the SPE by 1 byte to accommodate clock frequency differences. This enables the payload to be timed from a slightly different clock than the SONET network elements without incurring slips. The pointer adjustments also allow the payload clock to be tunneled through the SONET network.

SONET/SDH Rates and Tributary Mapping

The SONET and SDH standards define transmission rates that are supported in MSPP systems. These rates are specific to each of these two standards, yet (as you will see) they are closely related. In this section, you will learn about these rates of transmission and how they are used to carry a variety of customer traffic.

SONET Rates

Figure 2-11 shows the family of SONET rates and introduces the terminology that is used to refer to the rates. The base SONET rate is 51.84 Mbps.

Figure 2-11. SONET Rates


This rate is the result of the standards body compromises between the SDH and SONET camps, which led to equally efficient (or inefficient, depending on your point of view) mapping of subrate signals (DS1 and E1) into the signal rate. All higher-rate signals are integer multiples of 51.84 Mbps. The highest rate currently defined is 39.813 Gbps (768 x 51.84 Mbps). If traditional rate steps continue to be followed, the next step will be four times this rate, or approximately 160 Gbps.

The SONET signal is described in both the electrical and optical domains. In electrical format, it is the STS. In the optical domain, it is called an Optical Carrier (OC). In both cases, the integer number that follows the STS or OC designation refers to the multiple of 51.84 Mbps at which the signal is operating.

Sometimes confusion arises regarding the difference between the STS and OC designations. When are you talking about an OC instead of an STS? The simplest distinction is to think of STS as electrical and OC as optical. When discussing the SONET frame format, the assignment of overhead bytes, or the processing of the signal at any subrate level, the proper signal designation is STS. When describing the composite signal rate and its associated optical interface, the proper designation is OC. For example, the signal transported over the optical fiber is an OC-N, but because current switching fabric technology is typically implemented using electronics (as opposed to optics), any signal manipulation in an add/drop or cross-connect location is done at the STS level.

SDH Rates

Figure 2-12 shows the rates that SDH currently supports. The numbers in the Line Rate and Payload Capacity columns should look familiar: They are exactly the same as the higher rates defined for SONET. SDH does not support the 51.84-Mbps signal because no international hierarchy rate maps efficiently to this signal rate; that is, E3 is roughly 34 Mbps and E4 is roughly 140 Mbps. So the SDH hierarchy starts at three times the SONET base rate, or 155.52 Mbps, which is a fairly good match for E4.

Figure 2-12. SDH Rates


SDH calls the signal a Synchronous Transport Module (STM) and makes no distinction between electrical and optical at this level. The integer designation associated with the STM indicates the multiple of 155.52 Mbps at which the signal is operating.

You can see how well the standards body compromise on SONET/SDH rates worked by comparing the capacity in DS0s column of this chart with the similar column of the SONET chart. The DS0 capacity is equivalent for each line rate. This implies that the efficiency of mapping E1s into SDH is equivalent to the efficiency of mapping DS1s into SONET.

When you look at the two charts, notice that even though all the terminology is different, the rate hierarchies are identical. Also note that capacity in DS0s is the same, so the two schemes are equally efficient at supporting subrate traffic, whether it originates in the DS1 or E1 hierarchies. This compatibility in terms of subrate efficiency is part of the reason for the SONET base rate of 51.84 Mbps; it was a core rate that could lead to equal efficiency.

Even though the rates are identical, the SDH and SONET standards are not identical. Differences must be taken into account when developing or deploying one technology versus the other. The commonality in rates is a huge step in the right direction, but you still need to know whether you're operating in a SONET or SDH environment.

Transporting Subrate Channels Using SONET

When SONET and SDH standards were developed during the 1980s, the dominant network traffic was voice calls, operating at 64 kbps. Any new transmission system, such as SONET/SDH, had to be backward compatible with these existing signal hierarchies. To accommodate these signals, SONET has defined a technique for mapping them into the SONET synchronous payload envelope. Mappings for DS1, E1, DS1C, DS2, and DS3 signals have been defined. The mappings involve the use of a byte-interleaved structure within the SPE. The individual signals are mapped into a container called a virtual tributary (VT). VTs are then mapped into the SPE using a structure called a virtual tributary group (VTG); Figure 2-13 shows an example of a VTG. VTs define the mechanisms for transporting existing digital hierarchy signals, such as DS1s and E1s, within the SONET payload. Understanding the VT structure and its mapping into the SONET payload enables you to understand how DS1 and E1 can be accommodated for transport within SONET. This also clarifies the flexibility for transporting these signals and how channel capacity must be sized to meet the customer's transport needs.

Figure 2-13. VTGs


The basic container for transporting subrate traffic within the SONET SPE is the VTG. The VTG is a subset of the payload within the SPE. The VTG is a fixed time-division multiplexed signal that can be represented by a 9-row-by-12-column matrix, in which the members of each row and column are bytes, just as in the previous example of the SONET frame. If you do the arithmetic, you'll find that each VTG has a bandwidth of 6.912 Mbps, and a total of seven VTGs can be transported within the SPE. An individual VTG can carry only one type of subrate traffic (for example, only DS1s). However, different VTGs within the same SPE can carry different subrates. No additional management overhead is assigned at the VTG level, but as you'll see, additional overhead is assigned to each virtual tributary that is mapped into a VTG.
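
Doing that arithmetic explicitly:

```python
ROWS, VTG_COLS = 9, 12
FRAMES_PER_SECOND = 8000

vtg_bytes = ROWS * VTG_COLS                  # 108 bytes per frame
print(vtg_bytes * 8 * FRAMES_PER_SECOND)     # 6_912_000 -> 6.912 Mbps

SPE_PAYLOAD_COLS = 86                        # 87 columns minus path overhead
print(SPE_PAYLOAD_COLS // VTG_COLS)          # 7 VTGs fit in one SPE
```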

The value of the VTG is that it allows different subrates to be mapped into the same SPE. When the SPE of an STS-1 is defined to carry VTGs, the entire SPE must be dedicated to transporting VTGs; that is, you cannot mix circuit and packet data in the same SPE except by using the VTG structure.

Now that you know about the structure of an individual VTG, let's see how the VTGs are multiplexed into the STS-1 SPE. Figure 2-14 illustrates this. As with all the other multiplexing stages within SONET/SDH, the seven VTGs are multiplexed into the SPE through byte interleaving. As discussed previously, the first column of the SPE is the Path Overhead column. This byte is followed by the first byte of VTG number 1, then the first byte of VTG number 2, and so on through the first byte of VTG number 7. This byte is followed by the second byte of VTG 1, as shown in Figure 2-14. The net result is that the path overhead and all the bytes of the seven VTGs are byte-interleaved into the SPE.

Figure 2-14. VTG Structure


Note that columns 30 and 59 are labeled "Fixed Stuff." These byte positions are skipped when the payloads are mapped into the SPE, and a fixed character is placed in those locations. The Fixed Stuff columns are required because the payload capacity of the SPE is slightly greater than the capacity of seven VTGs. The SPE has 86 columns after allocating space for the path overhead. But the seven VTGs occupy only 84 columns (7 x 12). The two Fixed Stuff columns are just a standard way of padding the rate so that all implementations map VTGs into the SPE in the same way.
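
The column layout just described can be sketched as a small lookup. This is a simplified model of the round-robin interleave order, not an implementation of the standard:

```python
# SPE column 1 is path overhead, columns 30 and 59 are fixed stuff, and the
# remaining 84 columns carry the seven VTGs, byte-interleaved in round-robin
# order (column 2 = VTG 1, column 3 = VTG 2, ..., column 8 = VTG 7, repeat).
def spe_column_owner(col):
    if col == 1:
        return "path overhead"
    if col in (30, 59):
        return "fixed stuff"
    # Count only the VTG-bearing columns that precede this one.
    vtg_cols_before = col - 2 - sum(1 for stuff in (30, 59) if stuff < col)
    return f"VTG {vtg_cols_before % 7 + 1}"

print(spe_column_owner(2))    # VTG 1
print(spe_column_owner(8))    # VTG 7
print(spe_column_owner(31))   # VTG 1 -- the rotation resumes after the stuff column
```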

Individual signals from the digital hierarchy are mapped into the SONET payload through the use of VTs. VTs, in turn, are mapped into VTGs. A VT mapping has been defined for each of the multiplexed rates in the existing digital hierarchy.

For example, a DS1 is transported by mapping it into a type of VT referred to as VT1.5s. Similarly, VT mappings have been defined for E1 (VT2), DS1C (VT3), and DS2 (VT6) signals. In the current environment, most implementations are based on VT1.5 and VT2. Because each of the subrates is different, the number of bytes associated with each of the VT types is also different.

As we said, VTGs are fixed in size at 9 x 12 = 108 bytes per frame. Because the size of the individual VTs is different, the number of VTs per VTG varies.

A VTG can support four VT1.5s, three VT2s, two VT3s, or one VT6, as shown in Figure 2-15. Only one VT type can be mapped into a single VTG, such as four VT1.5s or three VT2s; however, you cannot mix VT2s and VT1.5s within the same VTG. Different VTGs within the same SPE can carry different VT types. For example, of the seven VTGs in the SPE, five might carry VT1.5s and the remaining two could carry VT2s, if an application required this traffic mix.
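
The VT counts follow directly from the byte sizes of each VT type (VT1.5 = 9 x 3 = 27 bytes, VT2 = 9 x 4 = 36, VT3 = 9 x 6 = 54, VT6 = 9 x 12 = 108) divided into the fixed 108-byte VTG:

```python
VTG_BYTES = 108   # 9 rows x 12 columns per frame

vt_bytes = {"VT1.5": 27, "VT2": 36, "VT3": 54, "VT6": 108}
for vt, size in vt_bytes.items():
    print(vt, VTG_BYTES // size)   # VT1.5 4, VT2 3, VT3 2, VT6 1
```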

Figure 2-15. Four DS1s Mapped into a VTG


As a reminder, VT path overhead is associated with each VT, which can be used for managing the individual VT path. In addition, a variety of mappings of the DS1 or E1 signal into the VT have been defined to accommodate different clocking situations, as well as to provide different levels of DS0 visibility within the VT.

The most common VT in North America is the VT1.5. The VT1.5 uses a structure of 9 rows x 3 columns = 27 bytes per frame (1.728 Mbps) to transport a DS1 signal. The extra bandwidth above the nominal DS1 signal rate is used to carry VT overhead information. Four VT1.5s can be transported within a VTG. The four signals are multiplexed using byte interleaving, similar to the multiplexing that occurs at all other levels of the SONET/SDH hierarchy. The net result, in the context of the SONET frame, is that the individual VT1.5s occupy alternating columns within the VTG. Figure 2-15 shows an example of four DS1 signals mapped into a VTG.

Outside North America, the 2.048-Mbps E1 signal dominates digital transport at the lower levels. The VT2 was defined to accommodate the transport of E1s within SONET. The VT2 assigns 9 rows x 4 columns = 36 bytes per frame for each E1 signal, which is 4 more bytes per frame than the standard E1. As is the case for VT1.5s, the extra bandwidth is used for VT path overhead. Because the VT2 has four columns, only three VT2s can fit in a VTG. The VTG is again formed by byte-interleaving the individual VT2s.
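
The VT rate arithmetic for both mappings:

```python
FRAMES_PER_SECOND = 8000

vt15_rate = 9 * 3 * 8 * FRAMES_PER_SECOND   # 27 bytes per frame
print(vt15_rate)                            # 1_728_000 vs. the 1.544-Mbps DS1

vt2_rate = 9 * 4 * 8 * FRAMES_PER_SECOND    # 36 bytes per frame
print(vt2_rate)                             # 2_304_000 vs. the 2.048-Mbps E1
```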

Signals of Higher Rates

In this section, you'll learn about the creation of higher-rate signals, such as STS-48s or STS-192s, and concatenation.

Remember that rates of STS-N, where N = 1, 3, 12, 48, 192, or 768 are currently defined. An STS-N is formed by byte-interleaving N individual STS-1s. Except for the case of concatenation, which is discussed shortly, each of the STS-1s within the STS-N is treated independently and has its own associated section, line, and path overhead.

At any network cross-connect or add/drop node, the individual STS-1s that form the STS-N can be changed. The SPE of each STS-1 independently floats with respect to the frame boundary.

In an OC-48, the 48 SPEs can each start at a different byte within the payload. The H1 and H2 pointers (in the overhead associated with each STS-1) identify the SPE location. Similarly, pointer adjustments can be used to accommodate small frequency differences between the different SPEs.

When mapping higher-layer information into the SPE (such as VTGs or Packet over SONET), the SPE frame boundaries must be observed. For example, an STS-48 can accommodate 48 separate time-division channels of roughly 50 Mbps each. The payload that's mapped into the 48 channels is independent, and any valid mapping can be transported in the channel. However, channels at rates higher than 50 Mbps cannot be accommodated within an OC-48. For higher rates, concatenation is required.

Byte Interleaving to Create Higher-Rate Signals

Figure 2-16 shows an example of three STS-1s being byte-interleaved to form an STS-3. The resultant signal frame is now a 9-row-by-270-column matrix. The first nine columns are the byte-interleaved transport overhead of each of the three STS-1s. The remaining 261 columns are the byte-interleaved synchronous payload envelopes. Higher-rate signals, such as STS-48s or STS-192s, are formed in a similar fashion. Each STS-1 in the STS-N adds 3 columns of transport overhead and 87 columns of payload. All the individual STS-1s are byte-interleaved in a single-stage process to form the composite signal. Each of the STS-1s is an independent time-division signal that shares a common transport structure. The maximum payload associated with any signal is roughly 50 Mbps. A technique called concatenation must be used to transport individual signals at rates higher than 50 Mbps.
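
Byte interleaving itself is simple to sketch. In this toy model, three-byte strings stand in for STS-1 byte streams:

```python
# Combine N tributary byte streams into one higher-rate stream by taking
# one byte from each in turn, as SONET does when forming an STS-N.
def byte_interleave(streams):
    assert len({len(s) for s in streams}) == 1, "streams must be equal length"
    out = bytearray()
    for i in range(len(streams[0])):
        for stream in streams:
            out.append(stream[i])
    return bytes(out)

sts1_a, sts1_b, sts1_c = b"AAA", b"BBB", b"CCC"
print(byte_interleave([sts1_a, sts1_b, sts1_c]))   # b'ABCABCABC'
```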

Figure 2-16. Creating Higher-Rate Signals


Concatenation

Increasingly, data applications in the core of the network require individual channels to operate at rates much greater than the 50 Mbps that can be accommodated in a single STS-1. To handle these higher rate requirements, SONET and SDH define a concatenation capability.

Concatenation joins the bandwidth of N STS-1s (or N STM-1s) to form a composite signal whose SPE bandwidth is N multiplied by the STS-1 SPE bandwidth of roughly 50 Mbps. Signal concatenation is indicated by a subscript c following the rate designation. For example, an OC-3c means that the payload of three STS-1s has been concatenated to form a single signal whose payload is roughly 150 Mbps.

The concatenated signal must be treated in the network as a single composite signal. The payload mappings are not required to follow the frame boundaries of the individual STS-1s.

Intermediate network nodes must treat the signal as a contiguous payload. Only a single path overhead is established because the entire payload is a single signal. Many of the transport overhead bytes for the higher-order STSs are not used because their functions are redundant when the payload is treated as a single signal.

Concatenation is indicated in the H1 and H2 bytes of the line overhead. The bytes essentially indicate whether the next SPE is concatenated with the current SPE. It is also possible to have concatenated and nonconcatenated signals within the same STS-N. As an example, an STS-48 (OC-48) might contain 10 STS-3cs and 18 STS-1s.

No advantage is gained from concatenation if the payload consists of VTs containing DS1s or E1s. However, data switches (for example, IP routers) typically operate more cost-effectively if they can support a smaller number of high-speed interfaces instead of a large number of lower-rate interfaces (if you assume that all the traffic is going to the same next destination).

So the function of concatenation is predominantly to allow more cost-effective transport of packet data.

Figure 2-17 shows an example of a concatenated frame. It's still nine rows by N x 90 columns. The first 3N columns are still reserved for overhead, and the remaining N x 87 columns are part of the SPE. The difference is that, except for the concatenation indicators, only the transport overhead associated with the first STS-1 is used in the network.

Figure 2-17. Concatenated Frames


In addition, the payload is not N byte-interleaved SPEs: The payload is a single SPE that fills the full N x 87 columns of the frame. A single path overhead is associated with the concatenated signal. The 9 bytes per frame of path overhead that are normally associated with the remaining STS-1s are available for payload in the SPE. A variety of mappings have been identified to transport packet protocols within the SPE.
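
A sketch of the resulting payload capacity, taking N x 87 SPE columns and subtracting the single path-overhead column:

```python
FRAMES_PER_SECOND = 8000

def stsnc_payload_bps(n):
    """Payload capacity of an STS-Nc: N x 87 SPE columns, minus one
    path-overhead column, times 9 rows, 8 bits, and 8000 frames/second."""
    return (n * 87 - 1) * 9 * 8 * FRAMES_PER_SECOND

print(stsnc_payload_bps(3))    # 149_760_000 -> the "roughly 150 Mbps" OC-3c payload
print(stsnc_payload_bps(48))   # 2_404_800_000 -> roughly 2.4 Gbps for an OC-48c
```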

Even though concatenation does allow greater flexibility and efficiency for higher-rate data communications, there are several limitations to its use.

First is the signal granularity. Each rate in the hierarchy is four times the preceding rate. That makes for very large jumps between the successive rates that are available. The signal granularity issue is compounded when looking at rates common to data communications applications. For example, Ethernet and its higher-speed variants are increasingly popular data-transport rates for the metropolitan-area networks (MANs) and wide-area networks (WANs). But in the Ethernet rate family of 10 Mbps, 100 Mbps, 1 Gbps, and 10 Gbps, only the 10-Gbps signal is a close match to an available concatenated rate.

An additional problem is that network providers are not always equipped to handle concatenated signals, especially at the higher rates, such as OC-192c. Implementations that operate at these high rates today are often on a point-to-point basis, and the signal does not transit through intermediate network nodes. This problem of availability of concatenated signal transport is especially an issue if the signal transits multiple carrier networks.

In an attempt to address some of these limitations, you can use the virtual concatenation technique.

As the name implies, virtual concatenation means that the end equipment sees a concatenated signal, but the transport across the network is not concatenated. The service is provided by equipment at the edge of the network that is owned by either the service provider or the end user. The edge equipment provides a concatenated signal to its client (for example, an Internet Protocol [IP] router with an OC-48c interface), but the signals that transit the network are independent STS-1s or STS-3cs. The edge equipment essentially uses an inverse multiplexing protocol to associate all the individual STS-1s with one another to provide the virtually concatenated signal. This requires the transmission of control bytes to provide the association between the various independent STS channels on the transmit side.

The individual channels can experience different transit delays across the network, so at the destination, the individual signals are buffered and realigned to provide the concatenated signal to the destination client.

Virtual concatenation defines an inverse-multiplexing technique that can be applied to SONET signals. It has been defined at the VT1.5, STS-1, and STS-3c levels. At the VT1.5 level, it's possible to define channels with payloads in steps of 1.5 Mbps by virtually concatenating VT1.5s. Up to 64 VT1.5s can be grouped. For example, standard Ethernet requires a 10-Mbps channel; this can be accomplished by virtually concatenating seven VT1.5s.
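
The group-sizing arithmetic can be sketched as follows; the 1.6-Mbps usable payload per VT1.5 is an approximation (the 1.728-Mbps VT1.5 rate minus VT overhead):

```python
import math

VT15_PAYLOAD_BPS = 1_600_000   # approximate usable payload of one VT1.5

def vt15_group_size(client_bps):
    """Number of virtually concatenated VT1.5s needed for a client rate
    (simplified; real mappings depend on the exact payload capacity)."""
    return math.ceil(client_bps / VT15_PAYLOAD_BPS)

print(vt15_group_size(10_000_000))   # 7 VT1.5s for 10-Mbps Ethernet
```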

Similarly, a 100-Mbps channel for Fast Ethernet can be created by virtually concatenating two STS-1s. Virtual concatenation of STS-3cs provides the potential for several more levels of granularity than provided by standard concatenation techniques.

SONET/SDH Equipment

SONET/SDH networks are typically constructed using four different types of transmission equipment, as shown in Figure 2-18. These are path-terminating equipment, regenerators, add/drop multiplexers, and digital cross-connects. Each equipment type plays a slightly different role in supporting the delivery of services over the SONET/SDH infrastructure. All are necessary to provide the full range of network capabilities that service providers require.

Figure 2-18. SONET/SDH Equipment


Path-terminating equipment (PTE), also sometimes called a terminal multiplexer, is the SONET/SDH network element that originates and terminates the SONET/SDH signal. For example, at the originating node, the PTE can accept multiple lower-rate signals, such as DS1s and DS3s, map them into the SONET payload, and associate the appropriate overhead bytes with the signal to form an STS-N. Similarly, at the destination node, the PTE processes the appropriate overhead bytes and demultiplexes the payload for distribution in individual lower-rate signals.

When digital signals are transmitted, the pulses that represent the 1s and 0s are very well defined. However, as the pulses propagate down the fiber, they are distorted by impairments such as loss, dispersion, and nonlinear effects. To ensure that the pulses can still be properly detected at the destination, they must occasionally be reformed to match their original shape and format. The regenerator performs this reforming function.

The beauty of digital transmission is that, as long as the regenerators are placed close enough together that they don't make mistakes (that is, a 1 is reformed to look like a clean 1, not a zero, and vice versa), digital transmission can be essentially error free.

The regenerator function is said to be 3R, although there is sometimes disagreement over exactly what the three R's stand for.

The regenerator performs these three functions:

  • Refresh or amplify the signal to make up for any transmission loss

  • Reshape the signal to its original format to offset the effects of dispersion or other impairments that have altered the signal's pulse shape

  • Retime the signal so that its leading edge is consistent with the timing on the transmission line
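The three functions above can be illustrated with a toy decision circuit. This is purely conceptual: a real regenerator operates on the optical or electrical waveform itself, not on a list of samples, and the function name and sampling model here are assumptions for illustration. Each bit slot holds several noisy, attenuated samples; the sketch decides 1 or 0 per slot and re-emits clean, full-amplitude pulses on an even clock grid.

```python
# Minimal sketch of the 3R idea on a sampled binary waveform
# (illustrative only; not how a physical regenerator is built).
def regenerate(samples, samples_per_bit, threshold=0.5):
    out = []
    for i in range(0, len(samples), samples_per_bit):
        slot = samples[i:i + samples_per_bit]
        # Decide the bit from the slot average (reshape + reamplify) ...
        bit = 1.0 if sum(slot) / len(slot) > threshold else 0.0
        # ... and re-emit it on a clean clock grid (retime).
        out.extend([bit] * samples_per_bit)
    return out

noisy = [0.7, 0.65, 0.8,   # attenuated, distorted "1"
         0.1, 0.2, 0.05]   # noisy "0"
print(regenerate(noisy, samples_per_bit=3))
# -> [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```

The essential point, as the text notes, is that as long as the decision is made before the noise makes it ambiguous, the output pulses are indistinguishable from the originals.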

In today's world, the full 3R functionality requires optical-electrical-optical (O-E-O) conversion. Because it is tied to the electrical signal format, the regenerator is unique to the line rate and signal format being used. Upgrades in line rate, say from OC-48 to OC-192, require a change-out of regenerators. Such a change-out is very expensive, so network operators try to minimize the number of regenerators that are required on a transmission span. One of the benefits of optical amplifiers has been that they have significantly reduced the number of 3R regenerators required on a long-haul transmission span.

The add/drop multiplexer (ADM) is used, as the name implies, to "add" and "drop" traffic at a SONET/SDH node in a linear or ring topology. The bandwidth of the circuits being added and dropped varies depending on the area of application. This can range from DS1/E1 for voice traffic up to STS-1/STM-1 levels and, in some cases, even concatenated signals for higher-rate data traffic.

The functions of an ADM are very similar to the functions of a terminal multiplexer, except that the ADM also handles pass-through traffic.

Although this book is primarily related to MSPP, the next network element of a SONET network bears some mention and explanation.

The Digital Cross-Connect System (DCS) exchanges traffic between different fiber routes. The key difference between the cross-connect and the add/drop is that the cross-connect provides a switching function, whereas the ADM performs a multiplexing function. The cross-connect moves traffic from one facility route to another.

A cross-connect is also used as the central connection point when linear topologies are connected to form a mesh. There might be no local termination of traffic at the cross-connect location. In fact, if there is, the traffic might first be terminated on an add/drop multiplexer or terminal multiplexer, depending on the signal level at which the cross-connect operates.

DCSs are generally categorized as narrowband, wideband, or broadband digital cross-connects.

A narrowband digital cross-connect is designed for interconnecting a large number of channels at the DS0 level.

A wideband digital cross-connect is designed for interconnecting a large number of channels at the DS1 basic level.

A broadband digital cross-connect is designed for interconnecting a large number of channels at the DS3 and higher levels.




Building Multiservice Transport Networks
ISBN: 1587052202
Year: 2004
Pages: 140