Multimedia Networking Requirements

Today's networks simply were not built for multimedia and, in particular, for applications that involve video communications, multimedia collaboration, or interactive rich media. Curiously, it is through the use of sophisticated computer applications and devices that we have learned what the human information-processing model comprises: We rely very strongly on the visual information stream for rapid absorption and longer retention. More than 50% of a human's brain cells are devoted to processing visual information, and, combined with the delights of sound, smell, and touch (and despite our enormous dependence on the written word), we are very active in processing cues from the physical world. By changing the cues, we can change the world. Applications in every conceivable sort of format (audio, animation, graphics, full-motion video, shared applications, whiteboards, online communities, and so on) will increasingly depend on digital rich media.

Digital video and digital audio require minimal, predictable delays in packet transmission, which conventional shared-bandwidth, connectionless networks do not offer. (Chapter 4, "Establishing Communications Channels," discusses connectionless networks in detail.) They also require tight control over losses, yet loss control is not accounted for in connectionless networks. As more people simultaneously access files from a server, bandwidth becomes a significant issue, and correct timing, synchronization, and video picture quality are compromised if the bandwidth is insufficient.

Two key issues relate to multimedia communications: the nature of digital video and the role of television.

Digital Video

One of the fascinating areas driving and motivating the need for broadband access is television. Although television has a tremendous following throughout the world (greater than that of computing or telecommunications), it has remained largely untouched by the digital revolution; that is poised to change in the near future. Despite major advances in computing, video, and communications technologies, television has continued to rely on standards that are more than 55 years old. The biggest shortcoming of the existing television standards (that is, National Television Standards Committee [NTSC], used in North America and Japan; Phase Alternation Line [PAL], used throughout much of the world; and Systeme Electronique Couleur Avec Memoire [SECAM], used in France and French territories) is that they are analog systems, in which video signals degrade quickly under adverse conditions. Most of this degradation occurs along the path the picture travels from the studio to your TV. Digital TV (DTV) offers numerous advantages over the old analog TV signal, among them the fact that it is nearly immune to interference and degradation. Another advantage of DTV is the ability to display a much wider range of colors: The typical analog television can display around 32,000 colors, whereas the human eye can discriminate some 16,000,000 colors, and sophisticated computer monitors and DTVs can display those 16,000,000 colors and more. Most importantly, digital technology will convert television from a mechanism that supports passive viewing into an engaging and interactive sensory experience, an environment in which you choose when, where, and how you engage with the world at your disposal.

People have only so much time and money to spend on electronic goods and services, and in many parts of the world, the first thing they seem willing to spend both on is entertainment. Therefore, the television industry, along with the content, entertainment, and application worlds, will be increasingly important to how the local loop develops and to how it further drives the introduction of home area networking. Of course, TV and networks will deliver more than entertainment. They will deliver edutainment and infotainment, too, presenting the information and knowledge you need in a format that is palatable and that ensures rapid, effective assimilation and retention. Video and multimedia facilitate our ability to retain information and therefore will become the basis of much information delivery. This will drive the need for more bandwidth, not just to the home but also within the home, to network the range of computing and entertainment systems.

What would be required to carry a digitized stream to today's television? In the U.S. system, an NTSC stream would require approximately 160Mbps, a digitized PAL stream would require about 190Mbps, and high-definition TV (HDTV) would require 1.5Gbps. Videoconferencing requires much less bandwidth than TV, but it still requires a substantial amount; the ITU's H.323 standard allows videoconferencing to be carried at bandwidths ranging from 384Kbps to 1.5Mbps. Streaming video requirements vary with the desired quality: Low quality requires about 3Mbps, medium quality about 5Mbps, and high quality about 7Mbps.
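
To see where figures of this magnitude come from, consider the raw (uncompressed) bit rate of a digital video stream, which is simply pixels per frame times frames per second times bits per pixel. The short sketch below is a back-of-the-envelope calculation, not a derivation of the exact figures above; the resolutions and sample depths chosen (720 x 480 at 16 bits per pixel for NTSC-like video, 1920 x 1080 at 24 bits per pixel for HDTV) are illustrative assumptions.

    # Back-of-the-envelope uncompressed video bit rates (illustrative assumptions).
    def raw_bitrate_mbps(width, height, fps, bits_per_pixel):
        """Uncompressed bit rate in megabits per second."""
        return width * height * fps * bits_per_pixel / 1_000_000

    # NTSC-like 720 x 480 at 30 fps, assuming 16 bits/pixel
    print(round(raw_bitrate_mbps(720, 480, 30, 16)))    # ~166 Mbps, on the order of 160Mbps
    # HDTV 1920 x 1080 at 30 fps, assuming 24 bits/pixel
    print(round(raw_bitrate_mbps(1920, 1080, 30, 24)))  # ~1493 Mbps, roughly 1.5Gbps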

An important driver behind broadband access is content, and much of the content for which people are willing to pay is entertainment oriented. The television industry has yet to undergo the revolution that digital technology has caused in other communications-related industries, and it has yet to capitalize on the potential new revenue-generating services that personal digital manipulation may allow. With the introduction of DTV and the mandate by spectrum management agencies to phase out or decommission analog broadcast, we will need a much greater amount of bandwidth into our homes to feed the new generations of televisions.

In terms of information transfer, television is generally associated with the concept of broadcast or cable delivery of someone else's programming on someone else's timetable. Video is associated with the ability to record, edit, or view programming on demand, according to your own timetable and needs. Multimedia promises to expand the role of video-enabled communications, ultimately effecting a telecultural shift, with the introduction of interactive television.

Video Compression

To make the most of bandwidth, we need to apply compression to video. Full-motion digital video needs as much compression as possible in order to fit on most standard storage devices. The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC in charge of developing standards for the coded representation of digital audio and video. It created the MPEG compression algorithm, which reduces redundant information in images. One distinguishing characteristic of MPEG compression is that it is asymmetric: A lot of work occurs on the compression side and very little on the decompression side. It is off-line versus real-time processing. Off-line compression allows ratios of 80:1 or 400:1, so it takes 80 or 400 times longer to compress than to decompress; it can take as much as an hour to compress one minute of video. The advantage of this asymmetrical approach is that digital movies compressed using MPEG run faster and take up less space. MPEG is hardware dependent.
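
As a rough illustration of why those ratios matter (using assumed round figures, not numbers from any particular MPEG profile), one minute of uncompressed NTSC-quality video at roughly 160Mbps amounts to about 1.2GB, which shrinks to around 15MB at 80:1 or about 3MB at 400:1, small enough to fit comfortably on ordinary storage media.

    # Illustrative storage arithmetic for compression ratios (assumed raw rate of 160 Mbps).
    raw_mbps = 160                                   # approximate uncompressed NTSC-quality bit rate
    seconds = 60                                     # one minute of video
    raw_megabytes = raw_mbps * seconds / 8           # ~1200 MB (about 1.2 GB) uncompressed

    for ratio in (80, 400):
        compressed_mb = raw_megabytes / ratio
        print(f"{ratio}:1 -> {compressed_mb:.0f} MB per minute")
    # 80:1  -> 15 MB per minute
    # 400:1 -> 3 MB per minute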

There are several MPEG standards, in various stages of development and completion, and with different targeted uses. The following are some of the most common MPEG standards:

- MPEG-1: MPEG-1 is the standard on which such products as Video CD and MP3 are based. It addresses VHS-quality images with a 1.5Mbps data rate and can play back from a single-speed CD-ROM player (150KBps, or about 1.2Mbps) at 352 x 240 (that is, quarter screen) at 30 frames per second (fps).

- MPEG-2: MPEG-2 is the standard on which such products as DTV set-top boxes and DVD are based, and at this point it is the compression scheme of choice. It addresses DTV- or computer-quality images with a 6Mbps data rate and offers resolutions of 720 x 480 and 1280 x 720 at 30 fps, with full CD-quality audio.

- MPEG-4: MPEG-4, an evolution of MPEG-2, features audio, video, and systems layers and offers variable-bit-rate encoding for both narrowband and broadband delivery in a single file. It also uses an object-based compression method rather than MPEG-2's frame-based compression. MPEG-4 enables objects such as 2D and 3D video objects, text, graphics, and sound to be manipulated and made interactive through Web-like hyperlinks and/or multimedia triggers. One of MPEG-4's best features is that the RealNetworks player, Microsoft Windows Media Player, and Apple QuickTime all support it.

- MPEG-7: MPEG-7, which is scheduled for completion in 2001, deals mainly with providing descriptions of multimedia content.

- MPEG-21: Today, many elements are involved in building an infrastructure for the delivery and consumption of multimedia content, but there is no big picture to describe how these elements relate to each other. MPEG-21 was created to provide a framework for the all-electronic creation, production, delivery, and trade of content; within that framework, the other MPEG standards can be used where appropriate. The basic architectural concept in MPEG-21 is the "digital item." Digital items are structured digital objects with a standard representation and identification, as well as metadata. Basically, a digital item is a combination of resources (for example, videos, audio tracks, images), metadata (such as MPEG-7 descriptors), and structure (describing the relationships among the resources), as sketched in the example after this list.
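
The following is a purely illustrative sketch of that three-part idea (resources, metadata, structure) in ordinary code. The class and field names are invented for illustration; they are not the normative MPEG-21 Digital Item Declaration schema.

    from dataclasses import dataclass, field

    @dataclass
    class Resource:
        """An individual asset, e.g., a video, an audio track, or an image."""
        uri: str
        media_type: str                               # e.g., "video/mpeg", "audio/mpeg"

    @dataclass
    class DigitalItem:
        """Illustrative digital item: resources + metadata + structure (not the MPEG-21 DIDL)."""
        identifier: str                                              # standard identification
        resources: list[Resource] = field(default_factory=list)
        metadata: dict[str, str] = field(default_factory=dict)       # e.g., MPEG-7-style descriptors
        structure: dict[str, list[str]] = field(default_factory=dict)  # relationships among resources

    # Example: a video clip packaged with its soundtrack
    item = DigitalItem(
        identifier="urn:example:item:0001",
        resources=[Resource("clip.mpg", "video/mpeg"), Resource("track.mp3", "audio/mpeg")],
        metadata={"title": "Example clip", "genre": "demo"},
        structure={"clip.mpg": ["track.mp3"]},        # the video references its audio track
    )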

MPEG-1, MPEG-2, and MPEG-4 are primarily concerned with the coding of audio/visual content, whereas MPEG-7 is concerned with providing descriptions of multimedia content, and MPEG-21 seeks to provide a framework for the all-electronic creation, production, delivery, and trade of content.

MPEG-4 is intended to expand the scope of audio/visual content to include simultaneous use of both stored and real-time components, distribution from and to multiple endpoints, and the reuse of both content and processes. Standards are being proposed as far out as MPEG-21 for specialized applications. Faster compression techniques using fractal geometry and artificial intelligence are being developed and could theoretically achieve compression ratios of 2,500:1. Implemented in silicon, this would enable full-screen, NTSC-quality video to be delivered over a LAN, as well as over the traditional PSTN and wireless networks.

Until better compression schemes are developed, most users have standardized on MPEG-2. By applying MPEG-2 encoding to NTSC, we can reduce the bandwidth to 2.7Mbps. Broadcast quality would be reduced to 7.2Mbps, DVD would require 10.8Mbps, and HDTV would come down to 20Mbps. Even so, how many of us have 20Mbps pipes coming into our homes now? Not even a 1.5Mbps connection over DSL or cable modem can come close to carrying a 20Mbps DTV signal. This tells us that broadband access alternatives will shift over time. We will need more fiber, we will need that fiber closer to the home, and we will need much more sophisticated compression techniques that enable us to make use of the even more limited wireless spectrum to carry information. We will also need to move forward with new generations of wireless technologies geared toward the support of multimedia capacities, combining intelligent spectrum use and highly effective compression, of course with support for the requisite variable QoS environment and strong security features.
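
A quick sanity check, using the figures above and an assumed 1.5Mbps access line, makes the gap concrete; the sketch below simply compares each MPEG-2 stream rate against the capacity of the pipe.

    # Compare MPEG-2 stream rates (from the text) against an assumed 1.5 Mbps access line.
    access_mbps = 1.5
    streams_mbps = {"NTSC (MPEG-2)": 2.7, "Broadcast quality": 7.2, "DVD": 10.8, "HDTV": 20.0}

    for name, rate in streams_mbps.items():
        verdict = "fits" if rate <= access_mbps else f"needs {rate / access_mbps:.1f}x the capacity"
        print(f"{name}: {rate} Mbps -> {verdict}")
    # Even the most heavily compressed NTSC stream (2.7 Mbps) exceeds a 1.5 Mbps DSL/cable link.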

Delay and Jitter

Along with demanding so much capacity, video and other real-time applications, such as audio and voice, also suffer from delay (that is, latency), and for them bit errors (missing video elements, synchronization problems, or complete loss of the picture) can be fatal. Delay in the network can wreak havoc with video traffic, and the delay increases as the number of switches and routers in the network increases. Today, the ITU recommends a maximum delay of 150 milliseconds, and evolving agreements promise packet loss of 1% or less per month and a round-trip latency guarantee of 80 milliseconds. The public Internet, however, suffers as much as 40% packet loss during peak traffic hours and average latencies of 800 to 1,000 milliseconds. Although we really cannot control the delay in the public Internet, we can engineer private IP backbones to provide the levels we are seeking.
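
Because end-to-end delay is essentially the sum of the per-hop delays, a simple budget calculation shows how quickly a path can exhaust the ITU's 150-millisecond recommendation. The per-hop values below are illustrative assumptions, not measured figures.

    # Illustrative one-way delay budget against the ITU's 150 ms recommendation.
    BUDGET_MS = 150

    # Assumed per-hop contributions (propagation plus queuing/switching) in milliseconds.
    hops_ms = [5, 12, 8, 20, 15, 10, 25, 9, 18, 14]   # ten switches/routers along the path

    total_ms = sum(hops_ms)
    print(f"Total one-way delay: {total_ms} ms "
          f"({'within' if total_ms <= BUDGET_MS else 'over'} the {BUDGET_MS} ms budget)")
    # Total one-way delay: 136 ms (within the 150 ms budget)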

Jitter is another impairment that has a big impact on video, voice, and other real-time traffic. Jitter is introduced when the delay does not remain constant throughout a network, so packets arrive at the receiving node at varying intervals. Video can tolerate a small amount of delay, but when congestion points slow the buffering of images, jitter causes distortion and highly unstable images. Reducing jitter means reducing or avoiding the congestion that occurs in switches and routers, which, in turn, means having as many priority queues as the network's QoS levels require.
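
For readers who want to quantify jitter, the sketch below uses the smoothed interarrival-jitter estimator defined for RTP in RFC 3550: for each pair of consecutive packets, the difference between the spacing observed at the receiver and the spacing at the sender is folded into a running estimate with a gain of 1/16. The timestamps here are invented for illustration.

    # Interarrival jitter estimate in the style of RTP (RFC 3550), with made-up timestamps (ms).
    send_times = [0, 20, 40, 60, 80, 100]        # packets sent every 20 ms
    recv_times = [50, 72, 89, 115, 131, 160]     # arrival times reflecting variable network delay

    jitter = 0.0
    for i in range(1, len(send_times)):
        # Difference between receiver spacing and sender spacing for consecutive packets.
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16         # smoothed running estimate
    print(f"Estimated interarrival jitter: {jitter:.2f} ms")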

Television Standards

Given the importance of a new era in television, it is worthwhile establishing some reference points for both analog and digital television standards.

Analog TV Standards

In 1945, the U.S. Federal Communications Commission (FCC) allocated 13 basic VHF television channels, thus standardizing the frequencies and allocating a broadcast bandwidth of 4.5MHz. In 1948, the NTSC was formed to define a national standard for the broadcast signal itself. The standard for black-and-white television, developed primarily by RCA, was finally set in 1953 and was ratified by the Electronics Industries Association (EIA) as the RS-170 specification. Full-time network color broadcasting was introduced in 1964, with an episode of Bonanza.

The NTSC color TV specification determines the electronic signals that make up a color TV picture and establishes a method for broadcasting those pictures over the air. NTSC defines a 4:3 horizontal-to-vertical size ratio, called the aspect ratio. This ratio was selected in the 1940s and 1950s, when all picture tubes were round, because the almost-square 4:3 ratio made good use of round picture tubes. An NTSC color picture with sound occupies 6MHz of frequency spectrum, enough bandwidth for 2,222 voice-grade telephone lines. To transmit this signal digitally without compression would require about 160Mbps.

The English/German PAL system was developed after NTSC and adopted by the United Kingdom, West Germany, and the Netherlands in 1967. The PAL system is used today in the United Kingdom, Western Europe (with the exception of France), Asia, Australia, New Zealand, the Middle East, Africa, and Latin America. Brazil uses a version of PAL called PAL-M. The PAL aspect ratio is also 4:3, and PAL channels occupy 8MHz of spectrum. Uncompressed PAL, digitally transported, would require approximately 200Mbps.

The SECAM system is used in France and the former French colonies, as well as in parts of the Middle East. Russia and the former Soviet-allied countries used a modified form of SECAM. There are two versions of SECAM: SECAM vertical and SECAM horizontal.

The PAL and SECAM standards provide a sharper picture than NTSC, but they display a bit of flicker because they have a slower frame rate. Programs produced for one system must be converted to be viewed on either of the other systems. The conversion process detracts slightly from the image quality, and converted video often has a jerky, old-time-movie look.

DTV Standards

DTV represents the growing convergence of broadcasting and computing. Thanks to MPEG-2, studio-quality images can be compressed and transformed into a digital stream. DTV is the next generation of television; its development will improve the audio and video quality of broadcast television, and it will replace the film cameras used in movie production. The difference between analog and digital TV will be profound in terms of picture quality as well as special screen effects, such as multiple-windowed pictures and interactive viewer options. The quality of DTV is almost six times better than what current TV offers, delivering up to 1,080 lines of resolution and CD-quality sound. But the real promise of DTV lies in its huge capacity: the capability to deliver, during a single program, information equivalent to that contained on dozens of CDs. The capacity is so great that wholly new industries will be created to use this digital potential for entirely new revenue streams. Recognizing that the Web and other Internet services may grow to rival television, it is highly likely that new generations of television system infrastructure will include this medium as part of the total system, so this is another area where convergence will occur.

Several standards worldwide cover DTV. The Advanced Television Systems Committee's (ATSC's) DTV standards include digital high-definition television (HDTV), standard-definition television (SDTV), data broadcasting, multichannel surround-sound audio, and satellite direct-to-home broadcasting. On December 24, 1996, the U.S. FCC adopted the major elements of the ATSC DTV standard (Standard A/53). The ATSC DTV standard has since been adopted by the governments of Canada, South Korea, Taiwan, and Argentina.

As shown in Table 10.1, the ATSC high-definition standard includes several basic DTV formats, which are defined by the number of pixels per line, the number of lines per video frame, the frame repetition rate, the aspect ratio, and the frame structure (that is, interlaced or progressive). The ATSC standard recommends that the receiver accept all these formats and display them, seamlessly and without loss of video, in the receiver's native format.

One of the biggest issues in television standards involves how DTV images are drawn to the screen. There are two perspectives: that of the broadcast TV world and that of the computer environment. Broadcasters would rather initiate DTV with interlaced scanning, which is used by today's TV sets. Computer companies want progressively scanned DTV signals, similar to those used by computer monitors. The source of the conflict is different historical bandwidth limits. Originally, the NTSC decided that the best way to fit a 525-line video signal into a 6MHz broadcast channel was to break each video frame into two fields, each holding half of the picture. Interlacing is a technique in which the camera takes two snapshots of a scene within a frame time: During the first scan it creates one field of video containing the even-numbered lines, and during the second it creates another containing the odd-numbered lines. The fields are transmitted sequentially, and the receiver reassembles them. This technique reduces flicker and therefore yields higher brightness on the television receiver for a given frame rate (and bandwidth). Interlacing is rough on small text, but moving images look fine.
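
As a simple illustration of the field-splitting idea (a toy example, not broadcast-accurate signal processing), the sketch below divides a frame, represented as a list of scan lines, into the two fields an interlaced system would transmit in sequence and then reassembles them at the receiver.

    # Toy illustration of interlaced scanning: split a frame into two fields, then reassemble.
    frame = [f"line {n}" for n in range(8)]     # a tiny 8-line "frame"

    field_even = frame[0::2]                    # even-numbered lines, captured on the first scan
    field_odd = frame[1::2]                     # odd-numbered lines, captured on the second scan

    # The receiver interleaves the two fields back into a full frame.
    reassembled = [None] * len(frame)
    reassembled[0::2] = field_even
    reassembled[1::2] = field_odd
    assert reassembled == frame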

Table 10.1. ATSC DTV Standard

    Format   Vertical Value   Horizontal Value   Aspect Ratio   Frame Rate    Scanning Sequence
    HDTV     1080             1920               16:9           30, 24        Progressive
    HDTV     1080             1920               16:9           60            Interlaced
    HDTV     720              1280               16:9           60, 30, 24    Progressive
    DTV      480              704                4:3            60, 30, 24    Progressive
    DTV      480              640                4:3            60, 30, 24    Progressive
    SDTV     480              704                4:3            60            Interlaced
    SDTV     480              640                4:3            60            Interlaced

Countries in Europe, as well as Australia, use the Digital Video Broadcasting (DVB) standard. Formed in 1993, the DVB Consortium is responsible for recommending technical standards for the delivery of DTV. The DVB Consortium's 300-plus members include broadcasters, manufacturers, regulatory bodies, software developers, network operators, and others from more than 35 countries. DVB standards are published by the European Telecommunications Standards Institute (ETSI), and there is considerable day-to-day cooperation between DVB and ETSI. ETSI, the European Committee for Electrotechnical Standardization (CENELEC), and the European Broadcasting Union (EBU) have formed a joint technical committee to handle the DVB family of standards.

DVB standards are very similar to ATSC standards, including MPEG-2 video compression and packetized transport, but they provide for different audio compression and transmission schemes. The DVB standard has guidelines for a 1,920-pixel-by-1,080-line HDTV format.

At resolutions greater than 1,000 horizontal pixels, more than double that of today's TV sets, DTV receivers are likely to require massive amounts of dynamic random access memory (DRAM) to buffer the frames as they are processed. DTV can require a system to move upwards of 9.5Mbps of data.
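
To get a feel for the buffering requirement (a back-of-the-envelope estimate assuming 24 bits per pixel, not a figure from the ATSC specification), consider how much memory a single decoded high-definition frame occupies.

    # Rough frame-buffer memory estimate for a decoded HDTV frame (assumes 24 bits/pixel).
    width, height, bytes_per_pixel = 1920, 1080, 3
    frame_bytes = width * height * bytes_per_pixel
    print(f"One frame: {frame_bytes / 1_000_000:.1f} MB")                 # ~6.2 MB per frame
    print(f"Four buffered frames: {4 * frame_bytes / 1_000_000:.1f} MB")  # ~24.9 MB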

At this point, it is not certain which DTV standard will win out, and we may see more than one retain a place. Whichever DTV format is favored, television will soon be fully digital.

 


