2. NetMedia Architecture

The NetMedia system has three main components: a client for presentation scheduling, a server for resource scheduling, and database (file) systems for data management and storage. Each client supports a GUI (Graphical User Interface) and is responsible for synchronizing images, audio and video packets and delivering them to an output device such as a PC or a workstation. Each server is superimposed upon a database (file) system and supports multi-user media data caching and scheduling. It maintains timely retrieval of media data from the media database (file) and transfers the data to the client sites through a network. Finally, each database (file) system manages the insertion, deletion and update of the media data stored in the local database (files). As certain media streams may be represented and stored in different formats, the underlying database systems can be heterogeneous. The aim of the proposed middleware is to give the user as much flexibility and individual control as possible. Each media type must be supported individually to allow independent and interactive functionality as well as per-media QoS requirements.

Both the server and the client are divided into a front end and a back end, which work independently as part of our modular system design and guarantee optimal pipelining of the data.

2.1 Server Design

The design of the server permits the individual access of the streams with the support of sharing resources to enhance scalability. The aim of the server functionality can be summarized in the following points:

  • resource sharing (use of common server disk reading for multiple clients)

  • scalability (support multiple front end modules)

  • individual QoS and congestion control (managing individual streams)

  • interactivity

The server back end includes the disk service module, which fills the server buffer according to client requests. The server buffer contains a subbuffer for each stream. The back end module is shared among all streams and all front end modules.

The server front end includes a communication module and a packetization module, which read data out of the server buffer and deliver it to the network according to the QoS requirements of each stream. The front end also handles admission control of the clients.

Figure 33.1 shows the implementation design of the server component for processing audio and video streams. The disk service module is realized by the DiskReadThread. The packetization module is realized by the SendThread, which reads media units from the server buffer and sends packets to the network. The communication module is realized by the ServerTimeThread, the AdmissionThread and the AdmittedClientSet. The ServerTimeThread reports the server's current time and estimates the network delay and the time difference between server and client. The AdmittedClientSet keeps track of all admitted clients. A Server_Probe_Thread receives feedback and control messages from the probe thread at the client site and initiates probing of the network.

Figure 33.1: Server design for audio and video streams.
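To make the division of labour concrete, the following sketch mimics the back end / front end split in Java. The thread names DiskReadThread and SendThread are taken from Figure 33.1, but the buffer layout (one bounded queue per stream), all method signatures, packet sizes and pacing values are illustrative assumptions rather than the actual NetMedia code.

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;

public class ServerSketch {

    // Server buffer: one bounded subbuffer (a queue of media units) per stream (assumed layout).
    static final Map<String, BlockingQueue<byte[]>> serverBuffer = new ConcurrentHashMap<>();

    static BlockingQueue<byte[]> subbuffer(String streamId) {
        return serverBuffer.computeIfAbsent(streamId, id -> new ArrayBlockingQueue<>(256));
    }

    // Back end: shared disk service module that fills the subbuffer of a requested stream.
    static class DiskReadThread extends Thread {
        private final String streamId;
        DiskReadThread(String streamId) { this.streamId = streamId; }
        @Override public void run() {
            try {
                for (int unit = 0; unit < 250; unit++) {
                    byte[] mediaUnit = new byte[1316];   // stand-in for one unit read from disk
                    subbuffer(streamId).put(mediaUnit);  // blocks when the subbuffer is full
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    // Front end: packetization module that reads media units and sends packets to the network.
    static class SendThread extends Thread {
        private final String streamId;
        private final long periodMs;                     // pacing derived from the stream's QoS
        SendThread(String streamId, long periodMs) { this.streamId = streamId; this.periodMs = periodMs; }
        @Override public void run() {
            try {
                for (int unit = 0; unit < 250; unit++) {
                    byte[] packet = subbuffer(streamId).take();   // blocks until data is available
                    // A real front end would hand the packet to the communication module here.
                    System.out.println("send " + packet.length + " bytes of " + streamId);
                    Thread.sleep(periodMs);              // crude pacing placeholder
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    public static void main(String[] args) {
        new DiskReadThread("video-1").start();
        new SendThread("video-1", 40).start();           // roughly 25 media units per second
    }
}

The bounded subbuffers provide natural back-pressure: the disk reader blocks when a stream's subbuffer is full, while the sender blocks when it is empty, so the shared back end can serve several front end modules without overrunning any single stream.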

2.2 Client Design

The main functionality of a client is to display multiple media streams to the user in the specified format. The client is responsible for synchronizing the audio and video packets and delivering them to the output devices. In addition, the client sends feedback to the server, which is used for admission control and scheduling at the server. Figure 33.2 shows the design of the client component.

Figure 33.2: Multiple streams synchronized at client.

The client back end receives the stream packets from the network and fills the client cache. It is divided into two components (a sketch follows the list):

  • Communication module - The communication module provides the interface to the network layer. It primarily consists of two sub-systems to handle UDP and TCP protocols. This enables the client to use faster protocols like UDP for media streams that can tolerate some data loss and reliable protocols like TCP for feedback messages.

  • Buffer Manager - The buffer manager is responsible for maintaining the client cache in a consistent state.
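A minimal sketch of the client back end, under assumptions made for illustration: media packets arrive over UDP and are placed into a per-stream cache by the buffer manager, while feedback messages go to the server over TCP. The port numbers, host name, message format and drop-oldest policy are invented for the example.

import java.io.OutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ClientBackEnd {

    // Client cache for one stream; the buffer manager keeps it in a consistent state.
    static final BlockingQueue<byte[]> videoCache = new ArrayBlockingQueue<>(512);

    public static void main(String[] args) throws Exception {
        // Communication module, UDP side: loss-tolerant media packets fill the cache.
        Thread udpReceiver = new Thread(() -> {
            try (DatagramSocket socket = new DatagramSocket(9000)) {
                byte[] buf = new byte[1500];
                while (true) {
                    DatagramPacket p = new DatagramPacket(buf, buf.length);
                    socket.receive(p);
                    byte[] unit = new byte[p.getLength()];
                    System.arraycopy(p.getData(), 0, unit, 0, p.getLength());
                    // Buffer manager policy (assumed): drop the oldest unit when the cache is full.
                    if (!videoCache.offer(unit)) {
                        videoCache.poll();
                        videoCache.offer(unit);
                    }
                }
            } catch (Exception e) { e.printStackTrace(); }
        });
        udpReceiver.setDaemon(true);
        udpReceiver.start();

        // Communication module, TCP side: reliable feedback messages back to the server.
        try (Socket feedback = new Socket("server.example.org", 9001)) {   // hypothetical host/port
            OutputStream out = feedback.getOutputStream();
            out.write("BUFFER_LEVEL 42\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}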

The client front end reads data out of the client cache and ensures the synchronized presentation of the streams. It is divided into two components (sketched after the list):

  • Synchronization Manager - The synchronization manager schedules the delivery of multimedia data to the output devices. This module controls the delivery of media streams based on QoS parameters defined by the user and the state of the presentation.

  • Presentation Manager - The presentation manager interacts with the GUI and translates user interactions such as START and STOP into meaningful commands that the other sub-systems can understand.
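The sketch below illustrates how the two front end components might cooperate: the presentation manager turns GUI actions such as START and STOP into a shared playback state, and the synchronization manager releases each cached unit no earlier than its presentation timestamp. The MediaUnit type, the timestamps and the underflow handling are assumptions made for the example.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class ClientFrontEnd {

    // Assumed shape of a cached media unit: payload plus its presentation timestamp.
    record MediaUnit(long presentationTimeMs, byte[] payload) {}

    static final BlockingQueue<MediaUnit> clientCache = new LinkedBlockingQueue<>();
    static final AtomicBoolean playing = new AtomicBoolean(false);

    // Presentation manager: translate GUI actions into commands the other modules understand.
    static void onUserAction(String action) {
        switch (action) {
            case "START" -> playing.set(true);
            case "STOP"  -> playing.set(false);
            default      -> System.out.println("unhandled action: " + action);
        }
    }

    // Synchronization manager: deliver each unit no earlier than its presentation time.
    static void playbackLoop() throws InterruptedException {
        long start = System.currentTimeMillis();
        while (playing.get()) {
            MediaUnit unit = clientCache.poll(100, TimeUnit.MILLISECONDS);
            if (unit == null) break;            // demo only: a real client would wait for more data
            long wait = start + unit.presentationTimeMs() - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait);   // early: hold the unit until it is due
            // Late units could be dropped here to honour the stream's QoS bounds.
            System.out.println("present unit due at " + unit.presentationTimeMs() + " ms");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) clientCache.put(new MediaUnit(i * 40L, new byte[0]));
        onUserAction("START");
        playbackLoop();
    }
}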

Note that a stream consisting of transparencies (or images) can be retrieved using conventional transmission methods such as RPC (remote procedure calls) and be synchronized with the audio and video streams at the interface level. The advantage of RPC is that well-known file functions, such as opening a file and reading from a given position, can be implemented by standard function calls (which are then transparently executed on the server machine). An RPC request may ask for the file length of a stream, the contents of a directory, or the start of a slide session delivery. The RPC server retrieves all requested data and sends it back to the client side. Since the scheduling of such data is not as time critical as that of audio or video data, some delay can be tolerated. In our implementation, the slides are synchronized against the audio information first; if no audio is available, the slides are scheduled against the video; if neither is present, the slide stream behaves like a regular PostScript document.
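As an illustration of such an RPC interface for slide data, the sketch below expresses the file-like operations mentioned above (directory listing, file length, positioned reads, starting a slide session) as a Java RMI remote interface. The method names and signatures are guesses for illustration, not the interface used in NetMedia.

import java.rmi.Remote;
import java.rmi.RemoteException;

// Illustrative RPC interface for slide (image) streams, expressed as a Java RMI remote interface.
public interface SlideService extends Remote {

    // Directory content of a slide session, mirroring an ordinary directory listing.
    String[] listSlides(String sessionId) throws RemoteException;

    // File length of one slide, mirroring a conventional file-size call.
    long slideLength(String sessionId, String slideName) throws RemoteException;

    // Read 'length' bytes starting at 'offset', mirroring open/seek/read semantics.
    byte[] readSlide(String sessionId, String slideName, long offset, int length)
            throws RemoteException;

    // Ask the server to start delivering a slide session to the calling client.
    void startSlideSession(String sessionId) throws RemoteException;
}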

To make access to media streams transparent to the higher levels of the application, we provide an interface, termed MultiMedia RealTime Interactive Stream (MRIS), for requesting the delivery of media streams such as audio, video or any other media form, in synchronized form, from the server. MRIS separates the application from the software drivers needed to access multimedia data at remote sites and to control specific audio and video output devices. The MRIS interface has the advantage that transmission support is independent of the application: any application that utilizes synchronized multimedia data can rely on this module. We use the MRIS class to implement our virtual-classroom application. The synchronization with slide data can be done at the MRIS interface level.
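The text names the MRIS interface but does not give its signature, so the following interface is only a guess at the kind of operations an application such as the virtual classroom would need: opening a set of streams on a server, basic playback control, and per-stream QoS hints.

// Hypothetical shape of the MRIS interface; the real signature is not given in the text.
public interface MultiMediaRealTimeInteractiveStream {

    // Request synchronized delivery of the named streams (audio, video, slides, ...) from a server.
    void open(String serverHost, String[] streamIds);

    // Playback control exposed to the application and its GUI.
    void start();
    void stop();
    void seek(long positionMs);

    // Per-stream QoS hint, e.g. the tolerable loss rate and delay for one media type.
    void setQualityOfService(String streamId, double maxLossRate, long maxDelayMs);

    void close();
}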

A crucial design element of a multimedia system is control over the timing requirements of the participating streams so that the streams can be presented in a satisfactory fashion. In NetMedia, a robust timing control module, named SleepClock, is designed to handle all timing requirements in the system. The novelty of SleepClock is that it can flexibly control and adjust the invocation of periodic events. We use SleepClock to enforce:

  • end-to-end synchronization between server and client,

  • smoothed transmission of streams,

  • periodic checks in modules and

  • timing requirements of media streams.

The details of the SleepClock design can be found in [50].
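The following is only a generic sketch of the idea behind such a timing module, assuming a periodic timer whose period can be adjusted while it runs (for example, to track the server clock); it is not the SleepClock algorithm from [50].

import java.util.concurrent.atomic.AtomicLong;

public class SleepClockSketch {

    private final AtomicLong periodMs;
    private volatile boolean running = true;

    public SleepClockSketch(long periodMs) { this.periodMs = new AtomicLong(periodMs); }

    // Adjust the invocation period on the fly, e.g. to track the server clock.
    public void adjustPeriod(long newPeriodMs) { periodMs.set(newPeriodMs); }

    public void stop() { running = false; }

    // Invoke 'task' once per period, compensating for the time the task itself takes.
    public void run(Runnable task) throws InterruptedException {
        long next = System.currentTimeMillis();
        while (running) {
            task.run();
            next += periodMs.get();
            long sleep = next - System.currentTimeMillis();
            if (sleep > 0) Thread.sleep(sleep);       // early: sleep until the next tick is due
            else next = System.currentTimeMillis();   // late: resynchronize instead of bursting
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SleepClockSketch clock = new SleepClockSketch(40);   // e.g. one video frame every 40 ms
        new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            clock.stop();                                    // stop the demo after half a second
        }).start();
        clock.run(() -> System.out.println("tick at " + System.currentTimeMillis()));
    }
}

Compensating for the time the task itself takes keeps the long-run invocation rate stable, which is the kind of property the smoothed transmission and periodic checks listed above depend on.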



