Yima is a second-generation scalable real-time streaming architecture that incorporates results from first-generation research prototypes and is compatible with industry standards. By utilizing a pseudo-random block assignment technique, we can efficiently reorganize data blocks when disks are added or removed while the system remains in normal operation. Yima provides a rate-control mechanism that a control policy such as Super-streaming can use to speed up and slow down streams; we showed the usefulness of this mechanism in supporting variable-bitrate streams. Finally, the fully distributed design of Yima yields a linear scale-up in throughput. We conducted several experiments to verify and evaluate these design choices in realistic setups.
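The key property of pseudo-random placement is that, when the disk array grows, only a small fraction of blocks needs to migrate. The following is a minimal sketch of that idea using a hash-based placement in the style of consistent hashing; the function names and the hashing scheme are our own illustration, not Yima's actual SCADDAR algorithm, which differs in detail.

```python
import hashlib


def _block_hash(block_id: int) -> int:
    # Stable 64-bit hash of the block id (a stand-in for the
    # pseudo-random generator used by the server).
    digest = hashlib.sha256(f"block-{block_id}".encode()).digest()
    return int.from_bytes(digest[:8], "big")


def disk_for_block(block_id: int, num_disks: int) -> int:
    """Initial pseudo-random placement of a block across num_disks disks."""
    return _block_hash(block_id) % num_disks


def remap_after_scaling(block_id: int, old_disks: int, new_disks: int) -> int:
    """Recompute a block's disk after disks are added (new_disks > old_disks).

    Only a (new_disks - old_disks) / new_disks fraction of blocks migrates,
    and all migrating blocks land on the newly added disks; every other
    block keeps its original placement, so the reorganization touches
    few blocks and can proceed while the system stays online.
    """
    candidate = _block_hash(block_id) % new_disks
    if candidate >= old_disks:
        return candidate  # migrates to one of the new disks
    return disk_for_block(block_id, old_disks)  # stays where it was
```

With 4 disks growing to 5, roughly one block in five moves, and the expected load remains near one fifth of the blocks per disk.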
Our experiments demonstrate the graceful scale-up of Yima with SCADDAR as disks are added to the storage subsystem: average load across all disks decreases uniformly as disks are added. The experiments also showed the effectiveness of the MTFC smoothing technique in providing a hiccup-free display of variable-bitrate streams; the technique is flexible and lightweight enough that increasing the number of thresholds yields smoother traffic. Finally, we demonstrated that a single-node Yima setup can support up to 12 streams, each with a 5.3 Mbps bandwidth requirement, before the 100 Mbps network card becomes the bottleneck. We installed QuickTime™ on the same hardware and configured it to support the same media format; it could support only 9 streams. We attribute Yima's superiority to our optimized, lightweight scheduler and RTP/RTSP servers.
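The multi-threshold idea behind MTFC can be sketched as follows: the client compares its buffer fill level against a list of watermarks and nudges the requested server delivery rate once per watermark crossed, so more thresholds mean finer rate steps and smoother traffic. This is our own illustrative simplification, not Yima's MTFC interface; the parameter names and default values are assumptions.

```python
def delivery_rate(buffer_level: float, base_rate: float,
                  thresholds=None, step: float = 0.1) -> float:
    """Pick a server delivery rate from the client's buffer fill level.

    buffer_level: buffer fill fraction in [0, 1].
    base_rate:    nominal stream bitrate (e.g. in Mbps).
    thresholds:   buffer watermarks; each one crossed nudges the rate
                  by `step` around base_rate. More thresholds give
                  finer, smoother rate adjustments.
    (Illustrative sketch; not the actual MTFC algorithm or API.)
    """
    if thresholds is None:
        thresholds = [0.25, 0.5, 0.75]
    # Count how many watermarks the buffer has climbed past.
    crossed = sum(buffer_level >= t for t in thresholds)
    mid = len(thresholds) / 2
    # Below the middle watermark: speed the stream up to refill the
    # buffer; above it: slow the stream down to avoid overflow.
    return base_rate * (1.0 + step * (mid - crossed))
```

A nearly empty buffer (level 0.1) yields a rate above the nominal bitrate, while a nearly full buffer (level 0.9) yields a rate below it, with the transitions quantized by the watermark list.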
To confirm that the network card was the bottleneck in the single-node Yima server, we upgraded to a 1 Gbps card and achieved up to 35 streams on a single node before the CPU became the bottleneck. We observed a linear scale-up in throughput with 2-node and 4-node configurations of Yima: a total of 48 streams was supported on the 4-node configuration, with each server carrying a 100 Mbps network card.
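As a back-of-the-envelope check (our own arithmetic, not an additional measurement), the reported stream counts are consistent with the 5.3 Mbps per-stream bitrate and the interface capacities:

```python
STREAM_MBPS = 5.3  # per-stream bandwidth used in the experiments


def aggregate_mbps(streams: int) -> float:
    """Payload bandwidth pushed by `streams` concurrent streams."""
    return streams * STREAM_MBPS


single_node = aggregate_mbps(12)   # ~63.6 Mbps payload on a 100 Mbps card;
                                   # RTP/UDP/IP framing and the card's
                                   # practical ceiling consume the headroom
gigabit_node = aggregate_mbps(35)  # ~185.5 Mbps, well under 1 Gbps, so the
                                   # CPU rather than the card is the limit
four_nodes = aggregate_mbps(48)    # 4 x 12 streams: linear scale-up
```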
We intend to extend Yima in three ways. First, we co-located a 4-node Yima server at one of our industry partners' data centers during 2001. This server is currently located across the continental United States from our campus and is connected to it via Internet2. We plan to stream high-resolution audio and video content to our campus from distributed sources as part of a demonstration event. This environment gives us a good opportunity not only to perform intensive tests on Yima but also to collect detailed measurements on hiccup and packet-loss rates as well as synchronization differences. Second, we plan to extend our experiments to serve distributed clients from more than one Yima server. We already have four different hardware setups hosting Yima; by co-locating two Yima server clusters at off-campus locations and the other two in different buildings on our campus, we have an initial setup for our distributed experiments. We also have preliminary approaches for managing distributed continuous media servers that we would like to incorporate, experiment with, and extend. Finally, we have performed some studies on supporting other media types, in particular the haptic data type. Our next step is to store and stream haptic data.