In a multicast environment, videos are not broadcast repeatedly but multicast on demand. In this section, we first discuss the Batching approach, and then present a more efficient technique called Patching.
In the Batching approach, users requesting the same video within a short period of time are served together using the multicast facility. Since there could be several such batches of pending requests, a scheduler selects one to receive service according to some queuing policy. Some scheduling techniques for the Batching approach are as follows:
First-Come-First-Served: As soon as some server bandwidth becomes free, the batch containing the oldest request, i.e., the one that has waited the longest, is served next. The advantage of this policy is its fairness: each client is treated equally regardless of the popularity of the requested video. This technique, however, results in a lower system throughput because it may serve a batch with few requests while another batch with many requests is pending.
Maximum Queue Length First: This scheme maintains a separate waiting queue for each video. When server bandwidth becomes available, this policy selects the video with the largest number of pending requests (i.e., the longest queue) to serve first. This strategy maximizes server throughput. However, it is unfair to users of less popular videos.
Maximum Factored Queue Length First: This scheme also maintains a waiting queue for each video. When server resources become available, the video vi selected to receive service is the one with the longest queue weighted by a factor 1/√fi, where fi denotes the access frequency or the popularity of vi. This factor prevents the system from always favoring more popular videos. This scheme presents a reasonably fair policy without compromising system throughput.
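The three policies above differ only in how the scheduler scores the per-video queues. A minimal sketch (the function and parameter names are illustrative, not from any particular system) might look like this:

```python
import math

def pick_next_video(queues, frequencies, policy):
    """Select the video to serve next from per-video request queues.

    queues: dict video_id -> list of request arrival times
    frequencies: dict video_id -> estimated access frequency f_i
    policy: 'fcfs', 'mql', or 'mfql'
    """
    pending = {v: q for v, q in queues.items() if q}
    if not pending:
        return None
    if policy == 'fcfs':
        # Serve the batch holding the oldest (longest-waiting) request.
        return min(pending, key=lambda v: min(pending[v]))
    if policy == 'mql':
        # Longest queue first: maximizes throughput, unfair to cold videos.
        return max(pending, key=lambda v: len(pending[v]))
    if policy == 'mfql':
        # Queue length weighted by 1/sqrt(f_i) tempers the popularity bias.
        return max(pending, key=lambda v: len(pending[v]) / math.sqrt(frequencies[v]))
    raise ValueError('unknown policy: %s' % policy)
```

For example, with two requests for a popular video A (the first arriving at time 0.0) and one request for an unpopular video B arriving at 0.5, FCFS and MQL both serve A, while MFQL serves B because B's short queue is boosted by its low frequency.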
The benefit of periodic broadcast is limited to popular videos. In this sense, Batching is more general. It is, however, much less efficient than periodic broadcast in serving popular videos. A hybrid of these two techniques, called the Adaptive Hybrid Approach (AHA), was presented in , offering the best performance. This scheme periodically assesses the popularity of each video based on the distribution of recent service requests. Popular videos are repeatedly broadcast using Skyscraper Broadcasting, while less popular ones are served using Batching. The number of channels used for periodic broadcast depends on the current mix of popular videos; the remaining channels are allocated to batching. The AHA design allows the number of broadcast channels allocated to each video to change over time without disrupting ongoing playbacks.
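The channel-partitioning step of such a hybrid can be sketched as follows. This is only an illustration under assumed rules (a fixed popularity threshold and a fixed number of channels per broadcast video); the actual AHA policy is more elaborate:

```python
def allocate_channels(recent_requests, total_channels,
                      channels_per_broadcast, threshold):
    """Split server channels between periodic broadcast and batching.

    recent_requests: dict video_id -> request count in the recent window
    Videos whose share of recent requests reaches `threshold` are assigned
    to periodic broadcast (most popular first), as long as channels remain;
    all leftover channels serve batched requests.
    """
    total = sum(recent_requests.values()) or 1
    broadcast = []
    for video, count in sorted(recent_requests.items(), key=lambda kv: -kv[1]):
        fits = (len(broadcast) + 1) * channels_per_broadcast <= total_channels
        if count / total >= threshold and fits:
            broadcast.append(video)
    batching_channels = total_channels - len(broadcast) * channels_per_broadcast
    return broadcast, batching_channels
```

Re-running this allocation periodically lets the broadcast set track shifts in popularity, mirroring AHA's periodic reassessment.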
All the techniques discussed so far can provide only near-on-demand services. A multicast technique that can deliver videos truly on demand is desirable. At first sight, making multicast more efficient and achieving zero service delay seem to be two conflicting goals. In this subsection we discuss one solution that achieves both, called Patching.
The patching technique  allows a new client to join an ongoing multicast and still receive the entire video stream. This is achieved by receiving the missed portion in a separate patching stream. As this client displays data arriving in the patching stream, it caches the multicast stream in a buffer. When the patching stream terminates, the client switches to playing back the prefetched data in the local buffer while the multicast stream continues to arrive. This strategy is illustrated in Figure 31.10. The diagram on the left shows Client B joining a multicast t time units late; it must therefore receive the first portion of the video through a patching stream. The diagram on the right shows the situation t time units later: Client B has just finished the patching stream and is switching to play back the multicast data previously saved in its local buffer.
Figure 31.10: Patching.
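The client-side behavior in Figure 31.10 can be captured in a small simulation. Assuming unit-sized video blocks delivered one per time unit (names are illustrative), the client plays from the patch stream for the first `skew` steps while buffering the multicast, then drains the buffer:

```python
def patching_playback(video_len, skew):
    """Simulate which block a late client plays at each time unit.

    The client joins `skew` units after the multicast started. A patch
    stream delivers blocks [0, skew); meanwhile the ongoing multicast
    delivers blocks [skew, video_len), which the client buffers until
    the patch finishes. Returns the sequence of blocks played, which
    should be the whole video in order, with no startup delay.
    """
    buffer = []
    played = []
    for step in range(video_len):
        multicast_block = skew + step        # block the multicast sends now
        if multicast_block < video_len:
            buffer.append(multicast_block)   # cache the multicast stream
        if step < skew:
            played.append(step)              # play directly from the patch
        else:
            played.append(buffer.pop(0))     # switch to the buffered data
    return played
```

Note that the buffer never holds more than `skew` blocks: the later a client joins, the more local storage it needs to absorb the time skew.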
In a simple Patching environment, a new client can be allowed to join the current multicast as long as it has enough buffer space to absorb the time skew; otherwise, a new multicast is initiated. This strategy is too greedy in sharing the multicasts, and may result in many long patching streams. A better patching technique allows only clients arriving within a patching period to join the current multicast. The appropriate choice of this period is essential to the performance of Patching. If the patching period is too large, there are many long patching streams; on the other hand, a small patching period results in many inefficient multicasts. In either case, the benefit of multicast diminishes. A technique for determining the optimal patching period was introduced in . A patching period is optimal if it results in the minimum server bandwidth requirement. In , clients arriving within a patching period are said to form a multicast group. This scheme computes D, the mean amount of data transmitted for each multicast group, and τ, the average time duration of a multicast group. The server bandwidth requirement is then given by D/τ, which is a function of the patching period. The optimization can then be done by finding the patching period that minimizes this function. It was shown in  that under different request inter-arrival times, the optimal patching period is between 5 and 15 minutes for a 90-minute video.
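This optimization can be illustrated numerically under a common simplified model, which is an assumption here rather than the exact derivation of the cited analysis: requests arrive as a Poisson process with rate λ, so a group transmits one full stream of length L plus an expected λw patches averaging w/2 each (hence D = L + λw²/2), and a group spans the window w plus the mean wait 1/λ for the request that starts the next group (hence τ = w + 1/λ):

```python
def mean_group_data(L, lam, w):
    # One full multicast of length L, plus an expected lam*w patch
    # clients whose patch lengths are uniform on [0, w), i.e. w/2 each.
    return L + lam * w * (w / 2)

def mean_group_duration(lam, w):
    # Patching window plus the mean wait 1/lam for the request that
    # starts the next multicast group.
    return w + 1 / lam

def optimal_patching_period(L, lam, step=0.01):
    """Numerically minimize the bandwidth requirement D/tau over w."""
    best_w, best_bw = 0.0, float('inf')
    w = step
    while w <= L:
        bw = mean_group_data(L, lam, w) / mean_group_duration(lam, w)
        if bw < best_bw:
            best_w, best_bw = w, bw
        w += step
    return best_w, best_bw
```

Under this model, a 90-minute video with one request per minute on average (L = 90, λ = 1) yields an optimal patching period of about 12.5 minutes, consistent with the 5-to-15-minute range reported above.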