We have discussed several different approaches to multicast distribution, and each has its strengths and weaknesses, leading to a somewhat fragmented installed base. It is clearly desirable for different routing protocols to be able to interoperate with one another until there is a clear winner. Interoperability approaches include the following:
MOSPF is designed to run on top of OSPFv2, so multicast routing can be easily introduced into an OSPFv2 routing domain. Interoperability between MOSPF and dense-mode protocols such as DVMRP is specified in [18, 24].
PIM designers are addressing both interoperability between PIM-DM and PIM-SM, as well as between PIM and other multicast routing protocols.
Interoperability between a single CBTv2 stub domain and a DVMRP backbone is outlined in .
An impressive early example of such interoperability is the Multicast Backbone (MBone), a network in which DVMRP is used to connect multicast-enabled islands via tunneling over the largely unicast-based Internet. An overview of this network is given in section 4.7.2.
Another area we will briefly touch on here is the delivery of IP multicasts between domains (i.e., AS-AS multicast delivery). While an interim solution is available (through a combination of new and existing technologies), a long-term solution requires a more radical rethinking of the problem.
There is a fundamental incompatibility between sparse- and dense-mode multicast protocols in the way they approach the construction of distribution trees. Dense-mode protocols are data driven, while sparse-mode protocols rely on explicit join requests. If a dense-mode group is to interoperate with a sparse-mode group (e.g., to form a group that is sparsely distributed over a wide area network but that is densely distributed within a single subnet), there must be a mechanism for allowing the dense group to reach out to the sparse group to request to join. The solution proposed by PIM designers is to have Multicast Border Routers (MBRs) send explicit joins to the sparse group. Note that the same approach would enable PIM-SM to interoperate with other dense-mode protocols, such as DVMRP. For further details on interoperability between different multicast routing protocols via MBRs refer to .
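The MBR idea can be illustrated with a short sketch. The class and method names below are invented for illustration and do not model any real PIM implementation: the border router observes implicit, data-driven membership on the dense side and translates it into a single explicit join toward the sparse domain.

```python
# Hypothetical sketch of a Multicast Border Router (MBR) bridging a
# dense-mode domain into a sparse-mode (PIM-SM) domain. All names are
# illustrative, not taken from any real implementation.

class SparseDomain:
    """Models the PIM-SM side: groups are joined explicitly via the RP."""
    def __init__(self):
        self.joined_groups = set()

    def receive_join(self, group):
        # An explicit (*, G) join grafts the sender onto the shared tree.
        self.joined_groups.add(group)

class MulticastBorderRouter:
    """Sits between a dense-mode domain and a sparse-mode domain."""
    def __init__(self, sparse_domain):
        self.sparse = sparse_domain
        self.dense_members = set()   # groups with dense-side receivers

    def dense_side_membership(self, group):
        # Dense-mode protocols are data driven: traffic is flooded and
        # pruned, so membership is implicit. The MBR translates that
        # implicit interest into one explicit join toward the RP.
        if group not in self.dense_members:
            self.dense_members.add(group)
            self.sparse.receive_join(group)

sparse = SparseDomain()
mbr = MulticastBorderRouter(sparse)
mbr.dense_side_membership("239.1.2.3")
print("239.1.2.3" in sparse.joined_groups)   # True
```

The same translation works in the other direction for protocols such as DVMRP, which is why the MBR approach generalizes beyond PIM-DM.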
Tunneling is a transition strategy for IP multicast routing. In this context we refer to the encapsulation of multicast packets within IP unicast datagrams, which may then be routed through parts of an internetwork via conventional unicast routing protocols, such as RIP, OSPF, and EIGRP. The encapsulation is added on entry to a tunnel and stripped off on exit from a tunnel. Perhaps the best-known demonstration of multicast tunneling is employed to create an Internet overlay network called the MBone.
Multicast packet forwarding is far from uniformly supported on the Internet at present. To gain experience with multicasting, Internet researchers decided to create a virtual overlay network on top of the physical infrastructure of the existing Internet. This overlay network is called the Multicast Backbone (MBone). The MBone carried its first worldwide event in March 1992, supporting a real-time multicast audioconference over the Internet from an IETF meeting in San Diego. In the original experiment there were 20 sites involved; by 1994 the IETF meeting in Seattle was multicasting to 567 hosts in 15 countries on two parallel channels (audio and video). The multicast routing function was provided by workstations running a daemon process (mrouted), capable of receiving encapsulated multicast packets and processing them as required. Connectivity between these devices was provided using point-to-point IP-encapsulated tunnels, where DVMRP was employed to create logical links between end points over one or more unicast Internet routers. With this early deployment multiple tunnels sometimes ran over the same physical link.
The MBone has grown substantially since 1992, and has subsequently been used for video- and audioconferencing, video broadcasts from international technical conferences, and NASA space shuttle missions. The MBone is probably one of the few places where DVMRP is currently implemented on a live network (although it is understood that the administrators of the MBone plan to adopt PIM in the future because of its greater efficiency). Figure 4.12 illustrates the MBone status as of May 1994.
Figure 4.12: Major MBone routers and links as of May 11, 1994. (Attributed to S. Casner)
Multicasting can be supported in commercial multicast routers or in hosts running the multicast routing daemon (mrouted), which uses DVMRP as the routing protocol. Networks connected to the MBone must meet minimum bandwidth requirements: at least 128 Kbps for video transmissions and 9–16 Kbps for audio. IETF multicast traffic averages 100 to 300 Kbps, with spikes of up to 500 Kbps. The interested reader should refer to [26, 27] for further details on the MBone and its architecture.
The basic idea in constructing an overlay network is to create virtual links by tunneling multicast packets inside regular IP unicast packets where the transmission path traverses routers that are not multicast enabled (see Figure 4.13). Since few routers in the Internet today support multicasting, the MBone is overlaid on top of the existing Internet protocols, with multicast routers (mrouters) connected by virtual point-to-point links. Unicast encapsulation hides the multicast data and addressing information inside the payload of a new unicast IP header. The unicast destination address of the new IP header is the tunnel end-point mrouter IP address. When the mrouter at the end of the tunnel receives the encapsulated packet, it strips off the IP header and forwards the original multicast packet. In Figure 4.13, we see that the multicasts being forwarded from Router-1 to Router-4 are sent as multicasts to Router-2 and then encapsulated in IP and tunneled (as unicasts) via Router-3 on to Router-4 (where they are decapsulated). We need both unicast and multicast routing tables to support tunneling, since the shortest path for multicasting between R1 and R4 is not necessarily the shortest path for unicasting.
Figure 4.13: MBone tunnel. Shaded nodes are multicast-enabled routers forming an overlay network (shown in bold).
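The encapsulation and decapsulation steps described above can be sketched as follows. Packets are modeled as plain dictionaries, and all field names and addresses are illustrative rather than actual IP header layouts:

```python
# Simplified illustration of tunnel encapsulation: a multicast packet is
# carried as the payload of a unicast IP packet between tunnel end points.
# Packets are modeled as dicts; fields and addresses are illustrative only.

def encapsulate(mcast_pkt, tunnel_src, tunnel_dst):
    """Wrap a multicast packet in a unicast header on tunnel entry."""
    return {
        "src": tunnel_src,        # entry mrouter's unicast address
        "dst": tunnel_dst,        # exit mrouter's unicast address
        "proto": "IP-in-IP",
        "payload": mcast_pkt,     # original packet, addressing intact
    }

def decapsulate(unicast_pkt):
    """Strip the outer unicast header on tunnel exit."""
    assert unicast_pkt["proto"] == "IP-in-IP"
    return unicast_pkt["payload"]

original = {"src": "10.0.1.5", "dst": "224.5.5.5", "data": b"audio"}
tunneled = encapsulate(original, "192.0.2.1", "198.51.100.7")
# Unicast routers along the path see only the outer header...
assert tunneled["dst"] == "198.51.100.7"
# ...and the exit mrouter recovers the multicast packet unchanged.
assert decapsulate(tunneled) == original
```

Note that the intermediate unicast routers forward on the outer destination address alone; the multicast group address is invisible to them until decapsulation.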
MBone topology is engineered via path metrics, which specify the routing cost for each tunnel (used by DVMRP to select the cheapest path). The lower the metric the lower the cost of forwarding packets through a tunnel. If, in Figure 4.13, we set up two tunnels between Router-2 and Router-4, as R2-R3-R4 and R2-R6-R5-R4, with tunnel metrics 8 and 6, respectively, then the resulting MBone topology will be as illustrated in Figure 4.14.
Figure 4.14: Modified MBone tunnel topology. By changing the tunnel metrics between R2 and R4 we can force a different path.
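The metric comparison in this example reduces to a simple selection over the configured tunnels. Representing a tunnel as a (path, metric) pair is an assumption made here purely for illustration:

```python
# Illustrative selection of the cheaper of two configured tunnels, as in
# the Figure 4.14 example. Tunnels are (path, metric) pairs; the metric
# is an administratively assigned cost, not a hop count.

def cheapest_tunnel(tunnels):
    """Return the tunnel with the lowest DVMRP metric."""
    return min(tunnels, key=lambda t: t[1])

tunnels = [
    (["R2", "R3", "R4"], 8),          # shorter tunnel, metric 8
    (["R2", "R6", "R5", "R4"], 6),    # longer tunnel, metric 6
]
path, metric = cheapest_tunnel(tunnels)
print(path)    # ['R2', 'R6', 'R5', 'R4'] -- the lower metric wins
```

The point of the example is that physical hop count and administrative cost are decoupled: the topology is engineered by tuning metrics, not by rewiring links.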
The MBone also uses a threshold to limit the distribution of multicasts. This parameter specifies the minimum TTL for a multicast packet to be forwarded into an established tunnel. The TTL is decremented by one at every multicast router hop (i.e., it is unaffected by the number of unicast routers traversed). In the future it is envisaged that most Internet routers will be multicast enabled, and this will obviate the need for tunneling. The MBone may eventually become obsolete, but this could take some time given the current pace of multicast adoption on the Internet.
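The threshold check can be sketched as follows; the exact ordering of the TTL decrement relative to the threshold comparison is simplified here for illustration:

```python
# Sketch of MBone TTL scoping: an mrouter forwards a multicast packet
# into a tunnel only if the packet's TTL meets the tunnel's configured
# threshold; the TTL is decremented once per multicast-router hop.
# (The decrement/compare ordering is simplified relative to mrouted.)

def forward_into_tunnel(ttl, threshold):
    """Return the TTL to propagate, or None if the packet is scoped out."""
    if ttl < threshold:
        return None            # packet stays within the local scope
    return ttl - 1             # one multicast-router hop consumed

# A packet sent with TTL 16 crosses a threshold-16 tunnel...
assert forward_into_tunnel(16, 16) == 15
# ...but is then blocked by any subsequent threshold-16 tunnel.
assert forward_into_tunnel(15, 16) is None
```

Because only multicast hops decrement the TTL, a sender can scope its traffic (site, region, worldwide) simply by choosing an initial TTL, regardless of how many unicast routers each tunnel traverses.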
The first multiparty video- and audioconferencing tools to be used over the MBone were developed by the Network Research Group at Lawrence Berkeley National Laboratory (LBNL). Today there are many commercial and noncommercial applications; the following list summarizes some popular MBone applications that are currently available.
Session Directory (SD)—SD is used to announce MBone sessions.
Netvideo (NV)—A videoconferencing tool.
Videoconferencing System (VIC)—A videoconferencing tool.
INRIA Videoconferencing System (IVS)—See .
Visual Audio Tool (VAT)—An audioconferencing tool.
Robust Audio Tool (RAT)—An audioconferencing tool.
Whiteboard (WB)—Provides a shared drawing space for use by videoconference participants.
MiMaze—A distributed game that runs over the MBone.
SD is particularly interesting. It can be used by MBone users to reserve and allocate media channels and to view advertised channels. SD advertises session schedules periodically (via a well-known multicast address and port) and also assigns a unique multicast address and port number to each multicast application session (actually, to each message flow within a session).
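As a rough illustration of the allocation role described above, the following sketch hands out a unique (address, port) pair per announced session. The address pool, port policy, and class name are invented for illustration and do not reflect SD's actual allocation scheme:

```python
# Toy model of a session directory's allocation role: each announced
# session gets a unique (multicast address, port) pair. The pool and
# policy below are hypothetical, chosen only to make the idea concrete.

import itertools

class SessionDirectory:
    def __init__(self):
        self._hosts = itertools.count(1)
        self.sessions = {}

    def announce(self, name):
        """Assign an unused multicast address/port and record the session."""
        addr = f"224.2.0.{next(self._hosts)}"   # illustrative address pool
        port = 5000 + 2 * len(self.sessions)    # even ports, RTP-style
        self.sessions[name] = (addr, port)
        return addr, port

sd = SessionDirectory()
print(sd.announce("IETF audio"))   # ('224.2.0.1', 5000)
print(sd.announce("IETF video"))   # ('224.2.0.2', 5002)
```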
The MBone is used widely in the research community to transmit the proceedings of various conferences and to permit desktop conferencing. Most MBone applications run over UDP rather than TCP, since the reliability and flow-control mechanisms of TCP are not practical for real-time broadcasting of multimedia data. The occasional loss of an audio or video packet is preferable to the transmission delays that TCP retransmissions would introduce. Above UDP, most MBone applications use the Real-Time Transport Protocol (RTP), discussed in section 4.8.
There is a perceived need to provide Internet-wide IP multicast, evidenced by the expansion of the MBone and the emergence of new multicast-aware applications. The short-term solution for interdomain multicast routing is functional but relies on an inelegant combination of new and existing technology, as follows:
An extension to the existing exterior unicast routing protocol BGP4, known as BGP4+; see . BGP4 has been extended to support multicast routes, and this protocol is generally referred to as the Multicast Border Gateway Protocol (MBGP).
Use of an existing interior multicast routing protocol to handle interdomain multicast tree construction. Since broadcast-and-prune methods are not desirable in this regard, the protocol selected is PIM-SM. In this mode PIM-SM treats domains as nodes in a network and determines a multicast tree between domains containing group members.
A new protocol, called the Multicast Source Discovery Protocol (MSDP), see . The protocol operates by having representatives in each domain announce to other domains the existence of active sources. MSDP is run in the same router as a domain's RP (or one of the RPs). MSDP's operation is similar to MBGP in that MSDP sessions are configured between domains and TCP is used for reliable session message exchange.
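The announce-and-flood behavior of MSDP can be modeled with a toy simulation. The class below is illustrative only: real MSDP runs over configured TCP peering sessions and exchanges Source-Active messages subject to peer-RPF checks, all of which are omitted here.

```python
# Toy model of MSDP operation: the RP in each domain announces active
# sources to its configured peers, so remote domains learn of sources
# and can join toward them. Names are illustrative, not protocol fields.

class RP:
    def __init__(self, domain):
        self.domain = domain
        self.peers = []             # MSDP sessions (TCP in the real protocol)
        self.known_sources = set()  # (source, group) pairs learned

    def peer_with(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def source_active(self, source, group, origin=None):
        """Register a source (local or learned from a peer) and flood on."""
        if (source, group) in self.known_sources:
            return                  # already known: suppress re-flooding
        self.known_sources.add((source, group))
        for peer in self.peers:
            if peer is not origin:
                peer.source_active(source, group, origin=self)

a, b, c = RP("AS1"), RP("AS2"), RP("AS3")
a.peer_with(b)
b.peer_with(c)
a.source_active("10.1.1.1", "224.5.6.7")   # active source appears in AS1
print(sorted(rp.domain for rp in (a, b, c)
             if ("10.1.1.1", "224.5.6.7") in rp.known_sources))
# ['AS1', 'AS2', 'AS3'] -- all peered domains learn of the source
```

Once a remote RP learns of an active source this way, it can trigger a source-specific join across domains on behalf of its local receivers.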
While this approach is accepted as a reasonable interim solution, it lacks scalability in the long term, and there is still a perceived need to develop a more integrated long-term strategy. There are several approaches being actively researched at this time, broadly divided into two camps, as follows:
Border Gateway Multicast Protocol (BGMP)—BGMP was first proposed back in 1998. The basic idea is to construct bidirectional shared trees (*, G) between domains, with only a single RP (BGMP also needs to decide in which domain to root the shared tree). Since address allocation has become an important issue for commercial IP multicast users, BGMP also includes its own address allocation scheme called the Multicast Address-Set Claim (MASC) protocol (although it is not dependent on MASC).
Root-Addressed Multicast Architecture (RAMA)—The aforementioned approaches to inter-domain multicast do not address related issues such as security, billing, and management. Therefore, several members of the multicast community are attempting to make fundamental changes in the multicast model in an effort to produce a more comprehensive solution. One proposal is the Root-Addressed Multicast Architecture (RAMA).
Further discussion on this topic is beyond the scope of this book, and the interested reader should keep a watchful eye on forthcoming Internet drafts on this topic.