Distance-Vector Multicast Routing Protocol (DVMRP) is a relatively old dense-mode multicast routing protocol, first defined in 1988, although a newer version of the protocol (DVMRPv3) is currently the subject of an IETF Internet Draft. DVMRP propagates multicasts through an internetwork by building per-source-group multicast delivery trees for use within an AS. DVMRP is used for multicast distribution only; routers that are required to distribute both unicasts and multicasts must also run a traditional unicast routing protocol (such as OSPF or RIP).
DVMRP is essentially a flood-and-prune routing protocol (hence, dense-mode). It builds per-source multicast trees based upon routing exchanges and then dynamically creates per-source-group multicast delivery trees by selectively pruning each source's tree. DVMRP performs reverse path forwarding to determine whether multicast traffic should be forwarded to downstream interfaces. To identify which interface leads back to the source, DVMRP implements its own unicast routing protocol, based on a distributed distance-vector routing algorithm. This unicast routing protocol is similar to RIP and uses a hop-based metric (consequently, the path taken by multicast traffic may differ from the path taken by unicast traffic). Using these techniques, source-rooted shortest-path trees can be constructed to reach all group members from each source network of multicast traffic. For the interested reader, the application of distance-vector routing to multicast trees is described in the literature.
Figure 4.6 shows the format of the DVMRP message.
Figure 4.6: DVMRP message format.
Type—Always 0x13 for a DVMRP packet.
Code—Identifies the DVMRP message type, as follows:
1—Probe (for neighbor discovery)
2—Report (for route exchange)
3—Ask neighbors (obsolete)
4—Neighbors (obsolete)
5—Ask neighbors 2 (request neighbor list—diagnostic/troubleshooting)
6—Neighbors 2 (respond with neighbor list—diagnostic/troubleshooting)
7—Prune (for pruning multicast delivery trees)
8—Graft (for grafting multicast delivery trees)
9—Graft ack (for acknowledging graft messages)
Checksum—A 16-bit checksum calculated as the one's complement of the one's complement sum of the whole IGMP message (i.e., the entire IP payload). Prior to computing the checksum, the checksum field is set to 0.
Minor version—Minor version number of the protocol; the DVMRPv3 specification gives 0xFF.
Major version—Major version number of the protocol; the DVMRPv3 specification gives 3.
DVMRP data—Parameters and data dependent upon the type of DVMRP message; see the DVMRPv3 specification for details.
DVMRP messages are encapsulated in IP datagrams and are essentially a modified form of IGMP. DVMRP uses the basic IGMP header format, introduces a new type code, and uses the Unused field as a subtype specifying the DVMRP message type.
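The header format and checksum rule just described can be sketched as follows. This is an illustrative reconstruction based on the DVMRPv3 draft layout (type, code, checksum, reserved, minor version, major version); the constant and function names are our own.

```python
import struct

DVMRP_TYPE = 0x13   # IGMP type value reserved for DVMRP
CODE_PROBE = 1      # DVMRP code for a neighbor probe

def internet_checksum(data: bytes) -> int:
    """One's complement of the one's complement 16-bit sum of the message."""
    if len(data) % 2:
        data += b"\x00"                      # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_dvmrp_message(code: int, body: bytes = b"") -> bytes:
    """Assemble a DVMRP message with the checksum field initially zero,
    then patch in the checksum computed over the whole message."""
    header = struct.pack("!BBHHBB", DVMRP_TYPE, code, 0, 0, 0xFF, 3)
    message = header + body
    checksum = internet_checksum(message)
    return message[:2] + struct.pack("!H", checksum) + message[4:]
```

A useful property of this checksum is that verifying a correctly checksummed message yields zero, so a receiver can run the same routine over the message as received.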
DVMRP constructs a separate distribution tree for each (s, g) flow initially via reverse path forwarding and flooding techniques. Packets are forwarded out of all interfaces on a distribution tree, initially assuming every branch is part of the multicast group. To optimize delivery trees and deal with dynamic group membership changes, DVMRP relies on pruning and grafting techniques. Pruning eliminates tree branches that have no multicast group members and also eliminates redundant nonshortest paths from any receiver to the source (such as multiple interfaces onto a broadcast LAN).
The following terminology is used when describing DVMRP operations:
Upstream interface—the interface on the shortest path tree back to the source of a multicast group. Sometimes called the reverse path forwarding interface.
Downstream interfaces—the set of interfaces nominated for multicast forwarding for a particular multicast group. By definition this will not include the upstream interface.
Received interface—the interface on which a multicast packet is received (which may or may not be the upstream interface).
Nonleaf network—a network with dependent downstream neighbors for a particular source network.
Leaf network—a network with no dependent downstream neighbors for a particular source network.
Neighbor—an adjacent DVMRP router (i.e., a peer router).
Designated forwarder—a router interface elected to inject packets for a particular multicast group.
IGMP local group database—a list of active membership information maintained by all IP multicast routers on each physical, multicast-enabled network interface.
Upstream router—the router responsible for supplying multicasts for a particular (s, g) pair (i.e., the router one hop closer to the source on a particular multicast tree).
A summary of the key DVMRP timing parameters is given in the following chart (for full details refer to the DVMRPv3 specification):
Neighbor timeout interval—35 seconds
Minimum flash update interval—5 seconds
Route report interval—60 seconds
Route expiration time—2 × route report interval
Prune lifetime—variable (< 2 hours)
Prune retransmission time—3 seconds, followed by exponential back-off
Graft retransmission time—5 seconds, followed by exponential back-off
DVMRP routers use distance-vector routing to establish the routing tables from which source-based shortest-path multicast trees are built. The routing process is similar to RIP. As with RIP, the routing table is propagated to all DVMRP routers using routing updates, in order to provide a consistent view of the network. Initially, each DVMRP router knows only about its local interfaces. In order to learn about the complete network topology it must acquire one or more neighbors. DVMRP routers discover neighbors dynamically by sending neighbor probe messages on multicast-enabled network interfaces and virtual tunnel interfaces. Probes are sent frequently (every ten seconds) to the all-DVMRP-routers multicast address (224.0.0.4). Each probe contains the list of neighbors for which probe messages have been received on that interface. On receiving a probe, a router will attempt to establish a full-duplex adjacency with its neighbor.
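The neighbor discovery handshake can be sketched as follows: an adjacency is considered two-way once a router sees its own address echoed in the neighbor list of a probe received from that neighbor. The class and method names are hypothetical simplifications.

```python
ALL_DVMRP_ROUTERS = "224.0.0.4"      # probes are multicast here every 10 s

class Interface:
    """Per-interface DVMRP neighbor state (illustrative sketch)."""

    def __init__(self, my_address: str):
        self.my_address = my_address
        self.neighbors = {}              # neighbor address -> adjacency up?

    def receive_probe(self, sender: str, neighbor_list: list) -> None:
        # A probe lists every neighbor the sender has heard on this network.
        # Seeing our own address in that list proves two-way connectivity.
        self.neighbors[sender] = self.my_address in neighbor_list

    def probe_payload(self) -> list:
        # Our own probes echo back every neighbor heard on this interface.
        return list(self.neighbors)
```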
Each router sends routing updates, comprising the network address and mask of directly connected interfaces, plus all routes received from its neighbors, at regular intervals (default 60 seconds). The metric used in route entries is hop based, and this applies to both standard routing interfaces and tunnel interfaces. Routing metrics comprise the aggregated path cost, as per RIP, but with infinity set to 32. This constrains the network diameter and places an upper bound on convergence time.
As part of the routing exchange process, upstream routers determine whether any downstream routers are dependent on them for forwarding multicasts from particular sources. This is achieved using a technique called poison reverse. When a downstream router selects a particular upstream router as the best next hop to a given source, this is flagged by echoing back the route on the upstream interface with a metric equal to the original metric plus infinity. Hence, legal metric values lie between 1 and 63, as follows:
1 to 31 indicates reachable source networks.
32 (infinity) indicates unreachable source networks.
33 to 63 indicates that the downstream router originating the report depends upon the upstream router to provide multicast datagrams for a particular source network.
When the upstream router sees a metric that lies between 33 and 63, it caches the downstream router address and associated source network in a list associated with that particular interface. This list is used to determine whether to prune back specific IP source multicast trees.
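The poison-reverse metric arithmetic and the three metric ranges above can be expressed compactly. This is a minimal sketch; the function names are our own.

```python
INFINITY = 32   # DVMRP infinity; constrains paths to fewer than 32 hops

def advertise_metric(metric: int, upstream_is_best_next_hop: bool) -> int:
    """Metric to place in a route report sent on an interface. If the
    receiving upstream router is our best next hop back to the source
    (poison reverse), echo the route with metric + infinity."""
    return metric + INFINITY if upstream_is_best_next_hop else metric

def classify_metric(metric: int) -> str:
    """Interpret a metric received in a route report."""
    if 1 <= metric <= 31:
        return "reachable"
    if metric == INFINITY:
        return "unreachable"
    if 33 <= metric <= 63:
        return "dependent downstream neighbor"
    raise ValueError("illegal DVMRP metric")
```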
When an IP multicast datagram arrives at a DVMRP router, the router checks that the upstream interface matches the received interface. If the interfaces do not match, the packet will be discarded; otherwise, a DVMRP router will forward the datagram to a set of downstream interfaces (including tunnel interfaces), depending upon the state of the interface. If the interface is a leaf network then the IGMP local group database must be consulted. If the destination group address is listed in this database, and the router is the designated forwarder for the source, then the interface is included in the list of downstream interfaces. If there are no group members on the interface, then the interface is removed from the list of downstream interfaces.
Initially, all non-leaf networks (i.e., those with dependent neighbors) should be included in the downstream interface list. This enables all downstream routers to see traffic for a particular (s, g) pair, which they may subsequently prune back (depending upon group membership status for a particular interface) and later reinstate through grafting. Note that this process is applied on a multicast group (s, g) basis, not globally; there may be several overlaid trees with some branches in both a live and pruned state for different multicast groups. Delivery trees are, therefore, calculated and updated dynamically to track the membership of individual groups.
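The per-(s, g) forwarding decision described in the last two paragraphs can be sketched as below. The `Route` structure and the shapes of the IGMP database and designated-forwarder table are our own simplifications; real implementations keep considerably more state.

```python
from dataclasses import dataclass

@dataclass
class Route:
    upstream: str                 # RPF interface back toward the source
    interfaces: list              # all multicast-capable interfaces
    dependent_neighbors: dict     # iface -> True if downstream routers depend on us

def accept(received_iface: str, route: Route) -> bool:
    """Reverse path forwarding check: accept only on the upstream interface."""
    return received_iface == route.upstream

def downstream_interfaces(route: Route, group: str,
                          igmp_db: dict, designated_forwarder: dict) -> list:
    """Interfaces a datagram for this (source, group) pair is copied to."""
    out = []
    for iface in route.interfaces:
        if iface == route.upstream:
            continue                              # never forward back upstream
        if route.dependent_neighbors.get(iface):
            out.append(iface)                     # nonleaf: flood until pruned
        elif group in igmp_db.get(iface, set()) and designated_forwarder.get(iface):
            out.append(iface)                     # leaf network with members
    return out
```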
If a router determines that none of its downstream interfaces meets the criteria for forwarding, it initiates a prune operation by notifying the upstream router that it no longer wants traffic destined for a particular (s, g) pair. This notification is a DVMRP prune message sent upstream to the router from which it expects to receive datagrams from a particular source. When the upstream router receives a prune message, it can remove that interface from its downstream interface list. If the upstream router is then able to remove all of its downstream interfaces, it can send a prune message to its own upstream router. This process continues until all unnecessary branches are removed from the delivery tree. In Figure 4.7 the multicast delivery tree for this particular service means that links L2 and L6 are logically disconnected, since they are not on the shortest path. In Figure 4.7(a), H7's group membership on Net3 lapses, and IGMP informs DVMRP on R6 that its local group database is empty. R6 sends a prune message to its upstream neighbor, R3, requesting disconnection of the branch from the tree, and R3 stops forwarding multicasts for source S. Note that R3 cannot prune itself from the tree, even though it has no hosts requiring multicast service, because it has a downstream dependent neighbor, R5. In Figure 4.7(b), H7 later requests multicast service again via IGMP. DVMRP is notified on R6, which sends a graft message to its upstream neighbor, R3. R3 responds with a graft ack, and the branch is reinstated on the multicast delivery tree.
Figure 4.7: Multicast network running DVMRP between routers and IGMP on the router interfaces.
Once branches are pruned back, there is no way for routers on these dead branches to learn about particular (s, g) flows. To overcome this, DVMRP imposes a lifetime on prune state and periodically resumes the flooding process whenever the prune lifetime expires. If the interface still leads to a nonleaf network, it is simply joined back onto the delivery tree. If the multicast datagrams being received are still not required, the prune process is undertaken once more. Note that the lifetime of a prune sent by an upstream router must equal the minimum of the lifetimes still remaining on the prunes received from downstream routers (each prune message carries its own lifetime parameter).
IP multicast promotes the concept of dynamic group membership, so rather than wait for the prune lifetime to expire, hosts on previously pruned branches may proactively initiate multicast delivery at any time. The process starts with a host registering interest in a multicast group via IGMP. Once an interface is known to have interested users, DVMRP routers use graft messages to cancel the prunes that are in place. This is done recursively—all the way back up the multicast delivery tree—with each downstream router sending a graft to its upstream neighbor in turn, until the branch is fully connected. Since there is no way to tell if a graft message sent upstream was lost or the source simply stopped sending traffic, each graft message is positively acknowledged with a DVMRP graft ack message. If an acknowledgment is not received within a graft timeout period, the graft message should be retransmitted, using binary exponential back-off between retransmissions. Duplicate graft ack messages are ignored.
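Two of the rules above lend themselves to a short sketch: the prune a router sends upstream must not outlive any prune it received from downstream, and an unacknowledged graft is retransmitted with binary exponential back-off. The constants follow the timing chart earlier in the section; the function names are hypothetical.

```python
GRAFT_RETRANSMISSION_TIME = 5        # seconds before the first retransmission
DEFAULT_PRUNE_LIFETIME = 7200        # upper bound: prune lifetimes < 2 hours

def upstream_prune_lifetime(remaining_downstream_lifetimes: list) -> int:
    """Lifetime carried in the prune sent upstream: the minimum of our own
    default and the lifetimes still remaining on prunes received from
    downstream routers."""
    return min([DEFAULT_PRUNE_LIFETIME] + remaining_downstream_lifetimes)

def graft_retransmission_schedule(attempts: int) -> list:
    """Delay in seconds before each successive graft retransmission,
    doubling each time (binary exponential back-off)."""
    return [GRAFT_RETRANSMISSION_TIME * (2 ** i) for i in range(attempts)]
```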
If multiple DVMRP routers are attached to the same multiaccess network, multicast traffic may be duplicated unnecessarily (depending upon the upstream topology). DVMRP automatically prunes back duplicate paths by electing a designated multicast forwarder for each source address (applied on an interface basis). DVMRP routers on a common network will peer and exchange routing updates and will, therefore, have topological knowledge of each other's aggregate metric back to a particular source. The router with the lowest metric to a source is elected to the forwarding state; all other routers must defer. Where routers have the same aggregate path cost, the router with the lowest IP address is elected. In this way DVMRP elects designated forwarders for every source network on every downstream interface.
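The election rule is simple enough to state in a few lines: lowest aggregate metric wins, with the lowest IP address breaking ties. A minimal sketch, with data shapes of our own choosing:

```python
def ip_key(addr: str) -> tuple:
    """Compare dotted-quad addresses numerically, not lexicographically."""
    return tuple(int(octet) for octet in addr.split("."))

def elect_designated_forwarder(candidates: list) -> tuple:
    """candidates: (metric_to_source, router_ip) for each DVMRP router
    on the shared network. Returns the winning (metric, ip) pair."""
    return min(candidates, key=lambda c: (c[0], ip_key(c[1])))
```

The numeric comparison matters: as strings, "192.0.2.20" sorts before "192.0.2.5", which would elect the wrong router.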
Since not all IP routers support native multicast routing, DVMRP includes support for tunneling IP multicast datagrams. To achieve this, multicast datagrams are encapsulated in IP unicast packets and then addressed and forwarded to the next multicast router along the destination path. DVMRP treats tunnel interfaces in a manner identical to physical network interfaces, so even if there are several routers that do not support native multicast routing in the intermediate path, these are transparent to the multicast tree. In practice, tunnels generally use either IP-IP or Generic Routing Encapsulation (GRE), although other encapsulation methods can be used. Perhaps the best example of DVMRP tunnel deployment is the MBone (described in section 4.7.2).
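As an illustration of IP-IP tunneling, the multicast datagram simply becomes the payload of a unicast IPv4 packet (protocol number 4) addressed to the far tunnel end point. This sketch leaves the outer header checksum at zero for brevity; a real sender must compute it.

```python
import struct

IPPROTO_IPIP = 4   # IANA protocol number for IP-in-IP encapsulation

def ip_bytes(addr: str) -> bytes:
    return bytes(int(octet) for octet in addr.split("."))

def ipip_encapsulate(inner_packet: bytes, src: str, dst: str) -> bytes:
    """Prepend a minimal 20-byte outer IPv4 header (no options) addressed
    to the unicast tunnel end point."""
    header = struct.pack("!BBHHHBBH4s4s",
                         (4 << 4) | 5,            # version 4, IHL = 5 words
                         0,                       # TOS
                         20 + len(inner_packet),  # total length
                         0, 0,                    # identification, flags/frag
                         64,                      # TTL
                         IPPROTO_IPIP,
                         0,                       # header checksum (omitted)
                         ip_bytes(src), ip_bytes(dst))
    return header + inner_packet
```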
Note that DVMRP implementations predating version 3 forwarded DVMRP protocol messages directly to the unicast tunnel end-point addresses (since they are unicast, they do not actually require encapsulation). Although more direct, this approach increases the complexity of firewall configuration, since there are multiple flow types to deal with. The latest incarnation of the protocol (DVMRPv3) specifies that all DVMRP protocol messages should be sent encapsulated via the tunnel.
The requirement to flood frequently means that DVMRP has limited scalability and is inappropriate for large internetworks with sparsely distributed receivers (i.e., the norm). There is no support for shared trees, and the maximum path length must be less than 32 hops, constraining the scope of the network. These problems are amplified in early implementations of DVMRP (specifically, versions 1 and 2), which did not support pruning. DVMRP converges slowly, suffering from the same problems as RIP. DVMRP also requires a significant amount of state information to be stored in routers (i.e., [s, g] for all pairs). Since route reports may need to refresh several thousand routes each route report interval, routers must attempt to spread the routes reported across the whole route update interval. This reduces the chance of reports becoming synchronized, which could otherwise regularly saturate routers. It is suggested that route updates be spread across multiple route reports at regular intervals (this could also impact convergence, since it effectively extends the routing update timer).
Having said all this, DVMRP has been used quite successfully for some time on the Internet, where it was selected to provide a multicast overlay network called MBone (described in section 4.7.2). Even so, DVMRP was used largely because of its early availability and its tunneling capabilities. In reality only a minority of routers on the Internet support multicasts, so the logical network created by the overlay is actually much smaller in operational terms.