Networking the UML Instances


After seeing the example of two UML instances not interacting (i.e., not corrupting each other's filesystems) when you might expect them to, let's make them interact when we want them to. We will create a small private network with just these two instances on it and see that they can use it to communicate with each other in the same way that physical machines communicate on a physical network.

For a pair of virtual machines, the basic requirement for setting up a network between them is some method of exchanging packets. Since packets are just hunks of data, albeit specially formatted ones, in principle, any interprocess communication (IPC) mechanism will suffice. All that's needed in UML is a network driver that can send and receive packets over that IPC mechanism.
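To make that concrete, here is a minimal sketch (plain Python, not the actual UML driver code) of two endpoints exchanging a "frame" over an ordinary IPC channel. The frame layout below is purely illustrative:

```python
import socket

# A Unix-domain datagram socketpair stands in for the "wire" between two
# virtual network drivers. The frame layout (destination MAC, source MAC,
# EtherType, payload) is illustrative, not a real driver format.
left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

frame = (bytes.fromhex("fefdc0a80002")    # destination MAC
         + bytes.fromhex("fefdc0a80001")  # source MAC
         + bytes.fromhex("0800")          # EtherType: IPv4
         + b"payload")                    # the encapsulated packet

left.send(frame)             # one instance's driver "transmits"
received = right.recv(2048)  # the other's driver "receives" it intact
print(received == frame)     # prints True: datagrams preserve frame boundaries
```

Because datagram sockets preserve message boundaries, each send corresponds to exactly one received frame, which is what a network driver needs.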

This is enough to set up a private network that the UML instances can use to talk to each other, but it will not let them communicate with anything else, such as the host or anything on the Internet. Communicating with the outside world, including the host, requires root privileges at some point. The instance needs to send packets to the host and have them be handled by its network subsystem. This ability requires root privileges because it implies that the instance is a networking peer of the host and could foul up the network through misconfiguration or malice.

Here, we will introduce UML networking by setting up a two-machine private network with no access to the outside world. We will cover networking fully in Chapter 7, including access to the host and the Internet.

As I said earlier, in principle, any IPC mechanism can be used to construct a virtual network. However, mechanisms differ in their convenience, which is strongly related to how well they map onto a network. Fundamentally, Ethernet is a broadcast medium in which a message sent by one host is seen by all the others on the same Ethernet, although, in practice, the broadcasting is often suppressed by intelligent hardware such as switches. Most IPC mechanisms, on the other hand, are point-to-point. They have two ends, with one process at each end, and a message sent by the process at one end is seen only by the process at the other.

This mismatch makes most IPC mechanisms not well suited for setting up a network. Each host would need a connection to each other host, including itself, so the total number of connections in the network would grow quadratically with the number of hosts. Further, each packet would need to be sent individually to each host, rather than having it sent once and received by all the other hosts.

However, one broadcast IPC mechanism is available: multicasting. This little-used networking mechanism allows processes to join a group, called a multicast group. When a message is sent to this group, it is received by all the processes that have joined the group. This nicely matches the semantics needed by a broadcast medium, with one caveat: it matches an Ethernet that's connected by a hub, not a switch. A hub repeats every packet to every host connected to it, while a switch knows which Ethernet MAC addresses are associated with each of its ports and sends each packet only to the host it's intended for. With a multicast virtual network, as with a hub, each host will see all of the packets on the network and will have to discard the ones not addressed to it.
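The mechanism is easy to demonstrate outside UML. The sketch below (plain Python sockets, not the UML driver itself) joins two sockets to a multicast group on the loopback interface, borrowing the default UML group address so it runs on an ordinary Linux host, and shows that a single transmitted datagram reaches both members, just as a hub would deliver it:

```python
import socket

# Group and port mirror the UML mcast defaults (239.192.168.1:1102); the
# loopback interface is used so this sketch runs on any Linux host.
GROUP, PORT, IFACE = "239.192.168.1", 1102, "127.0.0.1"

def join_group(group, port, iface):
    """Return a UDP socket that has joined the multicast group on iface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((group, port))  # binding to the group filters unrelated traffic
    # struct ip_mreq: multicast group address followed by interface address
    mreq = socket.inet_aton(group) + socket.inet_aton(iface)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    s.settimeout(2)
    return s

# Two "hosts" join the same group...
a = join_group(GROUP, PORT, IFACE)
b = join_group(GROUP, PORT, IFACE)

# ...and a third socket transmits a single datagram to the group.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF, socket.inet_aton(IFACE))
tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
tx.sendto(b"one frame", (GROUP, PORT))

# Hub semantics: every member sees the same packet.
msg_a, msg_b = a.recv(64), b.recv(64)
print(msg_a, msg_b)
```

One send, two deliveries: this is exactly the behavior the UML mcast transport relies on, and also why every instance on such a network must filter out packets not addressed to it.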

To start things off, we need Ethernet interfaces in our UML instances. To do this, we need to plug them in:

host% uml_mconsole debian1 config eth0=mcast
OK
host% uml_mconsole debian2 config eth0=mcast
OK


This hot-plugs an Ethernet device into each instance. If you were starting them from the shell here, you would simply add eth0=mcast to their command lines.

Now, if you go back to one of the instances and run ifconfig, you will notice that it has an eth0:

UML1# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          Interrupt:5

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0


You'll see the same thing has happened in the other UML.

Now we need to bring them up, so we'll assign IP addresses to them. We'll use 192.168.0.1 for one instance:

UML1# ifconfig eth0 192.168.0.1 up


and similarly in the other instance, we'll assign 192.168.0.2:

UML2# ifconfig eth0 192.168.0.2 up


Don't worry if you are already using these addresses on your own network. We have set up an isolated network, so there can't be any conflicts between IP addresses: machines that can't exchange packets with each other can't conflict.

Running ifconfig again shows that both interfaces are now up and running:

UML1# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr FE:FD:C0:A8:00:01
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          Interrupt:5


No packets have been transmitted or received, so we need to fix that. Let's ping the second UML from the first:

UML1# ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: icmp_seq=0 ttl=64 time=9.3 ms
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.2 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.2 ms

--- 192.168.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.2/3.2/9.3 ms


This establishes that we have basic network connectivity. To see some more interesting network action, let's request a Web page from the other UML. Since we don't have any ability to run a graphical Web browser inside the UML yet, we'll use the command-line tool wget:

UML1# wget -O - http://192.168.0.2
--15:51:10--  http://192.168.0.2:80/
           => `-'
Connecting to 192.168.0.2:80... connected!
HTTP request sent, awaiting response... 200 OK
Length: 4,094 [text/html]

    0K -> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>


Following that snippet, you'll see the rest of the default Apache home page as shipped by Debian. If you want a more interactive Web experience at this point, you can just run lynx, the text-mode Web browser, with the same URL, and you'll see a pretty good text representation of that page. The external links (those that point to debian.org, apache.org, and the like) will not work because these instances don't have access to the outside network. However, any links internal to the other UML instance, such as the Apache documentation, should work fine.

Now that we have basic networking between the two instances, I am going to complicate the configuration as much as possible, given that we have only two hosts, and add them both to what amounts to a second Ethernet network. I'm going to keep this network separate from the current one, and to do so, I need to specify a different port from the default. We specified no multicast parameters when we set up the first network, so the UML network driver assigned default values. To keep this new network separate from the old one, we will provide a full specification of the multicast group:

host% uml_mconsole debian1 config eth1=mcast,,239.192.168.1,1103,1
OK
host% uml_mconsole debian2 config eth1=mcast,,239.192.168.1,1103,1
OK


We are separating this network from the previous one by using the next port. You can see how things are set up by looking at the kernel message log:

UML# dmesg | grep mcast
Configured mcast device: 239.192.168.1:1102-1
Netdevice 0 : mcast backend multicast address: 239.192.168.1:1102, TTL:1
Configured mcast device: 239.192.168.1:1103-1
Netdevice 1 : mcast backend multicast address: 239.192.168.1:1103, TTL:1


We used the same default multicast address, but port 1103 instead of the default 1102. We are still defaulting the second parameter, which is the hardware MAC address that will be assigned to the adapter. Since we're not providing one, it will be derived from the first IP address assigned to the interface.
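The derivation is simple enough to sketch: UML builds the default MAC from the fixed fe:fd prefix followed by the four octets of the IP address, which is why the interface holding 192.168.0.1 showed HWaddr FE:FD:C0:A8:00:01 earlier. The helper below is illustrative, not UML code:

```python
def uml_mac(ip):
    """Default UML MAC: the fe:fd prefix followed by the four IP octets."""
    octets = [int(part) for part in ip.split(".")]
    return ":".join(f"{byte:02X}" for byte in [0xFE, 0xFD] + octets)

print(uml_mac("192.168.0.1"))  # FE:FD:C0:A8:00:01
```

Since 192 is 0xC0, 168 is 0xA8, and so on, the MAC is unique as long as the IP addresses on the virtual network are.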

Again, if you run ifconfig, you will see that another interface has materialized on the system:

UML1# ifconfig -a
eth0      Link encap:Ethernet  HWaddr FE:FD:C0:A8:00:01
          inet addr:192.168.0.1  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1363 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1117 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          Interrupt:5

eth1      Link encap:Ethernet  HWaddr 00:00:00:00:00:00
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          Interrupt:5

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:546 errors:0 dropped:0 overruns:0 frame:0
          TX packets:546 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0


We'll bring these up with IP addresses on a different subnet:

UML1# ifconfig eth1 192.168.1.1 up


and:

UML2# ifconfig eth1 192.168.1.2 up


As before, we can verify that we have connectivity by pinging one from the other:

UML# ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1): 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0 ttl=64 time=18.6 ms
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.4 ms

--- 192.168.1.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.4/9.5/18.6 ms


Now that we have two networks, we can do some routing experiments. We have two interfaces on each UML instance, on two different networks, with correspondingly different IP addresses. We can pretend that the 192.168.1.0/24 network is the only one working and set up one instance to reach the 192.168.0.0/24 interface on the other. So, let's first look at the routing table on one of the instances:

UML# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0


We will delete the 192.168.0.0/24 route on both instances to pretend that network doesn't work any more:

UML1# route del -net 192.168.0.0 netmask 255.255.255.0 dev eth0


and identically on the other:

UML2# route del -net 192.168.0.0 netmask 255.255.255.0 dev eth0


Now, let's add the route back in, except we'll send those packets through eth1:

UML1# route add -net 192.168.0.0 netmask 255.255.255.0 dev eth1


and on the other:

UML2# route add -net 192.168.0.0 netmask 255.255.255.0 dev eth1


Now, the routing table looks like this:

UML# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1


Before we ping the other side to make sure that the packets are traveling the desired path, let's look at the packet counts on eth0 and eth1 before and after the ping. Running ifconfig shows this output for eth0:

RX packets:3597 errors:0 dropped:0 overruns:0 frame:0
TX packets:1117 errors:0 dropped:0 overruns:0 carrier:0


and this for eth1:

RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0


The rather large packet count for eth0 comes from my playing with the network without recording everything I did here. Also, notice that the receive count for eth1 is double the transmit count. This is because of the hublike nature of the multicast network that I mentioned earlier. Every packet is seen by every host, including the one that sent it. The UML received its own transmitted packets plus the replies. Since there was one reply for each packet sent out, the number of packets received is exactly double the number transmitted.
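The bookkeeping can be checked with a line of arithmetic; the transmit count of 4 is taken from the eth1 ifconfig output above:

```python
# On a hub-like (multicast) network, a host receives back every frame it
# transmits, plus the replies from its peer.
frames_transmitted = 4           # TX count on eth1 from the ifconfig output
rx_own = frames_transmitted      # our own frames, looped back by the "hub"
rx_replies = frames_transmitted  # one echo reply per request from the peer
rx_total = rx_own + rx_replies
print(rx_total)                  # 8: RX is exactly double TX
```

On a switched network, by contrast, the sender's own frames would not come back to it, and RX would equal TX for a ping exchange.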

Now, let's test our routing by pinging one instance from the other:

UML# ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2): 56 data bytes
64 bytes from 192.168.0.2: icmp_seq=0 ttl=64 time=19.9 ms
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.4 ms
64 bytes from 192.168.0.2: icmp_seq=2 ttl=64 time=0.4 ms

--- 192.168.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.4/6.9/19.9 ms


This worked, so we didn't break anything. Let's check the packet counters for eth0 again:

RX packets:3597 errors:0 dropped:0 overruns:0 frame:0
TX packets:1117 errors:0 dropped:0 overruns:0 carrier:0


and for eth1:

RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0


Nothing went over eth0, as planned, and the pings went over eth1 for both UMLs. So, even though the 192.168.0.0/24 network is still up and running, we persuaded the UMLs to pretend it wasn't there and to use the 192.168.1.0/24 network instead.

Although this is a simple demonstration, we just simulated a scenario you could run into in real life, one that, if handled incorrectly, could seriously mess up a network.

For example, say you have two parallel networks, with one acting as a backup for the other. If one goes out of commission, you want to fail over to the other. Our scenario is similar to having the 192.168.0.0/24 network fail. Leaving the eth0 interfaces running is consistent with this because they would remain up on a physical machine on a physical Ethernet; they would just have 100% packet loss. Having somehow seen the network fail, we reset the routes so that all traffic would travel over the backup network, 192.168.1.0/24. And we did it with no extra hardware and no Ethernet cables, just a standard Linux box and some software.

Setting this up and doing the failover without having tested the procedure ahead of time would risk fouling up an entire network, with its many potentially unhappy users, some of whom may have influence over the size of your paycheck and the duration of your employment. Developing the procedure without the use of a virtual network would involve setting up two physical test networks, with physical machines and cables occupying space somewhere. Simply setting this up to the point where you can begin simulating failures would require a noticeable amount of time, effort, and equipment. In contrast, we just did it with no extra hardware, in less than 15 minutes, and with a handful of commands.




User Mode Linux
ISBN: 0131865056
Authors: Jeff Dike