OpenFlow and PlanetLab

OpenFlow support in PlanetLab Europe

Recently, PLE has been extended with OpenFlow capabilities. The OpenFlow support is built around a modified version of Open vSwitch called sliver-ovs. Experimenters can easily create an OpenFlow overlay network by specifying the links between PLE nodes. Details can be found here and here.
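
As an illustration, a minimal conf.mk for a two-switch overlay with a single link between two PLE nodes might look like the following sketch (it reuses the variables of the full example further below; the slice and host names are placeholders):

SLICE=example_slice

HOST_PL1=onelab7.iet.unipi.it
IP_PL1=10.0.56.106/24
HOST_PL2=planet2.inf.tu-dresden.de
IP_PL2=10.0.56.107/24

LINKS := PL1-PL2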


(Note that the OpenFlow support in PlanetLab (Central) is a bit different.)

Advanced usage: connecting external nodes to the PlanetLab overlay

Internally, sliver-ovs ports are connected with simple UDP tunnels, so we can terminate the tunnels in tap devices at external nodes.
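
To illustrate the idea (this is an untested conceptual sketch, not a replacement for the tunproxy helper used in the walkthrough below), such a tunnel endpoint could in principle also be terminated with socat, assuming the sliver-ovs side exchanges one raw Ethernet frame per UDP datagram; REMOTE_HOST and REMOTE_PORT are placeholders for the endpoint printed by make:

$ sudo socat UDP:REMOTE_HOST:REMOTE_PORT,bind=:2222 \
       TUN:10.0.0.1/8,tun-type=tap,tun-name=tap0,iff-no-pi,iff-up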

Let's create an overlay with two PlanetLab switches and two external hosts:

  USER          PL1        PL2         WEB
+--------+   +-------+  +-------+   +-------+
| toto0  |   |  ovs  |  |  ovs  |   | toto0 |
|    .---+---+---O---+--+---O---+---+--.    |
|        |   |       |  |       |   |       |
+--------+   +-------+  +-------+   +-------+

Below is the relevant part of the conf.mk defining the topology depicted above:

SLICE=example_slice

HOST_PL1=onelab7.iet.unipi.it
IP_PL1=10.0.56.106/24
HOST_PL2=planet2.inf.tu-dresden.de
IP_PL2=10.0.56.107/24

HOST_USER=allegra1.tmit.bme.hu
HOST_WEB=allegra2.tmit.bme.hu

LINKS :=
LINKS += PL1-PL2
LINKS += PL1-USER
LINKS += PL2-WEB

EXTERNAL_HOSTS := WEB USER
EXTERNAL_PORT := 2222

The rest of the configuration file just overrides some of the normal Makefile rules, and it can safely be omitted when using recent sliver-ovs versions. Note that external hostnames are allowed only in the second part of a link specification. All UDP tunnels terminate at port 2222 in the example above; however, per-host values can be specified by setting, for example, the EXTERNAL_PORT_WEB variable, as shown below.
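
For instance, to terminate the tunnel towards WEB at a different UDP port while USER keeps the default, one could add something like the following (3333 is an arbitrary placeholder):

EXTERNAL_PORT_WEB := 3333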

Now we can build the overlay network as usual:

$ make -j
[...]
===> WEB: ./tunproxy -t 139.91.90.239:41539 -p 2222 -e -d
===> USER: ./tunproxy -t 131.114.59.243:51829 -p 2222 -e -d

Among other things, make prints out how the tunproxy commands should be started. So let's start them on the external hosts and test the connectivity:

USER:$ sudo ./tunproxy -t 131.114.59.243:51829 -p 2222 -e -d
(open a new terminal on USER)
USER:$ sudo ip addr add 10.0.0.1/8 dev toto0
USER:$ sudo ip link set dev toto0 up

WEB:$ sudo ./tunproxy -t 139.91.90.239:41539 -p 2222 -e -d
(open a new terminal on WEB)
WEB:$ sudo ip addr add 10.0.0.2/8 dev toto0
WEB:$ sudo ip link set dev toto0 up

USER:$ ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=55.4 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 55.460/55.460/55.460/0.000 ms

Advanced usage: connecting qemu instances to the PlanetLab overlay

Luckily, recent qemu versions and sliver-ovs use the same kind of UDP tunnels for networking. However, the qemu currently available via 'sudo yum install qemu' is at version 0.11.0 and does not have this kind of networking support. Once you have installed qemu from source (we used version 1.3.0), you can directly connect qemu instances to a sliver-ovs.
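
A minimal sketch of such a source install into the home directory (so that the binary ends up in ~/bin, as used below); the download URL and configure flags are assumptions and may need adjusting for other qemu versions:

$ wget http://wiki.qemu.org/download/qemu-1.3.0.tar.bz2
$ tar xjf qemu-1.3.0.tar.bz2
$ cd qemu-1.3.0
$ ./configure --target-list=i386-softmmu --prefix=$HOME
$ make -j && make install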

Let's modify the previous example by running USER as a virtual machine inside PL1:

     PL1            PL2         WEB
+-------------+  +-------+   +--------+
|      ovs    |  |  ovs  |   | toto0  |
|       O-----+--+---O---+---+--.     |
| USER  |     |  |       |   |        |
| +-----+--+  |  +-------+   +--------+
| |     |  |  |
| | eth0.  |  |
| |        |  |
| +--------+  |
+-------------+

Only one line should be changed in conf.mk:

SLICE=example_slice

HOST_PL1=onelab7.iet.unipi.it
IP_PL1=10.0.56.106/24
HOST_PL2=planet2.inf.tu-dresden.de
IP_PL2=10.0.56.107/24

HOST_USER=onelab7.iet.unipi.it
HOST_WEB=allegra2.tmit.bme.hu

LINKS :=
LINKS += PL1-PL2
LINKS += PL1-USER
LINKS += PL2-WEB

EXTERNAL_HOSTS := WEB USER
EXTERNAL_PORT := 2222

Although USER is running inside a PL node, it is now an external node since it does not run sliver-ovs, i.e., it acts as a host and not as a switch. Once again we can build the overlay with `make -j'. Assuming we got the same output as before, we start qemu as follows (localaddr selects the local UDP tunnel endpoint, i.e., the EXTERNAL_PORT configured above, while the udp= parameter points to the remote endpoint printed by make for USER):

 $ ~/bin/qemu-system-i386 -m 256 -hda ./debian_squeeze_i386_standard.qcow2 -nographic -net nic -net socket,localaddr=0.0.0.0:2222,udp=131.114.59.243:51829

Configure WEB as before, then inside the virtual machine:

root@debian-i386:~# ip addr add 10.0.0.1/8 dev eth0
root@debian-i386:~# ip link set dev eth0 up
root@debian-i386:~# ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=55.4 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 1ms
rtt min/avg/max/mdev = 55.460/55.460/55.460/0.000 ms

Further reading

We built a tool that helps with creating overlay networks, distributing and starting qemu images, etc.

Joseph Beshay ported back the necessary UDP tunneling to qemu-0.14.

Contact

Felicián Németh, email
