Mininet is a great prototyping tool that takes existing SDN-related software components (e.g., Open vSwitch, OpenFlow controllers, network namespaces, cgroups) and combines them into a framework that can automatically set up and configure customized OpenFlow testbeds scaling up to hundreds of nodes. Standing on the shoulders of Mininet, we have implemented a similar prototyping system called ESCAPE, which can be used to develop and test various components of the service chaining architecture. Our framework incorporates Click for implementing Virtual Network Functions (VNFs), NETCONF for managing Click-based VNFs, and POX for traffic steering. We also add our extensible Orchestrator module, which can accommodate algorithms that map abstract service descriptions to deployed and running service chains.
The source code of ESCAPE is available on our GitHub page. For more information, we suggest first reading our paper:
Attila Csoma, Balazs Sonkoly, Levente Csikor, Felician Nemeth, Andras Gulyas, Wouter Tavernier, and Sahel Sahhaf: ESCAPE: Extensible Service ChAin Prototyping Environment using Mininet, Click, NETCONF and POX. Demo paper at Sigcomm'14.
An enhanced version of ESCAPE is detailed here.
- You can download a prebuilt VirtualBox image from our project page (or an older version used at Sigcomm 2014 from here).
- Alternatively, you can create your own VM image by following these steps:
- Download the old official mininet image.
- Create a new virtual machine in VirtualBox with the downloaded disk image.
- Start the VM and log in with mininet/mininet.
- Mount the Guest Additions disk image by selecting the Devices > Install Guest Additions menu item in the VirtualBox menu bar.
- Inside the VM, type (as user 'mininet'):
- git clone https://github.com/nemethf/escape.git
- cd escape
- Wait, wait, wait.
- Reboot with
- sudo reboot
To get started with ESCAPE, we suggest retracing the steps of our demo at SIGCOMM 2014. The demo presented the operation of ESCAPE as we set up a service chain consisting of two illustrative VNFs (implemented in Click): a TCP header compressor and a decompressor. It showed how the abstract service graph is defined with the GUI and how it is mapped to a configured and working service chain in the Mininet network. After the chain was up, we started iperf flows between two endpoints, and Clicky (a GUI for Click) showed the incoming bitrates and byte counts for both the compressor and the decompressor.
To see the demo with your own eyes, start the virtual machine, wait, and follow the walkthrough text appearing in the top-right corner. If you prefer video, a screencast is also available on YouTube.
The following video illustrates the main features of the ESCAPE GUI.
In what follows, we give internal details on ESCAPE. More specifically, we describe the overall architecture, the supported use cases, and the ways you can interact with ESCAPE. For further information, contact firstname.lastname@example.org or email@example.com.
In this section, the following abbreviations are used:
- VNF - Virtual Network Function
- SC - Service Chain
- SG - Service Graph
- NF-FG - Network Function Forwarding Graph (lower level SG representation)
- SAP - Service Attachment Point
- SLA - Service Level Agreement
ESCAPE follows the architecture proposed by UNIFY (an FP7 EU project). It contains three main layers: infrastructure, orchestration, and service layers (starting from the bottom), as shown in the figure below.
The infrastructure layer of ESCAPE is based on Mininet, which had to be extended in several directions. The detailed architecture is shown in the next figure.
The main components of the network infrastructure of ESCAPE are the following:
- VNF Containers (or Execution Environments or Nodes)
- OpenFlow switches (OVS instances)
- virtual Ethernet links (veth pairs)
- end hosts
Mininet was extended with the notion of a VNF and a VNF Container. Here, a VNF is a
- Click (modular router) process
- running in a VNF Container
- started with configurable isolation models (based on Linux cgroups)
- having multiple datapath interfaces connected to OVS ports
- having a dedicated control & management network connection (used by the management agents of VNFs).
A VNF Container is a
- bash process
- with configurable isolation models (based on Linux cgroups)
- with a limited amount of CPU, memory, etc. resources assigned to it.
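As an illustration, such per-container limits could be expressed through the cgroup filesystem. The sketch below follows the standard cgroup v1 layout, but the helper and the container name are hypothetical, not ESCAPE's actual code:

```python
import os

# Hypothetical sketch: where cgroup v1 limits for a VNF Container would live.
# ESCAPE's real implementation differs; this only illustrates the mechanism.
CGROUP_ROOT = '/sys/fs/cgroup'

def limit_files(container, cpu_quota_us, mem_bytes):
    """Map a container name to the (path, value) pairs that would
    enforce its CPU and memory limits under cgroup v1."""
    return [
        (os.path.join(CGROUP_ROOT, 'cpu', container, 'cpu.cfs_quota_us'),
         str(cpu_quota_us)),
        (os.path.join(CGROUP_ROOT, 'memory', container, 'memory.limit_in_bytes'),
         str(mem_bytes)),
    ]
```

Writing these values into the respective files is what ultimately caps the bash process (and all VNFs started inside it).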
In order to support remote management of VNF Containers, we have extended Mininet with NETCONF capability. Each VNF Container includes an OpenFlow switch and, additionally, a NETCONF agent which is responsible for
- starting/stopping VNFs
- connecting/disconnecting VNFs to/from OpenFlow switches.
Our NETCONF implementation is based on OpenYuma, which we extended to support multiple instances on a single machine with given ports.
The NETCONF agents are controlled from the Orchestration layer by a NETCONF client component. The interface, with its remote procedure calls and data structures, is described by a YANG model constructed for this purpose. Based on the YANG model, low-level instrumentation code was implemented as NETCONF modules to hide the infrastructure-level details. It is worth noting that this approach supports migration to other platforms, such as a data center environment managed by, e.g., OpenStack (only the instrumentation code has to be replaced).
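To give a feel for the interface, the snippet below builds a NETCONF-style RPC payload with Python's standard library. The startVNF operation name and its parameters are hypothetical stand-ins for the RPCs defined in the actual YANG model:

```python
import xml.etree.ElementTree as ET

# Standard NETCONF base namespace (RFC 6241)
NS = 'urn:ietf:params:xml:ns:netconf:base:1.0'

def build_start_vnf_rpc(vnf_type, links):
    """Build an <rpc> element carrying a hypothetical <startVNF> call.
    The operation and parameter names are illustrative only."""
    rpc = ET.Element('{%s}rpc' % NS, {'message-id': '101'})
    op = ET.SubElement(rpc, 'startVNF')          # assumed RPC name
    ET.SubElement(op, 'vnf_type').text = vnf_type
    ET.SubElement(op, 'links').text = str(links)
    return ET.tostring(rpc, encoding='unicode')
```

A NETCONF client such as ncclient would send a payload of this shape to the agent running inside the VNF Container.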
Mapping logical Service Chains/Service Graphs to available resources, optimizing the usage of different types of resources (compute, storage, networking), and steering traffic are indispensable parts of a service chaining framework. All these tasks are assigned to the Orchestration layer, which is shown in the figure below.
The main tasks and components of the Orchestrator are the following:
- mapping abstract SGs (or NF-FGs) into available resources
- optimization algorithm can easily be changed
- integrated NETCONF client
- calls exposed RPCs
- communicates with NETCONF agents
- VNF catalog
- contains a built-in set of useful VNFs
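The mapping step above can be sketched as a simple greedy algorithm: place each VNF of the graph on the first container with enough free capacity. ESCAPE's pluggable algorithms are more sophisticated; this function only illustrates where such logic plugs in, and all names are illustrative:

```python
def greedy_map(vnfs, containers):
    """Greedy first-fit mapping sketch.
    vnfs: {vnf_name: cpu_demand}; containers: {node_name: cpu_capacity}.
    Returns {vnf_name: node_name}, or raises if demand cannot be met."""
    free = dict(containers)
    placement = {}
    for vnf, demand in sorted(vnfs.items()):
        for node, capacity in sorted(free.items()):
            if capacity >= demand:
                placement[vnf] = node
                free[node] -= demand
                break
        else:
            raise ValueError('no container can host %s' % vnf)
    return placement
```

Because the algorithm is isolated behind a single function, it can be swapped for a smarter resource-optimization strategy without touching the rest of the Orchestrator, which mirrors the "easily changed" property noted above.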
Essential control functions are implemented on top of the POX OpenFlow controller platform. For example:
- traffic steering between VNFs
- configured by the Orchestrator
- other POX apps to support automatic configuration
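Conceptually, traffic steering boils down to installing per-chain forwarding rules. The POX module installs OpenFlow flow entries for this; the sketch below only models the rule computation in plain Python, with illustrative names:

```python
def build_steering_rules(chain, port_of):
    """Compute (in_port, out_port) hops for a service chain.
    chain:   ordered node names, e.g. ['sap1', 'compressor', 'sap2']
    port_of: {node_name: switch_port} (illustrative mapping).
    The resulting pairs are what OpenFlow flow entries would encode."""
    rules = []
    for src, dst in zip(chain, chain[1:]):
        rules.append((port_of[src], port_of[dst]))
    return rules
```

The Orchestrator derives the chain order from the NF-FG and hands the resulting port pairs to the controller, which translates them into flow-mod messages.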
Service layer / GUI
The Service layer of ESCAPE, with the GUI, is based on Miniedit, as shown in the next figure.
It contains a Service Graph manager which is capable of
- describing/configuring/editing SGs
- configuring requirements (SLAs) on given sub-graphs
A Mininet configuration component gives a graphical front-end to the Mininet-based infrastructure. Here, we can describe
- physical topology containing
- OpenFlow switches (e.g. OVS)
- VNF Containers
- Service Attachment Points (SAP) (currently implemented as hosts)
- resources, such as
- CPU fraction
- memory fraction
- link bandwidth
- link delay
For VNF management and visualization purposes, we use Clicky, a GUI for the Click modular router that can be used to configure and monitor running Click instances.
ESCAPE supports different use cases focusing on different parts of a service chaining architecture. The following sections give a brief overview of the main use cases and potential areas where ESCAPE could be beneficial.
Static Service Chains
The first, simple use case supports static Service Chains. As a result of our Mininet extension, novel network elements, i.e., components of the Service Chain, can be started and stopped. We can do this via the Mininet CLI, but typically software components of the Orchestrator call these functions through the given API.
When using the basic setup of ESCAPE, the VNF Containers responsible for managing Virtual Network Functions are started as Mininet CPU-limited hosts. Basically, these hosts are similar to pure Mininet hosts, but they are extended with cgroups capability in order to define and limit the available resources. The Orchestrator is able to start/stop VNFs in selected VNF Containers.
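The CPU fraction configured on such a host translates into CFS scheduler limits. Assuming the standard cgroup cpu.cfs_period_us/cpu.cfs_quota_us mechanism (a 100 ms default period), the quota can be computed as:

```python
def cfs_quota(cpu_fraction, period_us=100000, cores=1):
    """Translate a CPU fraction (0..1) into a cgroup CFS quota in
    microseconds of runtime per scheduling period.
    Assumption: the standard 100 ms CFS period; illustrative helper,
    not ESCAPE's actual code."""
    return int(cpu_fraction * period_us * cores)

# a 0.2 CPU fraction on one core -> 20000 us of runtime per 100000 us period
```

This is the kind of arithmetic a CPU-limited host performs under the hood when a container is given, say, 20% of a core.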
Dynamic Service Chains
ESCAPE also supports dynamic, on-demand configuration of Service Chains. This requires special interfaces between the Orchestration and Infrastructure layers. In these interfaces, we use NETCONF for managing VNFs and OpenFlow for traffic steering, respectively. Mininet is responsible for starting an "empty" infrastructure while demands from the users are mapped to resources dynamically by the Orchestrator.
Besides using Mininet's built-in features, it is possible to use NETCONF (Network Configuration Protocol) to manage VNF Containers, which run dedicated NETCONF agents. NETCONF is a network management protocol standardized by the IETF, intended to extend the basic question-and-answer style queries of SNMP (Simple Network Management Protocol) with Remote Procedure Calls (RPCs). In our NETCONF implementation, we use a modified version of OpenYuma as the NETCONF agent and ncclient as the NETCONF client. The interface between the Orchestrator and the VNF Container is described in the YANG data modeling language, while the low-level, device-specific parts are implemented as instrumentation code in the NETCONF libraries of OpenYuma.
Remote Service Chains
Since the operation is described in a standardized data modeling language (YANG), ESCAPE enables VNF Containers to run not just in a Mininet environment, but also in an external high-capacity node managed by, e.g., OpenStack. Only the instrumentation code needs to be replaced and implemented according to the desired framework. For example, in an OpenStack data center domain, instead of starting Click processes as VNFs, we would start dedicated virtual machines (which run the required processes). For further details on using ESCAPE in a multi-domain orchestration framework, see our EWSDN 2014 paper in the references.
ESCAPE also aims at fostering the development of novel VNFs. Currently, we support VNFs implemented in Click; however, the framework can easily be extended to include other types of network functions.
Novel VNFs implemented in Click can be added to our VNF Catalog, which stores the currently available VNFs. This database contains implementation details of the VNFs (e.g., Click scripts), general parameters (e.g., a description of the function), and VNF-specific parameters (e.g., the number of ports).
The VNF Catalog is the first implementation of our Network Function Information Base (NFIB). This database contains all the relevant information on the network functions, which can be used by users and the Orchestration modules.
We used Python's sqlite3 module, which provides an SQL interface compliant with the DB-API 2.0 specification (http://legacy.python.org/dev/peps/pep-0249/), to implement an SQL-based VNF Catalog. As we currently support Click-based VNFs, each database entry corresponding to a VNF includes:
- Click script (command)
- Click handlers (readHdr, writeHdr)
- name: the unique name of the VNF
- type: identifies the implementation type (currently only the Click type is supported)
- description: optional information on the VNF
- dependency: indicates whether the VNF depends on other elements and, if so, which VNFs
The Click handlers field stores the read/write handlers of the Click elements. These handlers can be used to configure Click instances while they are running. Depending on the implementation type of the VNFs, different repositories/databases should be considered for storing the VNF source code, VM images, etc. Currently, the sources of Click elements are stored in the local Click installation directory.
The Click script field stores Click script templates. With templates, there is no need to store multiple similar scripts differing only in minor details (e.g., the number of input/output ports). We use Jinja2 for this purpose, a templating language for Python modeled after Django's templates. The Orchestrator instantiates a Click script from the template using parameters given by users (e.g., input/output ports, references to other elements) or parameters derived automatically (e.g., interface names, IP/MAC addresses).
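ESCAPE uses Jinja2 itself; to keep this illustration dependency-free, the sketch below substitutes double-braced placeholders with Python's standard re module, mimicking what rendering a Click template looks like. The template text and the IPCompress element name are simplified, hypothetical examples, not the shipped headerCompressor template:

```python
import re

# Hypothetical Click script template with Jinja2-style placeholders.
TEMPLATE = 'FromDevice({{ VNFDEV0 }}) -> IPCompress -> ToDevice({{ VNFDEV1 }});'

def render(template, params):
    """Replace {{ name }} placeholders, as a Jinja2 rendering would."""
    return re.sub(r'\{\{\s*(\w+)\s*\}\}',
                  lambda m: params[m.group(1)], template)
```

At deployment time, the Orchestrator supplies the concrete interface names (VNFDEV0, VNFDEV1, ...) for the container the VNF lands in.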
- icon: tells the GUI which png file to use when visualizing the VNF
- hidden: useful if we do not want to enable direct use of a given VNF (e.g., for elements used by other VNFs as smaller components)
The database file of the VNF Catalog (vnfcatalogue.db) is stored and searched in the running directory of Mininet (in the provided VM: /usr/local/lib/python2.7/dist-packages/mininet-2.1.0-py2.7.egg/mininet).
Our VNF Catalog provides interfaces to add, remove or change VNFs in the Catalog. These are implemented through some helper objects in Python (see vnfcatalog.py).
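The helper objects in vnfcatalog.py are not reproduced here; the sketch below shows the kind of sqlite3 code they wrap, with an assumed (not actual) schema covering a few of the fields listed above:

```python
import sqlite3

# Assumed minimal schema for illustration; the real vnfcatalogue.db
# layout in ESCAPE may differ.
def make_catalog(path=':memory:'):
    db = sqlite3.connect(path)
    db.execute('CREATE TABLE IF NOT EXISTS VNFs '
               '(name TEXT PRIMARY KEY, type TEXT, description TEXT)')
    return db

def add_vnf(db, name, vnf_type, description=''):
    """Insert a VNF entry using DB-API parameter substitution."""
    db.execute('INSERT INTO VNFs VALUES (?, ?, ?)',
               (name, vnf_type, description))
    db.commit()

def get_vnf(db, name):
    cur = db.execute('SELECT name, type, description FROM VNFs WHERE name=?',
                     (name,))
    return cur.fetchone()
```

Using `:memory:` keeps the example self-contained; the shipped catalog instead opens the vnfcatalogue.db file from Mininet's running directory.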
Adding your own VNF
Additionally, you can use your own VNFs in the ESCAPE framework. To achieve this, follow these steps:
- create your own Click based VNF
- implement novel Click elements in C++ if necessary (and compile them)
- (or just) compose your Click script using available elements
- convert your Click script into a Jinja2 template
- change variables you would like to parameterize
- use interface names VNFDEV0, VNFDEV1, etc.
- in the simplest cases you only have to change a single interface name (e.g., eth0) to VNFDEV0 (surrounded by double braces)
- copy the template file under escape/mininet/mininet/templates/ directory
- (check e.g., headerCompressor.jinja2)
- in escape/mininet run: sudo python setup.py install (this will copy the VNF template into the running directory)
- add your VNF to the Catalog
- cd escape/mininet/examples
- edit vnfcatalog-reset.py to call the add_VNF function of the Catalog class and add the new element to the database (if you use built-in Click elements not listed in this file, add them to the list as well)
- set parameters referring to database fields (if you need to change the default value)
- run sudo python vnfcatalog-reset.py to reset the database
- check if your new VNF is in the catalog (e.g. with sqliteman)
If you prefer video, this screencast shows how to add new VNFs.
- Mininet Python API Reference, "mininet.node.CPULimitedHost Class Reference", http://mininet.org/api/classmininet_1_1node_1_1CPULimitedHost.html (accessed 2014-09-23).
- "cgroups", Wikipedia article, http://en.wikipedia.org/wiki/Cgroups (accessed 2014-09).
- R. Enns, M. Bjorklund, J. Schoenwaelder and A. Bierman, "Network Configuration Protocol (NETCONF)", RFC 6241, 2011.
- J. Case, M. Fedor, M. Schoffstall and J. Davin, "A Simple Network Management Protocol (SNMP)", RFC 1157, 1990.
- M. Bjorklund, "YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)", RFC 6020, 2010.
- ↑ Attila Csoma, Balazs Sonkoly, Levente Csikor, Felician Nemeth, Andras Gulyas, David Jocha, Janos Elek, Wouter Tavernier and Sahel Sahhaf, "Multi-layered Service Orchestration in a Multi-Domain Network Environment", EWSDN 2014, Budapest, Sept. 2014.
This work is carried out within the UNIFY FP7 EU project.