

ESCAPE at a glance

On the one hand, ESCAPE (Extensible Service ChAin Prototyping Environment) is a general prototyping framework which supports the development of several parts of the service chaining architecture including VNF implementation, traffic steering, virtual network embedding, etc. On the other hand, ESCAPE is a proof of concept prototype implementing a novel SFC (Service Function Chaining) architecture proposed by EU FP7 UNIFY project. It is a realization of the UNIFY service programming and orchestration framework which enables the joint programming and virtualization of cloud and networking resources.

For more information on the concept, motivation and demo use-cases, we suggest the following papers.

UNIFY Architecture:

  • Balázs Sonkoly, Róbert Szabó, Dávid Jocha, János Czentye, Mario Kind, and Fritz-Joachim Westphal, UNIFYing Cloud and Carrier Network Resources: An Architectural View, in Proc. IEEE Global Telecommunications Conference (GLOBECOM), 2015.

ESCAPE as a multi-domain orchestrator:

Previous version of ESCAPE:

  • ESCAPEv1
  • Attila Csoma, Balázs Sonkoly, Levente Csikor, Felicián Németh, András Gulyás, Wouter Tavernier, Sahel Sahhaf, ESCAPE: Extensible Service ChAin Prototyping Environment using Mininet, Click, NETCONF and POX, In Proceedings of ACM SIGCOMM (Demo), August 17-22, 2014, Chicago, IL, USA.
  • The source code of the previous version of ESCAPE is available at our github page.

For further information contact

High-level overview

ESCAPE is a multi-domain orchestrator; strictly speaking, it implements the Orchestration Layer (OL) of the UNIFY architecture. However, we have added a simple Service Layer (SL) interacting with clients, and implemented an Infrastructure Layer (IL) based on Mininet. This combination of components makes it easy to set up a standalone development framework and supports agile prototyping of the different elements of our SFC control plane. The high-level components and their relations are shown in Figure 1.

Figure 1: ESCAPE at a first glance

A more detailed architecture view is given in Figure 2. ESCAPE implements the main interface of UNIFY, namely the Sl-Or, both at the north and at the south. This enables multiple higher-level orchestrators on top of ESCAPE, with corresponding virtual infrastructure views provided by virtualizers. ESCAPE itself constructs and works on a global domain view. The higher-level virtualizer configurations and VNF deployments are multiplexed in this element. The connections towards different infrastructure domains, based on legacy or novel technologies, are realized via dedicated adapter modules of ESCAPE. The most important one, called the UNIFY adapter, implements the Sl-Or interface. By this means, the same interface can be used for different technological domains.

Figure 2: ESCAPE at a second glance

System architecture of ESCAPE

The detailed system architecture of the OL and the optional SL is shown in Figure 3.

Figure 3: System architecture of ESCAPE

ESCAPE is (mainly) implemented in Python on top of the POX platform. The modular approach and loosely coupled components make it easy to change several parts and evaluate custom algorithms. In this section, we introduce the main components, interfaces and features of the framework. Further details on the implementation are given in ....

Service Layer

The SL contains an API and a GUI at the top level where users can request and manage services and VNFs. The GUI has been significantly redesigned, and we are now working on the integration of Juju and ESCAPE. This will make a more professional, Juju-based user interface available by the end of the next phase. Currently, a REST API can be used for interacting with this layer. For example, REST client plugins are available for popular web browsers, hence requests can be sent simply from a browser. The API is capable of formulating a Service Graph (SG) from the request (which should follow a given JSON format describing the SG) and passes it to a dedicated service orchestrator. This element is responsible for gathering resource information provided by the Virtualizer of the lower layer (e.g., the BiS-BiS view). This information on the virtual resources is stored in our internal NFFG format. Additionally, the service orchestrator can retrieve information from the NFIB (see later). Mapping of the SG to available resources is delegated to the SG mapper module, which constructs an NFFG storing the request, the virtual resources and the mapping between NFs and infrastructure nodes in a common data structure.
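A service request of this kind can be sketched in a few lines. The helper below assembles a small SG body such as the REST API might accept; note that the field names ("NFs", "SG_hops") and the chain itself are illustrative assumptions, not ESCAPE's actual JSON schema.

```python
import json

def build_sg_request(nf_chain, hops):
    """Assemble a minimal Service Graph (SG) request body.

    NOTE: the field names ("NFs", "SG_hops") are illustrative only;
    the actual JSON schema is defined by ESCAPE's Service Layer API.
    """
    return {
        "NFs": [{"id": nf} for nf in nf_chain],
        "SG_hops": [{"src": src, "dst": dst} for src, dst in hops],
    }

# A chain SAP1 -> firewall -> SAP2 would be serialized like this and
# POSTed to the Service Layer REST API (e.g. from a browser REST plugin):
body = json.dumps(build_sg_request(
    ["firewall"],
    [("SAP1", "firewall"), ("firewall", "SAP2")]))
```

The resulting JSON string is what a browser REST plugin or any HTTP client would send in the request body.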

Orchestration Layer

The OL encompasses the most important components of the resource orchestration process, which replaces ETSI's VIM. An API is set up on the top, centralizing the interaction with the upper layer (SL or other orchestrators) and realizing (i) the full functionality of the Sl-Or interface, and (ii) an initial version of the Cf-Or interface.

According to our concept, the virtualizer is the managed entity, while the other peer of the relation is the manager. The manager can configure given deployments based on the requests coming from upper levels, and it can also retrieve the running configuration. This is a NETCONF-like configuration approach; however, we currently use a simple HTTP based transport to convey our own XML structures following the virtualizer YANG data model. In the long term, we will integrate available NETCONF clients and servers with the framework, and the standard NETCONF protocol will be used for the communication. The main (NETCONF compliant) functions supported by the API are the following:

  • get-config
  • edit-config

On the one hand, a request coming as an NFFG in an edit-config call is forwarded to the Resource Orchestrator (RO) via the corresponding Virtualizer (which is responsible for policy enforcement as well). On the other hand, the virtual view and the current configuration (containing deployment information as well) of the Virtualizer are provided as another NFFG to the upper layer via get-config calls.
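The NETCONF-like calls over plain HTTP can be sketched with a tiny client. The URL paths, the use of POST for both operations, and the injected transport callable are assumptions for illustration; ESCAPE's real endpoints may differ.

```python
class UnifyConfigClient:
    """Sketch of the NETCONF-like Sl-Or transport: plain HTTP carrying
    XML that follows the virtualizer YANG model. URL paths and the
    injected transport callable are assumptions, not ESCAPE's real API.
    """

    def __init__(self, base_url, transport):
        self.base_url = base_url
        # transport: callable(method, url, body) -> response body string
        self.transport = transport

    def get_config(self):
        # Retrieve the virtual view / running configuration as XML.
        return self.transport("POST", self.base_url + "/get-config", None)

    def edit_config(self, virtualizer_xml):
        # Push a deployment request (the virtualizer configuration).
        return self.transport("POST", self.base_url + "/edit-config",
                              virtualizer_xml)

# Usage with a stub transport standing in for a real HTTP library:
calls = []
def stub_transport(method, url, body):
    calls.append((method, url, body))
    return "<virtualizer/>"

client = UnifyConfigClient("http://localhost:8888/escape", stub_transport)
view = client.get_config()
```

Injecting the transport keeps the sketch testable without a running orchestrator; a real deployment would plug in an actual HTTP call here.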

RO is the key entity managing the components involved in the orchestration process. The input is an NFFG which should be deployed on top of the abstract domain resource view provided by the Domain Virtualizer. RO collects and forwards all required data to RO mapper which multiplexes the requests coming from different virtualizers. More specifically, the NFFG, the domain view and the NFIB are passed to the RO mapper which invokes the configured mapping strategy and optionally interacts with the Neo4j-based graph database containing information on NFs and decomposition rules. NFIB corresponds to "VNF Catalogue" in NFV MANO with the difference of supporting service decomposition. The outcome is a new NFFG which is sent to the Controller Adapter (CA) component.

The role of the CA is twofold. First, it gathers technology specific information on the resources of different domains and then builds an abstract domain view (bottom-up process flow). The RO works with this abstract model. Second, the result of the mapping process, which is an NFFG describing the deployment on the Domain Virtualizer, is decomposed according to the lower level domains and delegated to the corresponding lower level orchestrators or controllers (top-down process flow). The interactions with different types of technology domains are handled by domain managers and adapter modules. (A given domain manager can use multiple adapters for different control channels.) Currently, we have the following domain managers exploiting the given adapters:

  • SDN Domain manager: handle OpenFlow domains, convert abstract flow rules to OpenFlow flow entries
    • POX adapter: use POX OpenFlow controller to configure switches
  • Internal Domain manager: enable internal Mininet domains mainly for rapid prototyping
    • Mininet adapter: build/start/stop local Mininet infrastructure, the topology can be given as input in a config file
    • POX adapter: use POX OpenFlow controller to configure switches
    • VNFStarter / Netconf adapter: initiate/start/stop NF, connect/disconnect NF to/from switch, get info on running NF via the Netconf protocol
  • Remote Domain manager: control remote ESCAPE orchestrating e.g., a local Mininet domain (recursive orchestration)
    • UNIFY adapter: implement the Sl-Or interface based on our internal NFFG data model
  • OpenStack Domain manager: control an OS domain
    • UNIFY adapter: implement the Sl-Or interface based on the Virtualizer3 library and NFFG data model
  • Universal Node Domain manager: control a UN domain
    • UNIFY adapter: implement the Sl-Or interface based on the Virtualizer3 library and NFFG data model
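The manager/adapter pairing above can be sketched as follows. The class names follow the document, but the method bodies are illustrative stand-ins: a real POX adapter would push OpenFlow entries through the controller rather than record them in a list.

```python
class AbstractESCAPEAdapter:
    """Base class for one control channel (OpenFlow, NETCONF, REST...).
    Class names follow the document; the bodies are illustrative only."""
    def install(self, rule):
        raise NotImplementedError

class POXAdapter(AbstractESCAPEAdapter):
    """Would translate abstract flow rules to OpenFlow flow entries and
    push them via the POX controller; here we only record them."""
    def __init__(self):
        self.flow_entries = []
    def install(self, rule):
        self.flow_entries.append(("openflow", rule))

class SDNDomainManager:
    """A domain manager may drive several adapters, one per control
    channel, and fans each rule of its NFFG part out to all of them."""
    def __init__(self, adapters):
        self.adapters = adapters
    def install_nffg_part(self, rules):
        for rule in rules:
            for adapter in self.adapters:
                adapter.install(rule)

pox = POXAdapter()
SDNDomainManager([pox]).install_nffg_part(["fwd SAP1->fw", "fwd fw->SAP2"])
```

The separation mirrors the document's design: the manager knows *what* to install per domain, the adapter knows *how* to talk to the concrete control channel.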

Supported infrastructure domains

ESCAPE supports several infrastructure domains based on the previously presented domain managers and adapters. If our novel Sl-Or interface is used on top of the infrastructure domains, additional components have to be implemented. To make this integration with legacy domains easier, a dedicated library called Virtualizer3 has been implemented. As shown in Figure 4, this library is integrated with ESCAPE, and we also made use of it in the local orchestrator of the OpenStack domain and the Universal Node domain, respectively.

Figure 4: Integrating ESCAPE with different infrastructure domains

The multi-domain setup shown in Figure 4 has been demonstrated at SIGCOMM 2015. At the infrastructure level, different technologies are supported and integrated with the framework. We kept our previous Mininet based domain, orchestrated by a dedicated ESCAPE entity via NETCONF and OpenFlow control channels. Here, the NFs run as isolated Click processes. As a legacy data center solution, we support clouds managed by OpenStack and OpenDaylight. This requires a UNIFY conform local orchestrator to be implemented on top of an OpenStack domain. The control of legacy OpenFlow networks is realized by a POX controller and the corresponding domain manager with its adapter module. Finally, a proof of concept implementation of our Universal Node concept is also provided. The UN local orchestrator is responsible for controlling logical switch instances (of, e.g., xDPd) and for managing NFs run as Docker containers. High performance is achieved by Intel's DPDK based datapath acceleration.

During the demo, we showcased (i) how to create a joint domain abstraction for networks and clouds; (ii) how to orchestrate and optimize resource allocation and deploy service chains over these unified resources; and (iii) how we can take advantage of recursive orchestration and NF decomposition.

A detailed illustration of the bottom-up and top-down process flows implemented by ESCAPE is given in the next section.


Bottom-up process flow

In Figure 5, the current implementation of the bottom-up process flow in ESCAPE is illustrated by a simple example. During the bootstrap phase, ESCAPE gathers the resource information from the available domains and constructs the abstract domain view by merging individual domains. The connections between different domains are identified by inter-domain SAPs with the same id.

Figure 5: Illustration of the bottom-up process flow in ESCAPE

The steps of the bottom-up process flow are the following:

  1. get resource info from Mininet domain
  2. send resource info
  3. get resource info from SDN domain
  4. send resource info
  5. get resource info from OpenStack domain
  6. send resource info
  7. get resource info from UniversalNode domain
  8. send resource info
  9. construct global abstract domain view
  10. construct single BiS-BiS virtualizer view
  11. send status info
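Step 9, the construction of the global abstract domain view, can be sketched as a merge over per-domain views. The dict-based views, the domain names and the SAP ids below are illustrative stand-ins for the internal NFFG representation; only the matching rule (same SAP id in two domains means an inter-domain connection) comes from the text.

```python
def merge_domain_views(views):
    """Build the global abstract domain view (step 9) by merging
    per-domain views. Each view is a simplified stand-in for an NFFG:
    {"domain": ..., "nodes": [...], "saps": [...]}. Inter-domain links
    are recognized by SAPs sharing the same id across two domains."""
    nodes = []
    sap_owner = {}
    inter_domain_links = []
    for view in views:
        nodes += [(view["domain"], n) for n in view["nodes"]]
        for sap in view["saps"]:
            if sap in sap_owner:
                # The same SAP id was already seen in another domain:
                # record the inter-domain connection.
                inter_domain_links.append(
                    (sap_owner[sap], view["domain"], sap))
            else:
                sap_owner[sap] = view["domain"]
    return {"nodes": nodes, "links": inter_domain_links}

dov = merge_domain_views([
    {"domain": "Mininet", "nodes": ["EE1"], "saps": ["SAP14"]},
    {"domain": "SDN", "nodes": ["OF-SW"], "saps": ["SAP14", "SAP24"]},
    {"domain": "OpenStack", "nodes": ["NovaHost"], "saps": ["SAP24"]},
])
```

The merged result keeps every domain's nodes and adds one link per shared inter-domain SAP, which is exactly the information the RO needs for its abstract model.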

Top-down process flow

In Figure 6, a simple example is given illustrating the top-down process flow of ESCAPE. A request is formulated as an SG and sent to the REST API of the Service Layer. The Service Orchestrator in this layer maps the request to a single BiS-BiS and then sends the configuration to the Orchestration Layer. The Resource Orchestrator maps the service components to its global resource view and updates the configuration of the global domain virtualizer. As a final step, the result, given as an NFFG, is split according to the domains and sent to the corresponding resource agents, controllers or local orchestrators.

Figure 6: Illustration of the top-down process flow in ESCAPE

The steps of the top-down process flow are the following:

  1. send SG request via REST API
  2. map the request to the single BiS-BiS virtualizer
  3. send edit-config request to the Orchestrator
  4. map the service to the global domain virtualizer
  5. split the NFFG according to the domains
  6. send given part of the NFFG to the Mininet domain
  7. send given part of the NFFG to the SDN domain
  8. send given part of the NFFG to the OpenStack domain
  9. send given part of the NFFG to the UniversalNode domain
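Step 5 of the list above, splitting the mapped NFFG per domain, can be sketched as a simple partition. The (NF, domain) pair format is an illustrative stand-in: the real split also has to partition links, SAPs and flow rules of the NFFG.

```python
def split_nffg_by_domain(placements):
    """Sketch of top-down step 5: split the mapped NFFG according to
    the domains. `placements` is a simplified stand-in: (nf_id, domain)
    pairs; the real NFFG split also partitions links and flow rules."""
    parts = {}
    for nf, domain in placements:
        parts.setdefault(domain, []).append(nf)
    return parts

# Illustrative placements produced by the mapping algorithm:
parts = split_nffg_by_domain([
    ("decompressor", "Mininet"),
    ("firewall", "OpenStack"),
    ("dpi", "UniversalNode"),
])
```

Each value of the resulting dict is the NFFG part that gets sent to the corresponding domain manager in steps 6-9.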

Implementation view

This section provides implementation details on the latest version of ESCAPE. The software framework is organized into the following packages (a package contains several modules, but sometimes we use the term module for the whole package as well):

  • escape.service: implements the Service Adaptation Sublayer (SAS) of the SL of the UNIFY architecture.
  • escape.orchest: implements the components of the Resource Orchestration Sublayer (ROS) which encompasses the resource orchestration related functions of the UNIFY OL.
  • escape.adapt: implements the components of the Controller Adaptation Sublayer (CAS) which encompasses the controller adaptation related functions of the UNIFY OL.
  • escape.infr: implements a Mininet based infrastructure domain (IL of the UNIFY architecture).
  • escape.util: includes utilities and abstract classes used by other modules.

The rest of this section gives low level implementation details of these main points, following the structure of the ESCAPE packages. First, the static model of a given module is described. Second, the class structure of the implemented layer and the relations between the internal components are illustrated by a UML class diagram. Finally, the cooperation of the introduced architectural parts and the interaction steps between them are highlighted through a specific case study. This example shows the main steps of the UNIFY top-down process flow.

The final subsection presents the main elements of our internally used NFFG library.

Service module

The Service module represents the Service (Graph) Adaptation Sub-layer (SAS) of the UNIFY SL. Additionally, it contains a REST API on top of the layer for unified communication with upper components such as UNIFY actors via a GUI or other standalone applications. The static class structure of this main module is shown in Figure 8.

Figure 8: Class diagram of the Service module

One of the main logical parts of the Service module is the REST API. The REST API provides a unified interface based on the HTTP protocol and the REST design approach. The RESTServer class encompasses this functionality. It runs a custom-implemented HTTP server in a separate thread and initializes the ServiceRequestHandler class, which defines the interface of the relevant UNIFY reference point (namely the U-Sl interface). The class consists purely of the abstract UNIFY interface functions, therefore it can be replaced without the need to replace or modify other components. Via this API, a service request can be initiated (with the sg function) or the resource information provided by a Virtualizer can be queried (with the topology function). The Virtualizer object of the Service module is created and initiated during the bootstrap process, when it gathers the resource information from the lower layer (OL). The Virtualizer in the SL offers a single BiS-BiS virtual view by default. This is generated from the global resource view of the OL (the DoV component). The RESTServer uses a RequestCache instance to register every service in order to follow the status of the initiated services.

In order to separate the UNIFY API from the REST behaviour, the general functionality of HTTP request handling is defined in an abstract class called AbstractRequestHandler. This class contains the basic common functions which

  • parse the HTTP requests,
  • split and interpret the URLs according to the REST approach to determine the UNIFY API function that needs to be called,
  • parse the optional HTTP body as the parameter with respect to security requirements,
  • and delegate the request process to the actual module-level API function with the processed parameters in a common form (as an NFFG).
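The URL-splitting and delegation steps above can be sketched without the HTTP plumbing. The permission table, paths and return values are assumptions for illustration; only the dispatch idea (last path component names the UNIFY API function) follows the text.

```python
class AbstractRequestHandler:
    """Sketch of the URL splitting step: the last path component names
    the UNIFY API function to call. The permission table and the paths
    are assumptions, not ESCAPE's exact implementation."""
    request_perms = {"GET": ("topology",), "POST": ("sg",)}

    def dispatch(self, method, path, body=None):
        func_name = path.rstrip("/").rsplit("/", 1)[-1]
        if func_name not in self.request_perms.get(method, ()):
            raise ValueError("unknown API call: %s %s" % (method, path))
        # Delegate to the module-level API function with the parsed body.
        return getattr(self, func_name)(body)

class ServiceRequestHandler(AbstractRequestHandler):
    """Only the abstract U-Sl functions live here, so the class can be
    replaced without touching the HTTP handling above it."""
    def sg(self, body):
        return ("service-initiated", body)
    def topology(self, _body):
        return "single-bisbis-view"

handler = ServiceRequestHandler()
```

Keeping the generic dispatch in the abstract base and the U-Sl functions in the subclass is what lets the interface class be swapped out independently, as the text describes.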

The other main part of the Service module represents the Service Adaptation Sub-layer. The main entry and exit point is the ServiceLayerAPI class. This element realizes the actual interface of the SAS sub-layer and forwards the calls coming from external sources (e.g., REST API, file, other modules) to the appropriate subcomponents. The general behaviour for each top-layer module of ESCAPE is defined in the AbstractAPI class. This class contains the

  • basic and general steps related to the control of module's life cycle,
  • definition of dependencies on other components,
  • initiation and tear down of internal elements,
  • general interface for interaction with other modules and external actors,
  • and the management of communication between internal elements.

According to these functions, the role of the actual API classes derived from AbstractAPI is threefold. First, they hide the implementation and behaviour of POX to make the modules' implementation more portable and easily changeable. Second, they handle the module dependencies to guarantee a consistent initialization process. Third, they handle the event-driven communication between modules, so internal elements only have to know and use the common functions of the derived AbstractAPI class defined in each top-level module. Furthermore, with these functionalities provided by the AbstractAPI base class, the main modules of ESCAPE can achieve loosely coupled, transparent communication and an easily adjustable module structure.
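The dependency-handling role can be sketched as follows. The registry dict stands in for POX's core object, and the initialize-after-dependencies rule is an illustrative simplification of the consistent initialization process the text describes.

```python
class AbstractAPI:
    """Sketch of the base-class role: register the module, declare its
    dependencies, and refuse to finish initialization until they are
    present. The registry dict stands in for POX's core object."""
    dependencies = ()

    def __init__(self, registry):
        self.registry = registry
        self.initialized = False
        registry[type(self).__name__] = self

    def initialize(self):
        missing = [d for d in self.dependencies if d not in self.registry]
        if missing:
            raise RuntimeError("missing dependencies: %s" % missing)
        self.initialized = True

class ResourceOrchestrationAPI(AbstractAPI):
    pass

class ServiceLayerAPI(AbstractAPI):
    # The SL can only finish starting once the orchestration module is up.
    dependencies = ("ResourceOrchestrationAPI",)

core = {}
ros = ResourceOrchestrationAPI(core)
sas = ServiceLayerAPI(core)
ros.initialize()
sas.initialize()
```

Declaring dependencies as class data rather than hard-coding lookups is what keeps the modules loosely coupled: a module only names what it needs, and the base class enforces the ordering.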

The central point of the Service Layer is the ServiceOrchestrator class derived from the common AbstractOrchestrator base class. The base class initializes, contains and handles the internal elements which are involved in the highest level of the service chaining process. The derived orchestrator class also supervises the supplementary functions related to the service orchestration such as managing, storing and handling service requests, handling virtual resource information and choosing the algorithm for service-level mapping.

The service request managing functionality is realized by the SGManager wrapper class which offers a common interface for handling and storing service requests in a platform and technology independent way. The format in which the service requests are stored is the same internal NFFG representation class (called NFFG) which is used to store the polled resource information.

The VirtualResourceManager class handles the virtual resource information assigned to the Service module in the same way as the SGManager does for service requests. In the background, the resource information is not stored explicitly in a standalone NFFG instance. Instead, the manager class has a reference to a dedicated Virtualizer element, which can generate the resource information on the fly. Due to the wrapper classes, the storage format can easily be changed to use only the NFFG representation, and a fully separated module design can be achieved. This manager class, like all Manager classes in ESCAPE, hides the actual format of the stored resources and provides the opportunity to change its implementation transparently. The assigned Virtualizer of the SL is the default SingleBiSBiSVirtualizer, inherited from the common AbstractVirtualizer class. This Virtualizer offers the trivial one BiS-BiS virtualization, which is generated from the collected global resource information (DoV) and consists of only one infrastructure node with the aggregated resource values and the detected SAPs. The generation of the resource information and the requesting of the global resource view are executed on the fly, during the Service Layer orchestration process.
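The on-the-fly single BiS-BiS generation can be sketched as a simple aggregation. The dict-based node format and the cpu/mem resource fields are illustrative stand-ins for the internal NFFG representation; only the idea (one infrastructure node with aggregated resources plus the detected SAPs) follows the text.

```python
def single_bisbis_view(dov_nodes, saps):
    """Sketch of the SingleBiSBiSVirtualizer idea: generate, on the fly,
    one infrastructure node carrying the aggregated resources of the
    global view (DoV) plus the detected SAPs. The dict format is a
    stand-in for the internal NFFG representation."""
    aggregated = {"cpu": 0, "mem": 0}
    for node in dov_nodes:
        aggregated["cpu"] += node.get("cpu", 0)
        aggregated["mem"] += node.get("mem", 0)
    return {"id": "SingleBiSBiS", "resources": aggregated,
            "saps": sorted(saps)}

# Illustrative DoV with two infrastructure nodes:
view = single_bisbis_view(
    [{"cpu": 4, "mem": 8}, {"cpu": 16, "mem": 64}],
    {"SAP1", "SAP2"})
```

Because the view is computed from the DoV reference on demand, no second copy of the resource data has to be kept consistent, which is the point of delegating storage to the Virtualizer.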

The orchestration steps are encompassed by the ServiceGraphMapper class, which pre-processes and verifies the given information and provides it in the appropriate format for the mapping algorithm.

The mapping algorithm is defined in a separate element for simplicity and clarity. The trivial service level orchestration which is carried out by the same mapping algorithm used in the Orchestration module is executed by the DefaultServiceMappingStrategy class. The general interfaces for the mapper and strategy classes are defined in the AbstractMapper and AbstractStrategy classes.

The communication between the elements inside the modules is based on events. The Event classes in the layer components represent the different stages during the ESCAPE processes. The event-driven communication relies on POX's own event handling mechanism, but every communication primitive is attached to well-defined functions for the purpose of supporting other asynchronous communication forms, e.g., different implementations of event-driven communication based on Observer design pattern, Asynchronous Queuing or Message Bus architecture based on ZeroMQ.
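A minimal observer-based stand-in for this event mechanism can look as follows. The class and event names echo the document, but the implementation is an illustrative sketch, not POX's actual event code.

```python
class EventMixin:
    """Tiny observer-based stand-in for POX's event mechanism: modules
    raise typed events and every listener registered for that event
    type is called back (illustrative sketch, not POX's real code)."""
    def __init__(self):
        self._listeners = {}

    def add_listener(self, event_type, handler):
        self._listeners.setdefault(event_type, []).append(handler)

    def raise_event(self, event):
        for handler in self._listeners.get(type(event), []):
            handler(event)

class InstantiateNFFGEvent:
    """Carries the mapped NFFG from the SL down to the OL."""
    def __init__(self, nffg):
        self.nffg = nffg

service_layer = EventMixin()
received = []
service_layer.add_listener(InstantiateNFFGEvent,
                           lambda ev: received.append(ev.nffg))
service_layer.raise_event(InstantiateNFFGEvent("mapped-nffg"))
```

Because producers only know the event types and consumers only register handlers, either side can be replaced (e.g. by a message-bus implementation such as ZeroMQ) without touching the other, which is the portability argument the text makes.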

Steps of a service request in the Service module

  • On the top of the UNIFY hierarchy a UNIFY actor can request the virtual view / resource topology of the Service Layer to collect available resource information or to show the topology on a GUI application.
  • The service request can be given via the REST API in an HTTP message. The function is defined in the URL (the general sg function along with the POST HTTP verb) with a formatted body as an NFFG in JSON format.
  • The message is processed; the optional parameters are parsed and converted concerning the HTTP verb and delegated to the sg() function which is part of the UNIFY U-Sl API representation in the ServiceRequestHandler class. The REST API also caches the service request.
  • The main ServiceLayerAPI class receives the UNIFY API call and forwards to the central ServiceOrchestrator element.
  • The orchestrator saves the generated Service Graph in the SGManager in the internal NFFG format, obtains the resource information via the assigned SingleBiSBiSVirtualizer with the help of the VirtualResourceManager and invokes the SGMapper in order to start the mapping process.
  • The SGMapper requests the resource information from the given SingleBiSBiSVirtualizer in the NFFG format, validates the service request against the resource info with respect to sanity and syntax, performs the configured pre- and post-processing tasks and finally invokes the actual mapping algorithm of the DefaultServiceMappingStrategy.
  • The DefaultServiceMappingStrategy directly calls the configured orchestration algorithm and handles any mapping errors.
  • After the mapping process is finished, the actual Strategy element returns the outcome in an SGMappingFinished event, which is processed by the module API class, and the given NFFG is forwarded to the lower layer for instantiation via a general function. The instantiation notification is realized via the InstantiateNFFGEvent class.

Orchestration module

The Orchestration module represents the Resource Orchestration Sub-layer (ROS) of the UNIFY OL. The communication with the upper and lower layer is managed by the POX event mechanism as it is implemented in the Service module. The static class structure of this main module is shown in Figure 9. The structure of this module, the separation of internal components and their relations are designed in compliance with the Service module as precisely as possible in order to support the transparency and consistency of the ESCAPE architecture.

Figure 9: Class diagram of the Orchestration module

The Orchestration layer has its own REST API. With the ROSAgentRequestHandler class, it can provide resource information for third-party applications or a GUI and implements the UNIFY interface for the local orchestration mode (edit-config function). It can also be used for requesting the resource information of the ROS layer explicitly (get-config function). Here, we use a non-filtering Virtualizer, namely the GlobalViewVirtualizer which skips the resource virtualization and offers the global domain resource completely to the upper layers or external entities.

Additionally, the Orchestration module can be initiated with a special REST API to implement the Cf-Or interface. With this extension, ESCAPE will support elastic NFs and the special elastic router/firewall use-case proposed by the UNIFY project. The interface functions are defined in the CfOrRequestHandler class.

The main interface of the Orchestration module is realized by the ResourceOrchestrationAPI class. It has a similar role to the ServiceLayerAPI in the Service module. It manages the module's life cycle, handles internal and external communications and translates calls from events to class functions. Based on the external event-driven communication, the ResourceOrchestrationAPI realizes the relevant Sl-Or and Cf-Or interfaces.

The central component of this module is the ResourceOrchestrator which is responsible for the orchestration at the level of global domain view (DoV). It initializes, contains and controls internal module elements and gathers needed information similarly to the ServiceOrchestrator class.

The management of requested and already installed NFFG instances is performed by the NFFGManager class. The manager class uses the internal NFFG representation as the storage format.

The orchestration steps are encompassed by the ResourceOrchestrationMapper class, similarly to the mapper used in the Service module. The mapping algorithm of this layer is defined in a derived AbstractStrategy class, namely in the ESCAPEMappingStrategy class. It uses the request stored in an NFFG and the actual resource information. The Network Function descriptions can be requested via a wrapper class, i.e., the NFIBManager, which hides the implementation details of the NFIB and offers a platform-independent interface. The current version of the NFIB is implemented in a Neo4j database. This manager class is provided for the orchestration-level mapping algorithm by default.

An important task of this module is the proper handling of the virtualizers and virtual resource views. The Orchestration module is responsible for the creation, assignment and storing of virtual resource views. The functionality of these virtual views is encompassed by the AbstractVirtualizer base class. This class offers a general interface for the interaction with the Virtualizers and contains the common functions, such as generating the virtual resource information in our internal NFFG format. The derived classes of AbstractVirtualizer represent the different kinds of Virtualizers defined in the UNIFY architecture and contain the metadata for the resource information filtering. The derived classes, such as SingleBiSBiSVirtualizer and GlobalViewVirtualizer, represent the virtual views and offer the virtualized resource information for the assigned upper layer(s). The DomainVirtualizer class is a special kind of Virtualizer which represents the abstract global resource view, namely the Domain Virtualizer (DoV), created by and requested from the lower layer. The Virtualizer instances are managed and created by the VirtualizerManager class. This manager class also stores the DomainVirtualizer instance which is used for the creation of the virtual views.

The policy enforcement functions, which are closely related to the Virtualizers, are defined in the PolicyEnforcement class. This class implements the enforcement and checking functions. In every case when a derived AbstractVirtualizer instance is created, the PolicyEnforcement class is attached to that Virtualizer in order to set up the policy related functionality automatically. The attachment is performed by the PolicyEnforcementMetaClass. The policy enforcement functionality realized by these classes follows the Filter Chain approach associated with the functions of the Virtualizers. This design allows defining and attaching a checking or enforcing function before and/or after the involved function of a Virtualizer is invoked by other internal module components.
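The metaclass-based attachment can be sketched as follows. The pre/post hooks are reduced to log entries here; in ESCAPE the wrapped hooks would run the actual PolicyEnforcement checks. The class and hook names besides PolicyEnforcementMetaClass are illustrative.

```python
def _with_policy(func):
    # Filter Chain: run a pre-check, the wrapped function, a post-check.
    def wrapper(self, *args, **kwargs):
        self.policy_log.append("pre:" + func.__name__)   # policy check hook
        result = func(self, *args, **kwargs)
        self.policy_log.append("post:" + func.__name__)  # enforcement hook
        return result
    return wrapper

class PolicyEnforcementMetaClass(type):
    """Sketch of the metaclass approach: every public method of a
    Virtualizer class is wrapped with pre/post policy hooks at class
    creation time, so enforcement is attached automatically."""
    def __new__(mcs, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if callable(value) and not attr.startswith("_"):
                namespace[attr] = _with_policy(value)
        return super().__new__(mcs, name, bases, namespace)

class DemoVirtualizer(metaclass=PolicyEnforcementMetaClass):
    """Hypothetical Virtualizer subclass used only for this sketch."""
    def __init__(self):
        self.policy_log = []
    def get_resource_info(self):
        return "virtual-view"

v = DemoVirtualizer()
```

Since the wrapping happens in the metaclass's `__new__`, authors of new Virtualizer subclasses get the checking and enforcing hooks without writing any attachment code themselves.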

Steps of a service request in the Orchestration module

The input parameter is an InstantiateNFFGEvent event which is raised by the ServiceLayerAPI at the end of the process flow presented for the Service module. Besides this internal event, service requests can also be received on the REST APIs, but the process flow is similar.

  • The NFFG request can be originated from an upper layer in a form of an internal event or from a separated entity interacting via the REST API.
    • If the NFFG configuration comes from the Service module, the triggering event, called InstantiateNFFGEvent, is handled by the ResourceOrchestrationAPI class, which is the communication point between the internal components and the other top modules. The event contains the mapped service description in the format of the internal NFFG. Based on the type of the event, a dedicated event handler is invoked. These handlers in the actual top module class represent the Sl-Or API.
    • The requests received from other external entities are parsed and processed by ROSAgentRequestHandler or CfOrRequestHandler and then forwarded to the ResourceOrchestrationAPI directly.
  • The request is delegated to the central ResourceOrchestrator via the corresponding API function. The process is similar to the service request process of the Service module described in the previous section.
  • The orchestrator saves the generated NFFG using the NFFGManager wrapper (in the internal NFFG format) and obtains the global resource view as a DomainVirtualizer by invoking the VirtualizerManager class.
  • ResourceOrchestrator invokes the orchestration() function of the ResourceOrchestrationMapper class in order to initiate the NFFG mapping process.
  • The orchestration process requests the global resource information via the DomainVirtualizer and invokes the actual mapping algorithm of the ESCAPEMappingStrategy. The validation of the inputs of the mapping algorithm can be performed by the ResourceOrchestrator, as well.
  • The ESCAPEMappingStrategy uses the NFIBManager to run the algorithm and returns the mapped NFFG in the common NFFG format asynchronously, with the help of the NFFGMappingFinishedEvent.
  • The event is handled by the ResourceOrchestrationAPI class, which continues the ongoing service request by invoking a general communication function.
  • The mapped NFFG is forwarded to the lower layer via the InstallNFFGEvent.

Adaptation module

The Adaptation module represents the Controller Adaptation Sublayer (CAS) of UNIFY OL. The communication with the upper layer is managed by the POX event mechanism similarly to other modules. The static class structure of the Adaptation module is shown in Figure 10. The structure is in line with the previously introduced top API modules.

Figure 10: Class diagram of the Adaptation module

The main interface of the Adaptation module is realized by the ControllerAdaptationAPI. Its functions and responsibilities are identical to the other top API classes derived from AbstractAPI. This API realizes the corresponding UNIFY reference point, more exactly, the Or-Ca interface.

The central component of this module is the ControllerAdapter. This component initializes the domain handler component with the help of the ComponentConfigurator class. It initializes, contains and handles the internal DomainManager elements based on the global configuration of ESCAPE. Here, we follow the Component Configurator design pattern. The main tasks of the ControllerAdapter class can be split into two major parts.

First, it handles the incoming NFFG instances coming from the upper Orchestration module. For this task, the ControllerAdapter contains the functions for processing the mapped NFFG instances and splitting them into subsets of NFFG descriptions based on the initiated domain managers. The installation of the sub-parts is controlled by the ComponentConfigurator, which forwards the relevant NFFG part to the appropriate domain manager. Here, we use the same internal NFFG format.
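The per-domain split can be illustrated with a minimal sketch. The function name and domain labels are ours, and the real splitting algorithm operates on full NFFG objects rather than bare node lists:

```python
# Illustrative sketch: group the nodes of a mapped NFFG by domain so that
# each part can be handed to the corresponding domain manager.

from collections import defaultdict

def split_by_domain(nodes):
    """nodes: iterable of (node_id, domain) pairs; returns domain -> [ids]."""
    parts = defaultdict(list)
    for node_id, domain in nodes:
        parts[domain].append(node_id)
    return dict(parts)

mapped = [("nf1", "INTERNAL"), ("nf2", "OPENSTACK"), ("sap1", "INTERNAL")]
parts = split_by_domain(mapped)
```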

Second, it handles the domain changes originating from the lower layer (IL). For this task, the ControllerAdapter initiates and manages the domain managers derived from the AbstractDomainManager base class. This base class contains the main functionality for domain management, such as polling the registered domain agents, parsing and converting the collected domain resource information into the internal NFFG format, and handling topology changes. Every domain has its own domain manager. We have implemented the InternalDomainManager for our internal, Mininet-based infrastructure, the UniversalNodeDomainManager for the remote management of Universal Nodes, the OpenStackDomainManager for the remote management of an OpenStack domain, the SDNDomainManager for OpenFlow domains, and the RemoteESCAPEDomainManager for a remote domain controlled by ESCAPE (recursive orchestration via the Sl-Or interface).

For the protocol-specific communication with the domain agents (such as NETCONF-based RPCs or REST-based requests in XML format), the domain managers initiate and use adapter classes inherited from the AbstractESCAPEAdapter class. This base class defines the common management functionality and also offers a general interface for the ControllerAdapter. The adapters wrap and hide the protocol-specific details and provide a general, reusable way to interact with domain agents, e.g., adapters inherited from the AbstractRESTAdapter class for RESTful communication or adapters inherited from the AbstractOFControllerAdapter class for managing SDN / OpenFlow controllers. These adapters are also responsible for processing and converting the received raw data into the internal NFFG representation using the NFFG class.
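The adapter pattern described above boils down to the following idea. The class and method names here are illustrative stand-ins, not the real ESCAPE API; the base class fixes a common interface and each concrete adapter hides its own protocol details:

```python
# Hedged sketch of the adapter abstraction (names are illustrative).

from abc import ABC, abstractmethod

class AbstractAdapterSketch(ABC):
    @abstractmethod
    def get_topology_resource(self):
        """Return the domain topology in the internal NFFG format."""

class RESTAdapterSketch(AbstractAdapterSketch):
    """Hides a REST round trip behind the common adapter interface."""
    def __init__(self, fetch):
        self._fetch = fetch              # e.g. an HTTP GET wrapper

    def get_topology_resource(self):
        raw = self._fetch()              # protocol-specific part, hidden here
        # Conversion of the raw reply into the internal representation.
        return {"nodes": raw["hosts"]}

adapter = RESTAdapterSketch(lambda: {"hosts": ["bisbis1"]})
view = adapter.get_topology_resource()
```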

The domain / topology information from the lower layers is stored via the DomainResourceManager wrapper class, which is also managed by the ControllerAdapter. The ControllerAdapter implements the connection between the domain adapters and the domain resource database. This manager class manages the global resource information and contains the functionality for setting, merging and updating the different domain resource information (in the internal NFFG format) provided by the domain managers.

The DomainResourceManager class offers an abstract global view of the provisioned elements hiding the physical characteristics via the DomainVirtualizer class. The DomainVirtualizer is inherited from the AbstractVirtualizer, therefore the common Virtualizer interface can be used to interact with that element. The DomainVirtualizer stores the abstract global resource information in the internal NFFG format and also contains the algorithm for merging NFFG objects received from different domains.
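The merging performed when building the global view can be pictured with a simplified sketch. The real DomainVirtualizer merge works on NFFG objects and also reconciles inter-domain links; the function name and the dict layout below are ours:

```python
# Illustrative sketch: concatenate per-domain views into one global view.

def merge_domain_views(views):
    """views: dict of domain -> {'nodes': [...], 'links': [...]}."""
    global_view = {"nodes": [], "links": []}
    for domain in sorted(views):          # deterministic merge order
        global_view["nodes"].extend(views[domain]["nodes"])
        global_view["links"].extend(views[domain]["links"])
    return global_view

dov = merge_domain_views({
    "mininet":   {"nodes": ["sap1", "bisbis1"], "links": ["l1"]},
    "openstack": {"nodes": ["nova1"],           "links": []},
})
```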

Steps of a service request in the Adaptation module

The input parameter is an InstallNFFGEvent event which is raised by the ResourceOrchestrationAPI at the end of the process flow presented for the Orchestration module.

Before the orchestration process, the ControllerAdapter initiates the domain managers with the help of the ComponentConfigurator. The managers collect the resource information using the ESCAPE adapters; this information is converted and merged into one global resource view supervised by the DomainVirtualizer class.

  • The triggering event called InstallNFFGEvent is handled by the ControllerAdaptationAPI class. The event contains the mapped NFFG generated by the Orchestration module. Based on the type of the event, a dedicated event handler is invoked (Or-Ca interface).
  • The request is delegated to the corresponding function of the central ControllerAdapter class.
  • The ControllerAdapter uses its own splitting algorithm to split the given orchestrated service request according to the domains. The domains are derived from the registered domain managers.
  • The ControllerAdapter initiates the installation process in which it uses the ComponentConfigurator component to notify the domain managers and forwards the relevant NFFG parts.
  • The domain managers install the given NFFG using the ESCAPE adapters and update their topology information.
  • When the adapters finish, the global resource view is updated by the DomainResourceManager and an InstallationFinishedEvent is raised by the top API class to notify the upper layers about the successful service request process. If the Infrastructure module was initiated, the InternalDomainManager starts the network emulation and deploys the given NFFG part.
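The notify-and-forward step in the list above can be sketched as follows. This is a hypothetical simplification: only the ComponentConfigurator name comes from the text, and the manager interface is made up for illustration:

```python
# Illustrative sketch of forwarding NFFG parts to registered domain managers.

class ComponentConfiguratorSketch:
    def __init__(self):
        self._managers = {}              # domain name -> manager instance

    def register(self, domain, manager):
        self._managers[domain] = manager

    def install(self, parts):
        # Forward every per-domain NFFG part to its registered manager.
        return {d: self._managers[d].install_nffg(p) for d, p in parts.items()}

class DummyManager:
    """Stand-in for a real domain manager."""
    def install_nffg(self, part):
        return "installed:%s" % part

cfg = ComponentConfiguratorSketch()
cfg.register("INTERNAL", DummyManager())
result = cfg.install({"INTERNAL": "nffg-part-1"})
```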

The changes of the distinct domains are propagated upwards via the Virtualizer instances with the help of the DomainChangedEvent which is raised by the actual domain adapter classes. If a top module does not have an instantiated Virtualizer, a specific event is raised to request the missing Virtualizer. The process is the following.

  • If an ESCAPEVirtualizer is missing in the VirtualResourceManager of the Service module, then a MissingVirtualViewEvent is raised which is forwarded to the Resource module via the ServiceLayerAPI.
  • The ResourceOrchestrationAPI receives the event and requests an ESCAPEVirtualizer from the VirtualizerManager.
  • If the needed Virtualizer does not exist yet, the ESCAPEVirtualizer is generated by the VirtualizerManager using the DomainVirtualizer.
  • If the DomainVirtualizer is not available for the VirtualizerManager, a GetGlobalResInfoEvent is raised to request the missing DoV.
  • The event is forwarded to the ControllerAdaptationAPI which responds with the requested DomainVirtualizer in a GlobalResInfoEvent.
  • The event is handled by the ResourceOrchestrationAPI; the DomainVirtualizer is extracted from the event and stored in the VirtualizerManager.
  • The requested ESCAPEVirtualizer is generated by the VirtualizerManager using the DomainVirtualizer and returned via a VirtResInfoEvent to the ServiceLayerAPI which stores the Virtualizer in the VirtualResourceManager.
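The lookup chain above amounts to creating the virtual view lazily from the DoV and caching it. A simplified, self-contained sketch follows; the callback stands in for the GetGlobalResInfoEvent / GlobalResInfoEvent round trip, and all names are illustrative:

```python
# Illustrative sketch of lazy, cached virtual-view creation from the DoV.

class VirtualizerManagerSketch:
    def __init__(self, request_dov):
        self._dov = None                 # global view, fetched only on demand
        self._virtualizers = {}          # layer id -> generated virtual view
        self._request_dov = request_dov  # stand-in for the event round trip

    def get_virtualizer(self, layer_id):
        if layer_id not in self._virtualizers:
            if self._dov is None:
                self._dov = self._request_dov()
            # Generate the abstract view for this layer from the DoV.
            self._virtualizers[layer_id] = {"layer": layer_id,
                                            "based_on": self._dov}
        return self._virtualizers[layer_id]

calls = []
mgr = VirtualizerManagerSketch(lambda: calls.append(1) or "dov")
view = mgr.get_virtualizer("service-layer")
```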

Infrastructure module

The Infrastructure module provides a simple implementation for an IL of the UNIFY architecture. The communication with the upper layer is managed by the POX event mechanism as in the case of the other modules. The static class structure of the Infrastructure module is shown in Figure 11.

Figure 11: Class diagram of the Infrastructure module

The main interface of the Infrastructure module is realized by the InfrastructureLayerAPI which is derived from the AbstractAPI class.

The API class has a reference to the emulated Mininet-based network object, which is realized by the ESCAPENetworkBridge class. The bridge class defines the top interface for the management of the network and hides the Mininet-specific implementation details. The class is also responsible for the cleanup process, removing any remaining temporary files or configuration that Mininet cannot clean up itself.

The building of the emulated network is carried out by the ESCAPENetworkBuilder class. This class can use a predefined Topology class, similar to the Topo class in Mininet, to build the network. The necessary functions are defined in the AbstractTopology class. The builder class is also able to parse a resource graph stored in the internal NFFG format and build the Mininet network based on this resource representation.
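The NFFG-driven build can be pictured as a walk over the infrastructure nodes and links. This is a hypothetical sketch: the real builder calls the Mininet API, which is mimicked here by recording the would-be calls, and the dict layout is a simplification of the internal format:

```python
# Illustrative sketch: emit Mininet-style build calls for a resource graph.

def build_network(nffg, net):
    """nffg: dict with 'node_infras' and 'edge_links' lists (simplified
    internal format); net: a list standing in for the Mininet network."""
    for infra in nffg["node_infras"]:
        net.append(("addSwitch", infra["id"]))
    for link in nffg["edge_links"]:
        net.append(("addLink", link["src_node"], link["dst_node"]))
    return net

topo = {"node_infras": [{"id": "bisbis1"}],
        "edge_links": [{"src_node": "sap1", "dst_node": "bisbis1"}]}
net = build_network(topo, [])
```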

ESCAPE uses a modified version of Mininet which was extended with specific Node types, such as the EE (execution environment) or the NetconfAgent class. By default, the created network uses a specific RemoteController class, namely the InternalControllerProxy, to connect the control channel of the initiated OpenFlow switches directly to the InternalPOXAdapter of the Adaptation module.

NFFG module

In ESCAPE, we use an internal NFFG model and representation to store the SG, the NFFG and the resource views in a common format. This model is an evolution of the models presented in D32. The goal of the refinement was to give a general graph representation which is in line with several embedding algorithms. We have described the YANG data model, and its tree representation is shown here. It is worth noting that, regarding the main elements, there is a one-to-one mapping between the virtualizer model and the model used internally. We have defined three different types of nodes and edges, and additional metadata on the NFFG itself. More specifically, node_nfs, node_saps and node_infras are lists storing NF, SAP and infrastructure (BisBis) nodes, respectively. The edge_links, edge_sg_nexthops and edge_reqs lists describe static and dynamic infrastructure links (static links: BisBis → BisBis, SAP → BisBis; dynamic links: NF → BisBis), SG links (NF → NF, SAP → NF), and virtual links defining requirements between NFs (NF → NF, SAP → NF, NF → SAP), respectively.

module: nffg

  +--rw nffg
     +--rw parameters
     |  +--rw id         string
     |  +--rw name?      string
     |  +--rw version    string
     +--rw node_nfs* [id]
     |  +--rw id                 string
     |  +--rw name?              string
     |  +--rw functional_type    string
     |  +--rw specification
     |  |  +--rw deployment_type?   string
     |  |  +--rw resources
     |  |     +--rw cpu          string
     |  |     +--rw mem          string
     |  |     +--rw storage      string
     |  |     +--rw delay        string
     |  |     +--rw bandwidth    string
     |  +--rw ports* [id]
     |     +--rw id          string
     |     +--rw property*   string
     +--rw node_saps* [id]
     |  +--rw id        string
     |  +--rw name?     string
     |  +--rw domain?   string
     |  +--rw ports* [id]
     |     +--rw id          string
     |     +--rw property*   string
     +--rw node_infras* [id]
     |  +--rw id           string
     |  +--rw name?        string
     |  +--rw domain?      string
     |  +--rw type         string
     |  +--rw supported* [functional_type]
     |  |  +--rw functional_type    string
     |  +--rw resources
     |  |  +--rw cpu          string
     |  |  +--rw mem          string
     |  |  +--rw storage      string
     |  |  +--rw delay        string
     |  |  +--rw bandwidth    string
     |  +--rw ports* [id]
     |     +--rw id           string
     |     +--rw property*    string
     |     +--rw flowrules* [id]
     |        +--rw id           string
     |        +--rw match        string
     |        +--rw action       string
     |        +--rw bandwidth?   string
     +--rw edge_links* [id]
     |  +--rw id          string
     |  +--rw src_node    string
     |  +--rw src_port    string
     |  +--rw dst_node    string
     |  +--rw dst_port    string
     |  +--rw backward?   string
     |  +--rw reqs
     |     +--rw delay?       string
     |     +--rw bandwidth?   string
     +--rw edge_sg_nexthops* [id]
     |  +--rw id           string
     |  +--rw src_node     string
     |  +--rw src_port     string
     |  +--rw dst_node     string
     |  +--rw dst_port     string
     |  +--rw flowclass?   string
     +--rw edge_reqs* [id]
        +--rw id          string
        +--rw src_node    string
        +--rw src_port    string
        +--rw dst_node    string
        +--rw dst_port    string
        +--rw reqs
        |  +--rw delay?       string
        |  +--rw bandwidth?   string
        +--rw sg_path* [edge_sg_nexthop_id]
           +--rw edge_sg_nexthop_id    string
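A minimal request fitting the tree above could look like the following, written as a plain Python dict for illustration. The real implementation uses the dedicated NFFG classes, and the concrete ids and the functional type here are made up:

```python
# Hypothetical minimal NFFG instance: one SAP, one NF, one SG next-hop edge.
nffg = {
    "parameters": {"id": "test-req", "version": "1.0"},
    "node_nfs": [{"id": "fwd", "functional_type": "simpleForwarder",
                  "ports": [{"id": "1"}]}],
    "node_saps": [{"id": "sap1", "ports": [{"id": "1"}]}],
    "node_infras": [],
    "edge_links": [],
    "edge_sg_nexthops": [{"id": "sg1", "src_node": "sap1", "src_port": "1",
                          "dst_node": "fwd", "dst_port": "1"}],
    "edge_reqs": [],
}
```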

The static object model of our internal NFFG implementation, including the classes and their relations, is shown in Figure 12.

Figure 12: Class diagram of the NFFG module

The functionality and attributes of the NFFG elements are defined in a hierarchical structure. The common attributes are defined in the Element class. The main base classes, namely the Link and Node classes, implement the main functionality for the node and link instances of the graph representation. The specific Node and Edge subclasses define the corresponding specialized functions. For the complex attributes, dedicated classes are created, such as the Flowrule or the NodeResource classes. These main classes are grouped and stored in lists by the NFFGModel. This container class is only used when persisting or parsing the NFFG representation.

Our NFFG implementation uses the MultiDiGraph class of the networkx library to store the specific link and node classes. The MultiDiGraph object is wrapped by the NFFG class. The NFFG class defines the interface of the internal NFFG representation and contains helper functions for building NFFGs, graph operations, etc. This class also implements a workaround ensuring that the MultiDiGraph object stores our own Edge and Node classes.
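The wrapping idea can be shown with a pure-Python stand-in (no networkx dependency here): node and link objects are stored as-is, so graph queries return our own classes rather than bare ids or tuples. All names below are illustrative, not the real NFFG API:

```python
# Illustrative stand-in for the MultiDiGraph-backed NFFG wrapper.

class NodeSketch:
    def __init__(self, node_id):
        self.id = node_id

class NFFGSketch:
    def __init__(self):
        self._nodes = {}                 # id -> NodeSketch object
        self._edges = []                 # (src_id, dst_id, link_obj)

    def add_node(self, node):
        self._nodes[node.id] = node      # keep the object, not just the id
        return node

    def add_link(self, src, dst, link_obj):
        self._edges.append((src.id, dst.id, link_obj))

    def node(self, node_id):
        return self._nodes[node_id]

    def links(self):
        return list(self._edges)

g = NFFGSketch()
nf1 = g.add_node(NodeSketch("nf1"))
nf2 = g.add_node(NodeSketch("nf2"))
g.add_link(nf1, nf2, {"delay": "5ms"})
```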

Programming documentation

The online programming documentation can be found here:

ESCAPEv2 documentation


This work is carried out within the UNIFY FP7 EU project.

