Software-Defined Networking (SDN) Lab

Responsible: Mario Minardi

Short Description

During the last decade, networks have grown considerably in size and complexity due to the wide adoption of mobile devices and wireless access. In parallel, the emergence of new verticals in the context of 5G (Internet of Things, vehicles, and drones) has necessitated the support of multiple Service Level Agreements (SLAs) with heterogeneous guarantees (latency, reliability, rate, number of terminals). In an attempt to streamline network management, both the research community and industrial stakeholders have been progressively adopting network virtualization and softwarization technologies. In addition, the combination of terrestrial and non-terrestrial links (e.g. satellite) in transport networks has introduced new dimensions of network heterogeneity and dynamicity. In this context, the main challenge is to devise network-slicing algorithms that can efficiently and autonomously configure the large number of parameters of a virtualized dynamic graph representing an integrated satellite-terrestrial transport network.

To this end, the proposed network virtualization testbed sets up an experimental software platform based on Software-Defined Networking (SDN) for validating new autonomous network-slicing algorithms and evaluating their performance in integrated terrestrial-satellite systems.

 

SDN-Testbed in a nutshell:

The Network Virtualization Testbed considers the network as a graph and the slicing operation as a Virtual Network Embedding (VNE) problem. A schematic diagram of the testbed elements is shown in Figure 1.

 

Figure 1. Illustrative view of the elements of an SDN-based Network Virtualization Testbed.

To emulate the dynamic network environment and to evaluate the developed VNE algorithms, the experimental platform uses well-established open-source tools based on OpenFlow. OpenFlow, as a communication protocol, enables controllers in the network to determine the forwarding path of data packets across switches. For the physical network and traffic emulation, software-based emulators such as Mininet emulate a collection of end-hosts, switches, routers, and links. The network emulator is initialized by scripts that import real network graph datasets or, alternatively, generate random graphs based on the scenario parameters. An open-source SDN controller, such as Ryu, is used to install forwarding rules and traffic rate limiters on the emulated network and to collect feedback in the form of flow statistics. An illustrative view of the physical network and the experimental testbed components is presented in Figure 2.
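The initialization step described above — generating a random graph to feed the emulator — can be sketched as a small pure-Python helper. This is a hypothetical illustration (function and node names are not from the testbed code); in practice the resulting edge list would be passed to Mininet's topology API or replaced by an imported real dataset.

```python
import random

def random_topology(n_switches, extra_links, seed=0):
    """Generate a random connected graph as an edge list.

    A spanning tree guarantees connectivity; `extra_links` adds meshing.
    (Hypothetical helper: the real testbed scripts may instead import
    real network graph datasets.)
    """
    rng = random.Random(seed)
    nodes = [f"s{i}" for i in range(1, n_switches + 1)]
    links = set()
    # Spanning tree: attach each new switch to a random earlier one.
    for i in range(1, len(nodes)):
        links.add((nodes[rng.randrange(i)], nodes[i]))
    # Add extra random links for redundancy, avoiding duplicates.
    while len(links) < n_switches - 1 + extra_links:
        a, b = rng.sample(nodes, 2)
        if (a, b) not in links and (b, a) not in links:
            links.add((a, b))
    return sorted(links)

print(random_topology(5, 2))
```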

Figure 2. Illustrative view of the VNE process in a Physical Network and High level view of the experimental testbed components.

 

Capabilities

 

1. Basic SDN Concept:

The most basic functionality of the testbed allows the user to:

  • Create Virtual Networks (VNs) which, for the sake of the emulation, are ping sessions between the virtualized hosts;
  • Compute the necessary, optimized embedding through the VNE algorithm;
  • Use the SDN controller to fill the forwarding tables of the switches, in order to provide connectivity between the source and destination hosts.

Figure 3 depicts all steps involved in this process. 

 

Figure 3. Basic steps to provide connection for each new Virtualized service

 

  1. The ping session is generated between a source and a destination host (pair of IP addresses H2-H3). The first generated packet arrives at the first switch connected to H2;
  2. The switch triggers a flow-rule request, which is sent to the SDN controller;
  3. The SDN controller recognizes the VN based on the IP addresses (source-destination) and reads the output (embedding path) of the VNE algorithm for the identified VN;
  4. The SDN controller installs the necessary flow rules in the forwarding tables of all switches between the source and destination nodes, in order to provide connectivity;
  5. The traffic flows between the source and destination nodes.
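The controller-side logic of steps 2-4 can be sketched in plain Python. All names here (the `VNE_OUTPUT` dictionary, the IP addresses, the action strings) are hypothetical illustrations; in the testbed this logic lives inside a Ryu application that issues real OpenFlow flow-mod messages.

```python
# Output of the VNE algorithm: embedding path over the switches, keyed
# by the (source IP, destination IP) pair that identifies the VN.
# (Hypothetical data layout and addresses, for illustration.)
VNE_OUTPUT = {
    ("10.0.0.2", "10.0.0.3"): ["s1", "s4", "s7"],  # VN for H2 -> H3
}

def flow_rules_for(src_ip, dst_ip):
    """Return the (switch, match, action) rules needed for one VN.

    One rule is installed on every switch along the embedding path;
    the action forwards towards the next hop (or the host at the end).
    """
    path = VNE_OUTPUT.get((src_ip, dst_ip))
    if path is None:
        return []  # unknown VN: no rules installed
    rules = []
    for hop, switch in enumerate(path):
        next_hop = path[hop + 1] if hop + 1 < len(path) else "host"
        match = {"ipv4_src": src_ip, "ipv4_dst": dst_ip}
        rules.append((switch, match, f"output towards {next_hop}"))
    return rules

for rule in flow_rules_for("10.0.0.2", "10.0.0.3"):
    print(rule)
```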

 

Figure 4. Ping session between source and destination

From Figure 4, it can be noticed that the ping session is affected only for the first packet (Round Trip Time = 37 ms), due to the steps described above. Afterwards, once the flow rules are installed, the traffic flows without suffering any additional delay.

 

2. Satellite dynamic scenarios:

 

[REF 1] F. Mendoza, M. Minardi, S. Chatzinotas et al., “An SDN Based Testbed for Dynamic Network Slicing in Satellite-Terrestrial Networks”, IEEE International Mediterranean Conference on Communications and Networking (MeditCom) 2021, pp. 1–6.

The testbed has been updated with satellite features such as time-varying link availability and link delay. Mininet emulates dynamic connections (to simulate either temporal satellite visibility or link failures), and the VNE algorithm makes new decisions based on the current status of the network (link availability). For this scenario, the testbed emulates a simple integrated satellite-terrestrial network with a few devices, in order to facilitate the initial experiments.

Figure 5(a) shows the emulated network with 16 nodes. Nodes S13-S16 simulate MEO satellites. At every pre-defined period of time (e.g. 30 seconds), Mininet activates the links between the ground stations (S4, S5 and S9) and one of the MEO satellites. For example, as depicted in Figure 5(a), the testbed starts with S13 serving the satellite links. After 30 seconds, the links to S13 are deactivated and the links to S14 are activated. Two VNs, named VN1 (green line) and VN2 (red line), are served. This is a simplified scenario that shows the dynamicity the testbed is able to cope with. For this scenario, Figure 5(b) plots the throughput over time for VN1. It can be noticed that the throughput decreases significantly for some short periods of time. Indeed, during each change of topology (every 30 s in this scenario), some time is needed:

  • by the VNE algorithm to realize that the topology has changed (Figure 3 shows that the topology is constantly retrieved from the SDN controller and sent to the Matlab algorithm);
  • for computing the new path (VNE in Matlab);
  • for the SDN controller to apply the new flow rules.
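The 30-second link rotation of Figure 5(a) can be sketched as a simple cyclic schedule. This is a simplified illustration of what Mininet emulates in this scenario (function name and the modular-arithmetic formulation are assumptions, not the testbed's actual code):

```python
def active_satellite(t, period=30, satellites=("S13", "S14", "S15", "S16")):
    """Return which MEO satellite serves the ground stations at time t.

    Every `period` seconds the ground-station links switch to the next
    satellite, cycling through the constellation (simplified sketch of
    the link activation/deactivation emulated by Mininet).
    """
    return satellites[int(t // period) % len(satellites)]

print(active_satellite(0))    # S13 serves the first interval
print(active_satellite(35))   # after 30 s the links move to S14
```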

 

Figure 5. Integrated Satellite-Terrestrial simple scenario

 

Mininet also allows emulating a delay on each link. For example, in this scenario we consider a delay of ~27 ms for each MEO satellite link, which corresponds to an RTT of ~108 ms over the satellite path, while an almost null delay is considered for terrestrial links. To show that the delays are properly emulated, the following scenario considers a terrestrial link failure occurring 70 seconds after the beginning of the simulation. Initially, three VNs (VN1, VN4 and VN5) are embedded over the terrestrial links. We then simulate the failure of one terrestrial link, and the VNE algorithm computes new paths for the affected services (VN4 and VN5): VN5 (red line in Figure 6(b)) is forwarded over the satellite link and VN4 over another terrestrial link.
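The ~108 ms figure follows directly from the per-link delay, assuming the satellite path traverses two MEO links (ground station → satellite → ground station). A minimal arithmetic sketch:

```python
def round_trip_time_ms(one_way_link_delays_ms):
    """RTT of a path = twice the sum of its one-way link delays."""
    return 2 * sum(one_way_link_delays_ms)

# A path crossing two MEO links (uplink + downlink) at ~27 ms each
# gives the ~108 ms RTT observed in the testbed.
print(round_trip_time_ms([27, 27]))   # -> 108
```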

Figure 6(c) shows the latency for all three VNs. It can be noticed that the terrestrially embedded ones (VN1 and VN4) always experience ~1 ms of latency, apart from the peaks when topology changes happen, while VN5 experiences the terrestrial latency before the failure and the MEO satellite latency after it. This proves that Mininet correctly emulates the expected delays.

 

Figure 6. Integrated Satellite-Terrestrial simple scenario - terrestrial failure 

 

[REF 2] M. Minardi, T. X. Vu, L. Lei, S. Chatzinotas and C. Politis, “Virtual Network Embedding for Dynamic NGSO Systems: Algorithmic Solution and SDN-Testbed Validation”, submitted to IEEE Transactions on Networking, December 2021.

In order to make the satellite-oriented testbed more realistic, we introduce (Figure 7) the Systems Tool Kit (STK, https://www.agi.com/products/stk) to simulate a real satellite constellation. STK provides the access times between any satellite and any ground station, and a Matlab function translates these access times into Mininet code. It is worth underlining that this process is performed before running the simulation (Mininet, SDN controller and VNE algorithm), because the Mininet code is written based on the output of STK.
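The translation step — turning STK access windows into the link up/down events that the Mininet script replays — can be sketched as follows. The data layout (a dictionary of visibility intervals per ground-station/satellite pair) is a hypothetical illustration of the STK export, not the actual Matlab function:

```python
def access_windows_to_events(windows):
    """Translate STK access windows into time-ordered link up/down events.

    `windows` maps a (ground_station, satellite) link to a list of
    (start_s, stop_s) visibility intervals. The sorted event list can
    then drive a Mininet script that (de)activates links at runtime.
    (Hypothetical data layout, for illustration.)
    """
    events = []
    for link, intervals in windows.items():
        for start, stop in intervals:
            events.append((start, "up", link))
            events.append((stop, "down", link))
    # Sorting by time; at equal times, "down" sorts before "up", so a
    # link is deactivated before its successor is activated.
    return sorted(events)

events = access_windows_to_events({
    ("S4", "S13"): [(0, 30)],
    ("S4", "S14"): [(30, 60)],
})
print(events)
```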

 

 

Figure 7. STK integration into the testbed

 

Figure 8. O3b satellite constellation in STK

 

The details of the simulated O3b MEO constellation (locations of ground stations and satellites) can be found at https://www.ses.com/our-coverage/teleport-map. Figure 8 shows the constellation in STK, composed of 20 MEO satellites and 9 MEO gateways. Figure 9 shows the access times, over time, from a single ground station (located in Hawaii) to every satellite. It can be noticed that the behavior is periodic, with a typical LoS duration of ~2 hours. These data are translated into Mininet code, and the dynamicity is emulated accordingly.

 

 

Figure 9. Access times from one MEO gateway

 

This experiment gives the testbed the capability to automatically simulate any type of satellite/terrestrial network without manually writing hundreds of lines of Mininet code.

 

3. (Dynamic) Rate Limiters:

 

So far, the testbed has provided connectivity without assigning a maximum datarate to each VN. Since it is relevant to control the capacity of each link and to limit the datarate of each VN according to its request, the testbed has been updated with a feature named “Rate Limiters”: the SDN controller assigns a maximum datarate to each VN. Since the testbed was originally conceived to support network-slicing algorithms, the possibility of differentiating the VNs (slices) makes the testbed more network-slicing oriented. In addition, the rate limiters can be dynamic. Figure 10 shows a simplified test example in which dynamic rate limiters are assigned to a VN (end-to-end connection). In the considered example, as the timeline describes, the initial 24 seconds are without traffic limitation; therefore, if we test the link with iperf (right side of Figure 10), we get the link capacity (~300 Mbps). Afterwards, a 50 Mbps limit is assigned, which can again be verified in the iperf results. Then, after a few instants without limits, a new datarate limit of 10 Mbps is set for 60 seconds. Finally, the rate limiters are removed.
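A dynamic rate limiter is essentially a time-indexed schedule of per-VN caps. The sketch below illustrates the idea; the interval boundaries are assumptions loosely matching the Figure 10 timeline (only the initial 24 s and the 60 s window are stated in the text), and in the testbed the caps would be enforced by the SDN controller rather than looked up in Python:

```python
def rate_limit_at(t, schedule):
    """Return the active rate limit (Mbps) at time t, or None if unlimited.

    `schedule` is a list of (start_s, stop_s, limit_mbps) entries set by
    the SDN controller; outside every entry the VN can use the full link
    capacity. (Simplified sketch of the dynamic rate-limiter feature.)
    """
    for start, stop, limit in schedule:
        if start <= t < stop:
            return limit
    return None

# Assumed timeline after the Figure 10 example: no limit for the first
# 24 s, then 50 Mbps, a gap without limits, then 10 Mbps for 60 s.
SCHEDULE = [(24, 54, 50), (60, 120, 10)]
print(rate_limit_at(10, SCHEDULE))   # None: full capacity (~300 Mbps)
print(rate_limit_at(30, SCHEDULE))   # 50
```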

 

Figure 10. Dynamic rate limiters - simple scenario

 

4. Traffic statistics collection:

 

One of the purposes of the SDN Testbed is to test AI/ML traffic-engineering applications with real traffic. Thus, it is essential to collect traffic statistics, which can be analyzed by the AI/ML application and used as a reward for future embedding decisions. To this end, we added to the SDN controller a traffic-statistics collection feature for each mapped VN. OpenFlow defines several parameters for each flow rule and for each switch port. Among the parameters defined for each port, “Received Bytes” is used to compute the actual datarate, as shown in Figure 11. For each active VN, the computation is done at the last switch before the destination host, in order to obtain a precise value of the datarate experienced at the destination side. The datarate at any instant of time t is computed as shown in Figure 11, where Δt is the interval at which the traffic statistics are collected.
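The computation in Figure 11 reduces to a difference of the port's byte counter over the collection interval, converted from bytes to bits. A minimal sketch (function name is an illustration, not the controller's actual code):

```python
def datarate_mbps(rx_bytes_now, rx_bytes_prev, delta_t_s):
    """Datarate from the OpenFlow "Received Bytes" port counter.

    rate(t) = (RxBytes(t) - RxBytes(t - Δt)) * 8 / Δt, in Mbps,
    sampled at the last switch port before the destination host.
    """
    return (rx_bytes_now - rx_bytes_prev) * 8 / delta_t_s / 1e6

# 10 MB received over a 10 s collection interval -> 8.0 Mbps.
print(datarate_mbps(rx_bytes_now=10_000_000, rx_bytes_prev=0, delta_t_s=10))
```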

Figure 11. Computation of real-time throughput 

 

Simulations have shown that Δt should not be smaller than 5-10 seconds in this testbed in order to obtain meaningful statistics. Figure 12 compares different values of Δt and shows that reliable values are obtained for Δt > 5 s. The VN sends 4 Mbps of traffic, which is limited to 1.6 Mbps by the SDN controller. The measured traffic values are, as expected, close to the rate limit. The high variability of the collected traffic for Δt < 3 s is due to the non-ideality of the traffic flowing through the substrate network.

 

Figure 12. Comparison of collection statistics frequency

 

5. Traffic generator Ostinato:

 

In the previous experiments, the traffic was generated with either ping or iperf. However, for testing VNE algorithms that follow a pre-defined random VN generation model (e.g. a Poisson process), it is very convenient to have an automated traffic-generator tool that can test several VNs at the same time, without the need to start each one manually. For this reason, the traffic generator Ostinato (https://ostinato.org/docs/) has been added to the testbed. Ostinato allows creating traffic with differentiated criteria (from L2 to L4) and with varying datarate. An example is shown in Figure 13, where a traffic pattern with varying datarate is configured in the GUI. Ostinato also has a built-in API, which is used to read traffic models and generate the corresponding streams. A more detailed explanation is provided in the following section.

Figure 13. Ostinato GUI

 

OSTINATO API: 

The Ostinato API automates the instantiation of a multitude of traffic streams, each with differentiated traffic properties (source, destination, datarate, data to be transmitted). An example with 5 different applications is provided. We simulate 5 VNRs as follows:

VNR ID | Type of traffic             | Datarate | Transmitted Data | Latency requirements
-------|-----------------------------|----------|------------------|---------------------
1      | In-flight video streaming   | 25 kbps  | 2.5 MB           | no
2      | In-flight text messaging    | 1 kbps   | 1 KB             | no
3      | Autonomous ship navigation  | 10 kbps  | 10 KB            | no
4      | Vehicle Collision Avoidance | 10 kbps  | 100 KB           | ≤ 10 ms
5      | Manufacturer updates        | 5 kbps   | 375 KB           | no

 

The VNRs are not generated simultaneously, but follow the timeline shown in Figure 14 (randomly generated for illustration purposes):

 

Figure 14. Timeline for VNRs' generation

 

As explained in https://userguide.ostinato.org/architecture/, the traffic is generated by the Ostinato drone application. For this reason, the testbed has been adapted to run the drone application on every host. Figure 15 shows the specific networking setup used to handle the Ostinato architecture. On the right side, the VNE algorithm is computed externally. Its two main outputs are the embedding of each VNR, which is sent to the controller to install the necessary flow rules, and the traffic models, which are sent to the Ostinato API to be generated and transmitted through the substrate network (emulated in Mininet). Each VNR is point-to-point traffic, i.e. traffic from one host to another. Each host has two interfaces, eth0 and eth1: eth0 is used to send the traffic over the network (the substrate network of links among SDN-enabled switches), while eth1 is used to communicate the traffic stream(s) to the drone application running on each host. We use an additional switch, external to the substrate network and with routing capabilities (IP settings shown in Figure 15), to connect to each host and send the stream commands. This network architecture is implemented to avoid interference between management and user traffic: the user traffic is on the network 10.0.0.0/8, as set by default in Mininet, while the management traffic is on 10.0.1.0/24.
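Translating a row of the VNR table into stream parameters for the traffic generator is a small arithmetic exercise. The sketch below is a hypothetical illustration (the fixed 1000-byte packet size and the function name are assumptions, not the testbed's actual configuration); the resulting values are the kind that would be passed to the Ostinato API when configuring a stream:

```python
def stream_parameters(datarate_kbps, data_kb, packet_bytes=1000):
    """Translate a VNR traffic model into stream parameters.

    Returns (packets_per_second, total_packets, duration_s) for one
    stream. Assumes a fixed packet size (hypothetical 1000 B here).
    """
    total_packets = data_kb * 1000 // packet_bytes
    pps = datarate_kbps * 1000 / 8 / packet_bytes   # kbps -> bytes/s -> pkt/s
    duration_s = total_packets / pps
    return pps, total_packets, duration_s

# VNR 1 from the table: 25 kbps, 2.5 MB to transmit.
pps, npkts, dur = stream_parameters(datarate_kbps=25, data_kb=2500)
print(pps, npkts, dur)   # 3.125 packets/s, 2500 packets, 800.0 s
```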

 

 

Figure 15. OSTINATO API Architecture

 

In Figure 16, we collect the traffic from each VNR and check that the traffic is correctly sent.

Figure 16. Collected traffic

It is worth underlining that the use case considered here is a very simple scenario, designed to showcase the capabilities of the Ostinato API. Thanks to this automation process, mapping algorithms with hundreds of traffic requests can be tested with real traffic in the testbed, without manually configuring the traffic (a time-consuming process).

 

 

Up-to-date Testbed setup:

 

The current setup of the testbed, including all the functionalities explained above, is shown in Figure 17. All components are software-based and run within a single virtual machine.

 

 

Figure 17. Up-to-date testbed setup

 

 

Related projects:

- ASWELL: AutonomouS NetWork Slicing for IntEgrated SateLlite-TerrestriaL Transport Networks

 

Contact: mario.minardi@uni.lu