• MININET
Before measuring any performance metric, the pingall command was used to verify that the hosts in the network could communicate with each other. Figure 08 shows the result of testing connectivity among the hosts in Mininet.
Figure 09 shows the packet loss and latency results on Mininet.
50 packets transmitted: This indicates that the sender sent a total of 50 ICMP (Internet Control Message Protocol) echo request packets to the destination host.
50 received: This means that all 50 of the ICMP echo request packets sent were successfully received by the destination host. In other words, none of the packets were lost in transit.
0% packet loss: This is a summary statement based on the “50 packets transmitted” and “50 received” values. It means that there was no packet loss during the test. All transmitted packets were successfully received; thus, the packet loss rate is 0%.
Time 50192ms: This indicates the total time taken for the entire ping test. In this case, it took 50,192 milliseconds (approximately 50.2 seconds) to send all 50 packets and receive responses from the destination host.
Rtt min/avg/max/mdev = 0.069/0.100/0.935/0.126 ms:
rtt: Stands for “round-trip time,” which is the time it takes for a packet to travel from the sender to the receiver and back. It is measured in milliseconds (ms).
Min: The minimum round-trip time observed during the test. In this case, the minimum round-trip time was 0.069 ms.
Avg: The average round-trip time calculated from all the packets sent and received. In this case, the average round-trip time was 0.100 ms.
Max: The maximum round-trip time observed during the test. In this case, the maximum round-trip time was 0.935 ms.
Mdev: Stands for “mean deviation.” It is a measure of the variation or dispersion of the round-trip times. In this case, the mean deviation was 0.126 ms.
Figure 10 shows the CPU and memory utilization of Mininet.
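The statistics in the ping summary above can be reproduced from raw round-trip-time samples. The sketch below (with illustrative sample values, not the actual RTTs behind Figure 09) computes min/avg/max/mdev the way iputils ping does, where mdev is the population standard deviation of the RTTs:

```python
import math

def ping_stats(rtts_ms):
    """Compute min/avg/max/mdev as reported by iputils ping.

    mdev is sqrt(mean(rtt^2) - mean(rtt)^2), i.e. the population
    standard deviation of the round-trip times, in milliseconds.
    """
    n = len(rtts_ms)
    avg = sum(rtts_ms) / n
    avg_sq = sum(r * r for r in rtts_ms) / n
    mdev = math.sqrt(max(avg_sq - avg * avg, 0.0))
    return min(rtts_ms), avg, max(rtts_ms), mdev

def packet_loss_pct(transmitted, received):
    """Packet loss percentage as shown in the ping summary line."""
    return 100.0 * (transmitted - received) / transmitted

# Illustrative samples (not the actual 50 RTTs from Figure 09):
samples = [0.069, 0.080, 0.095, 0.102, 0.935]
mn, avg, mx, mdev = ping_stats(samples)
print(f"rtt min/avg/max/mdev = {mn:.3f}/{avg:.3f}/{mx:.3f}/{mdev:.3f} ms")
print(f"{packet_loss_pct(50, 50):.0f}% packet loss")
```

With all 50 probes answered, `packet_loss_pct(50, 50)` is 0%, matching the “0% packet loss” line in Figure 09.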
• NS3
Figure 11 shows the results of the packet loss and latency on NS3.
• GNS3
Figure 12 shows the Faucet controller configuration on GNS3, while Figure 13 shows the configuration of the routers used to generate the packet loss and latency results.
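For context, a Faucet controller configuration such as the one in Figure 12 is expressed in YAML. The fragment below is only a minimal illustrative sketch of the format; the VLAN name, VLAN ID, datapath ID and port names are hypothetical, not the values used in this study:

```yaml
# faucet.yaml -- minimal illustrative sketch, not this study's actual config
vlans:
  office:
    vid: 100
    description: "hosts VLAN"
dps:
  sw1:
    dp_id: 0x1
    hardware: "Open vSwitch"
    interfaces:
      1:
        name: "h1"
        native_vlan: office
      2:
        name: "h2"
        native_vlan: office
```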
• OPNET
Figure 14 shows the attributes configuration for the sensor, actuator and controller on OPNET 14.5.
Figure 15 shows the configuration of DES statistics on OPNET 14.5 to generate results for performance metrics.
Table 2 shows all the results on performance metrics from all the simulation environments.
Table 2
Performance metrics from all the simulators.
| Metric | MININET | NS3 | GNS3 | OPNET | OMNET++ |
|---|---|---|---|---|---|
| Throughput | 6.2 Gb/s | 0.99 Mb/s | 350 Mb/s | 4.5 Mb/s | 150 Mb/s |
| Packet loss | 0% | 0% | 20% | 0.3% | 1% |
| Latency | 0.190 ms | 10.56 ms | 21 ms | 1.5 ms | 6 ms |
| CPU utilization | 5.9% | 30% | 2% | 45% | 35% |
| Memory utilization | 11.1% | 60% | 19.2% | - | 50% |
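Because Table 2 mixes Gb/s and Mb/s, the throughput figures are easier to compare once normalised to a single unit. A small sketch (assuming the lowercase "mb/s" entries in the source denote Mb/s):

```python
# Normalise the Table 2 throughput values to Mb/s for a like-for-like ranking.
UNIT_TO_MBPS = {"Gb/s": 1000.0, "Mb/s": 1.0}

throughput = {            # values as reported in Table 2
    "MININET": (6.2, "Gb/s"),
    "NS3": (0.99, "Mb/s"),
    "GNS3": (350, "Mb/s"),
    "OPNET": (4.5, "Mb/s"),
    "OMNET++": (150, "Mb/s"),
}

in_mbps = {sim: value * UNIT_TO_MBPS[unit]
           for sim, (value, unit) in throughput.items()}

# Print simulators from highest to lowest throughput.
for sim, mbps in sorted(in_mbps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{sim:>8}: {mbps:g} Mb/s")
```

Normalised this way, Mininet's 6.2 Gb/s (6,200 Mb/s) is roughly four orders of magnitude above NS3's 0.99 Mb/s, which is the gap the analysis below discusses.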
ANALYSIS
Throughput:
Mininet demonstrates the highest throughput, making it the best choice when a high data transfer rate is the top priority. However, it is a lightweight simulator that is not designed for simulating large networks with heavy traffic. GNS3, OPNET and OMNET++ also offered decent throughput. Although NS3 has the lowest throughput, it is a more complex simulator designed for simulating large networks with heavy traffic.
Packet loss:
Mininet, NS3, OPNET and OMNET++ exhibited low packet loss rates, which is desirable because low packet loss is crucial for data integrity; this makes any of them a preferred choice when low packet loss is paramount. GNS3 showed a much higher packet loss rate (20%), which would be a concern for applications sensitive to data loss.
Latency:
High latency adversely affects real-time applications, making GNS3 and NS3, which had the highest latency values, less suitable choices for real-time applications. On the other hand, Mininet demonstrated the lowest latency, which is beneficial for low-latency network designs.
CPU and Memory Utilization:
Unlike Mininet and GNS3, which showed moderate resource utilization and are therefore suitable for projects with moderate hardware constraints, NS3 and OPNET had the highest CPU and memory utilization, which could be a concern if resource efficiency matters to the user. Overall, Mininet strikes a balance between resource utilization and performance.
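CPU utilization figures like those in Table 2 can be approximated for any process by comparing the CPU time it consumes with the wall-clock time that elapses. A minimal sketch (measuring this Python process itself rather than a simulator run; the `busy` workload is purely illustrative):

```python
import time

def cpu_utilization(work, *args):
    """Run work(*args) and return its approximate CPU utilization in %.

    Ratio of process CPU time to wall-clock time over the call; values
    near 100% mean one fully busy core (can exceed 100% with threads).
    """
    wall0, cpu0 = time.perf_counter(), time.process_time()
    work(*args)
    wall = time.perf_counter() - wall0
    cpu = time.process_time() - cpu0
    return 100.0 * cpu / wall if wall > 0 else 0.0

def busy(n):
    """CPU-bound illustrative workload."""
    total = 0
    for i in range(n):
        total += i * i
    return total

print(f"busy loop: {cpu_utilization(busy, 2_000_000):.0f}% CPU")
print(f"sleep:     {cpu_utilization(time.sleep, 0.2):.0f}% CPU")
```

A CPU-bound loop reports utilization near 100%, while a sleeping process reports utilization near 0%, which is the distinction behind the per-simulator figures in Table 2.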
Comparison of the SDN controllers
Table 3 shows the comparison of the SDN controllers investigated in this study.
Table 3
| Factor | POX | FAUCET |
|---|---|---|
| Community support | Large and active community | Large and active community |
| Complexity | Lightweight | Complex |
| Ease of use | Easy to use | More difficult to learn and use |
| Deployment | Good for small networks and for rapid prototyping of new SDN applications | Good for large networks or for deploying complex SDN applications |
| Features | Basic features | Wide variety of features |
| Flexibility | Flexible | Flexible |
| Programming language | Python | Python |
Comparison of the selected simulation environments
Table 4
The Selected Simulation Environments Compared.
| Simulation environment | Analysis of SDN scenarios |
|---|---|
| Mininet | Good for small and medium-sized networks. Easy to use and can be used to create virtual networks with OpenFlow switches. |
| GNS3 | Can be used to simulate SDN scenarios, but it is not as lightweight as Mininet. |
| NS3 | Good for researchers and developers who need to create detailed simulations of real-world networks. Can be used to simulate SDN scenarios, though it is not as easy to use as Mininet. |
| OMNET++ | Similar to NS3 in terms of its features and complexity. A great choice for researchers and developers who need a powerful and customizable network simulator. |
| OPNET | Very powerful and can be used to simulate large and complex networks. However, it is not as easy to use as Mininet, as observed when it was used to simulate SDN scenarios in this study. |