Use iPerf to test the throughput inside an OCI Hub and Spoke VCN Routing architecture
In today's rapidly evolving cloud environments, ensuring optimal network performance is crucial for seamless operations. Oracle Cloud Infrastructure (OCI) provides robust networking capabilities, including the Hub and Spoke Virtual Cloud Network (VCN) architecture, to facilitate efficient communication and resource management. One essential aspect of maintaining this architecture is regularly testing the network throughput to identify potential bottlenecks and optimize performance.
In this tutorial, we will use iPerf, a powerful network testing tool, to measure and analyze the throughput within an OCI Hub and Spoke VCN Routing architecture. By the end of this guide, you'll be equipped with the knowledge to effectively assess and enhance your OCI network's performance, ensuring your applications and services run smoothly.
Disclaimer
The test results obtained using iPerf depend highly on various factors, including network conditions, hardware configurations, and software settings specific to your environment. As such, these results may differ significantly from those in other environments. Please do not use these results to make any definitive conclusions about the expected performance of your network or equipment. They should be considered as indicative rather than absolute measures of performance.
The Steps
- [ ] STEP 01: Review the OCI Hub and Spoke VCN Routing architecture
- [ ] STEP 02: Install iPerf3 on the Hub Instances
- [ ] STEP 03: Install iPerf3 on the Spoke Instances
- [ ] STEP 04: Install iPerf3 on the ONPREM Instances
- [ ] STEP 05: Install iPerf2
- [ ] STEP 06: Define the iPerf Tests and prepare the iPerf commands
- [ ] STEP 07: Perform iPerf tests within the same VCN in the same subnet
- [ ] STEP 08: Perform iPerf tests within the same VCN across different subnets
- [ ] STEP 09: Perform iPerf tests between two different VCNs
- [ ] STEP 10: Perform iPerf tests between different VCNs (bypassing the pfSense Firewall)
- [ ] STEP 11: Perform iPerf tests between ONPREM and OCI Hub VCN
- [ ] STEP 12: Perform iPerf tests between ONPREM and OCI Spoke VCN
- [ ] STEP 13: Perform iPerf tests between ONPREM and OCI Spoke VCN (bypassing the pfSense Firewall)
- [ ] STEP 14: Perform iPerf tests between the INTERNET and the OCI Hub VCN
- [ ] STEP 15: Perform iPerf tests within the same subnet ONPREM
iPerf versions
iPerf, iPerf2, and iPerf3 are tools used to measure network bandwidth, performance, and throughput between two endpoints. However, they have some key differences in terms of features, performance, and development status. Here's a breakdown:
iPerf (original)
- Release: Initially released around 2003.
- Development: The original iPerf has largely been replaced by its successors (iPerf2 and iPerf3).
- Features: Basic functionality for testing network bandwidth using TCP and UDP.
- Limitations: Over time, it became outdated due to a lack of support for modern networking features.
iPerf2
- Release: Forked from the original iPerf and maintained independently.
- Development: Still actively maintained as a separate project (currently hosted on SourceForge).
- Features:
- Supports both TCP and UDP tests.
- Multithreading: iPerf2 supports multithreaded testing, which can be useful when testing high-throughput environments.
- UDP multicast and bidirectional tests.
- Protocol Flexibility: Better handling of IPv6, multicast, and other advanced networking protocols.
- Performance: Performs better than the original iPerf for higher throughputs due to multithreading support.
- Use Case: Best for situations where legacy features, such as IPv6 and multicast, are necessary, or if you require multithreading in testing.
iPerf3
- Release: A from-scratch rewrite released by ESnet (Energy Sciences Network). The rewrite focused on cleaning up the codebase and modernizing the tool.
- Development: Actively maintained with frequent updates.
- Features:
- Supports both TCP and UDP tests.
- Single-threaded: iPerf3 does not support multithreading, which can be a limitation for high throughput in certain environments.
- Supports reverse mode for testing in both directions, bidirectional tests, and multiple streams for TCP tests.
- JSON output for easier integration with other tools.
- Improved error reporting and network statistics.
- Optimized for modern network interfaces and features like QoS and congestion control.
- Performance: iPerf3 is optimized for modern networks but lacks multithreaded capabilities, which can sometimes limit its performance on high-bandwidth or multi-core systems.
- Use Case: Best for most modern networking environments where simpler performance tests are required without the need for multithreading.
Key Differences
Feature | iPerf | iPerf2 | iPerf3 |
---|---|---|---|
Development | Discontinued | Actively Maintained | Actively Maintained |
TCP and UDP Tests | Yes | Yes | Yes |
Multithreading Support | No | Yes | No |
UDP Multicast | No | Yes | No |
IPv6 Support | No | Yes | Yes |
JSON Output | No | No | Yes |
Reverse Mode | No | Yes | Yes |
We will use iPerf2 where this is possible throughout this tutorial.
Best for High Throughputs?
For high-throughput environments, iPerf2 is often the best choice due to its multithreading capabilities, which can take full advantage of multiple CPU cores. This is especially important if you're working with network interfaces capable of handling multiple gigabits per second (Gbps) of traffic.
If multithreading isn't crucial, iPerf3 is a good choice for simpler setups or modern networks with features like QoS and congestion control. However, in very high-throughput environments, its single-threaded nature might become a bottleneck.
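To make the difference concrete, the hedged commands below contrast parallel streams in the two tools; the server address 172.16.1.50 is one of the instance IPs used later in this tutorial and is only a placeholder here.
# iPerf2: each parallel stream (-P) runs in its own thread, so multiple CPU cores can be used
iperf -s
iperf -c 172.16.1.50 -P 4 -t 30
# iPerf3: -P 4 also creates four streams, but classic iPerf3 releases handle them in a single thread
iperf3 -s
iperf3 -c 172.16.1.50 -P 4 -t 30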
Why is MSS Clamping used?
When traffic flows through an IPSec tunnel and through the pfSense Firewall, the MSS is something to pay attention to.
MSS Clamping refers to "Maximum Segment Size Clamping," which is a technique used in network communications, particularly in TCP/IP networks, to adjust the maximum segment size (MSS) of a TCP packet during the connection setup process. The MSS defines the largest amount of data that a device can handle in a single TCP segment, and it's typically negotiated between the communicating devices during the TCP handshake.
MSS Clamping is often employed by network devices such as routers, firewalls, or VPNs to avoid issues related to packet fragmentation. Here's how it works:
- Packet Fragmentation Issues: If the MSS is too large, packets may exceed the Maximum Transmission Unit (MTU) of the network path, leading to fragmentation. This can cause inefficiency, increased overhead, or in some cases, packet loss if the network doesn't handle fragmentation well.
- Reducing the MSS: MSS Clamping allows the network device to adjust (or "clamp") the MSS value downward during the TCP handshake, making sure that the packet sizes are small enough to traverse the network path without needing fragmentation.
- Use in VPNs: MSS Clamping is commonly used in VPN scenarios where the MTU size is reduced due to encryption overhead. Without MSS Clamping, packets might get fragmented, reducing performance.
Example of MSS clamping
If a client device sends an MSS value of 1460 bytes during the TCP handshake but the network's MTU is limited to 1400 bytes due to VPN encapsulation, the network device can clamp the MSS to 1360 bytes (allowing for the extra overhead) to avoid fragmentation issues.
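A minimal way to verify numbers like these from a Linux instance is to probe the path with the Don't Fragment bit set; the target address is a placeholder from this tutorial and the exact payload sizes depend on your tunnel overhead.
# 1372 bytes of ICMP payload + 8 bytes ICMP header + 20 bytes IP header = a 1400-byte packet on the wire
ping -M do -s 1372 172.16.1.50
# If this succeeds but -s 1472 (a 1500-byte packet) fails, the path MTU is about 1400 bytes,
# so the TCP MSS should be clamped to roughly 1400 - 40 bytes of IP/TCP headers = 1360 bytes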
Important Information (before you start)
Ports Used
The default ports used by iPerf2 and iPerf3 for TCP and UDP are:
 | TCP Port | UDP Port |
---|---|---|
iPerf2 | 5001 | 5001 |
iPerf3 | 5201 | 5201 |
Both versions allow you to specify a different port using the -p flag if necessary.
For testing purposes, I recommend opening ALL the ports between the SOURCE and DESTINATION IP addresses of the iPerf endpoints.
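If you prefer to open only specific ports, the sketch below shows how to move iPerf3 to a non-default port with the -p flag; port 5999 is an arbitrary example and must also be allowed in the relevant security lists or network security groups.
iperf3 -s -p 5999
iperf3 -c <server_instance_private_ip_address> -p 5999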
MTU Sizes
iPerf will send data between two endpoints: a client and a server.
When running an iPerf test, understanding the MTU (Maximum Transmission Unit) size is crucial because it directly impacts network performance, packet fragmentation, and test accuracy. Here's what you should consider regarding MTU sizes during an iPerf test:
Default MTU Size
- The default MTU size for Ethernet is 1500 bytes, but this can vary based on the network configuration.
- Larger or smaller MTU sizes can affect the maximum size of packets sent during the iPerf test. Smaller MTU sizes will require more packets for the same amount of data, while larger MTU sizes can reduce the overhead.
Packet Fragmentation
- If the MTU size is set too small, or if the iPerf packet size is larger than the network's MTU, packets may be fragmented. Fragmented packets can lead to higher latency and reduced performance in your test.
- iPerf can generate packets up to a specific size, and if they exceed the MTU, they’ll need to be split, introducing extra overhead and making the results less reflective of real-world performance.
Jumbo Frames
- Some networks support jumbo frames, where the MTU is larger than the standard 1500 bytes, sometimes reaching 9000 bytes. When testing in environments with jumbo frames enabled, configuring iPerf to match this larger MTU can maximize throughput by reducing overhead from headers and fragmentation.
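As an example (assuming a Linux instance where the interface is named ens3, which may differ in your environment), you can check the current MTU and, where jumbo frames are supported end to end, raise it:
ip link show ens3
sudo ip link set dev ens3 mtu 9000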
MTU Discovery and Path MTU
- Path MTU discovery helps ensure that packets do not exceed the MTU of any intermediate network. If iPerf sends packets larger than the path MTU and fragmentation is not allowed, the packets might get dropped.
- It’s important to ensure that ICMP "Fragmentation Needed" messages are not blocked by firewalls, as these help with path MTU discovery. Without it, larger packets may not be successfully delivered, resulting in performance issues.
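A quick, hedged way to see the discovered path MTU from a Linux instance is tracepath (part of the iputils tools); the target address is a placeholder from this tutorial.
tracepath 172.16.1.50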
TCP vs UDP Testing
- In TCP mode, iPerf automatically handles packet size and adjusts according to the path MTU.
- In UDP mode, the packet size is controlled by the user (using the -l flag), and this size must be less than or equal to the MTU to avoid fragmentation.
Adjusting MTU in iPerf
- Use the -l option in iPerf to manually set the length of UDP datagrams.
- For testing with specific MTU sizes, it's useful to ensure that your network and interfaces are configured to match the desired MTU value to avoid mismatches.
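As a small sketch of the -l option described above, the following iPerf2 UDP test sends 1400-byte datagrams, which fit within a 1500-byte MTU once the UDP and IP headers are added; the server address is a placeholder.
iperf -s -u -i 1
iperf -c <server_instance_private_ip_address> -u -b 100m -l 1400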
Consistency Across Network Segments
- Ensure the MTU size is consistent across all network devices between the two endpoints. Mismatched MTU settings can cause inefficiency due to fragmentation or dropped packets, leading to inaccurate test results.
VPN Related
When using a VPN (Virtual Private Network), MTU size and network performance become even more significant due to the additional layers of encapsulation and encryption. VPNs introduce extra overhead, which can affect the performance of tools like iPerf.
Here’s a deeper look at VPN connections and their impact on network testing:
Key Concepts of VPN and MTU
- Encapsulation Overhead
- VPN protocols, such as IPsec, OpenVPN, WireGuard, PPTP, or L2TP, add extra headers to the original data packet for encryption and tunneling purposes.
- This extra overhead reduces the effective MTU size because the VPN must accommodate both the original packet and the added VPN headers. For example:
- IPsec adds around 56 to 73 bytes of overhead.
- OpenVPN adds about 40-60 bytes, depending on the configuration (e.g., UDP vs. TCP).
- WireGuard adds around 60 bytes.
- If you don’t adjust the MTU, packets larger than the adjusted MTU may get fragmented or dropped.
- MTU and Path MTU Discovery in VPNs
- VPNs often create tunnels that span multiple networks, and the path MTU between the two ends of the tunnel can be smaller than what would be used on a direct connection. Path MTU discovery helps VPNs avoid fragmentation, but some networks block ICMP messages, which are essential for this discovery.
- If ICMP messages like "Fragmentation Needed" are blocked, the VPN tunnel may send packets that are too large for an intermediate network, causing packet loss or retransmissions.
- Fragmentation Issues
- When an MTU mismatch occurs, the VPN will either fragment the packets at the network level or, if fragmentation is not allowed (DF, or "Don’t Fragment" bit is set), drop the packets. Fragmentation introduces additional latency, lowers throughput, and can cause packet loss.
- VPNs often have a lower effective MTU (e.g., 1400 bytes instead of 1500), which accounts for the added headers and prevents fragmentation.
- Adjusting MTU for VPN Connections
- Most VPN clients or routers allow the user to adjust the MTU size to avoid fragmentation. For example, reducing the MTU size on a VPN tunnel to 1400 or 1350 bytes is common to account for VPN overhead.
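As an illustration only (the interface name eth0 is an assumption and should be replaced by the interface that carries your tunnel traffic), the MTU can be lowered on a Linux instance like this:
sudo ip link set dev eth0 mtu 1400
ip link show eth0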
Instance Network Speeds
Within OCI, the speed of the network adapter (vNIC) of your instance is bound to the instance shape and the number of OCPUs you have assigned to that shape. In this tutorial, I am using E4.Flex shapes with an Oracle Linux 8 image and 1 OCPU. This means I will get a (maximum) network bandwidth of 1 Gbps (for all my iPerf test results).
Below I have provided an example of one of my instances.
- Notice that the shape is E4.Flex.
- Notice that the OCPU count is 1.
- Notice that the network bandwidth is 1 Gbps.
It is possible to increase the network bandwidth by choosing another shape or by increasing the number of OCPUs.
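For a flexible shape, one hedged way to do this is with the OCI CLI (the OCID is a placeholder, and changing the shape configuration typically restarts the instance; check the current CLI reference for your tenancy):
oci compute instance update --instance-id <instance_OCID> --shape-config '{"ocpus": 2}'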
STEP 01: Review the OCI Hub and Spoke VCN Routing architecture
We will use the following architecture below for all the iPerf throughput tests throughout this tutorial.
Notice that this is a full hub and spoke routing architecture with ON-PREM connected with an IPSec VPN tunnel. If you want to recreate this routing topology please read the following tutorials:
- [Route Hub and Spoke VCN with pfSense Firewall in the Hub VCN]
- [Connect On-premises to OCI using an IPSec VPN with Hub and Spoke VCN Routing Architecture]
STEP 02: Install iPerf3 on the Hub Instances
Before we can use iPerf we need to make sure iPerf is installed. We will assume that iPerf is not installed.
In this step we will install iPerf3, and we will install iPerf2 in the next step.
Hub Stepstone
The Hub Stepstone is a Windows Server Instance. There are different iPerf distributions available for [Windows], and I have downloaded this one [here].
Download the zip file and unpack the file on the Hub Stepstone.
- Browse to the directory where you have unpacked the iPerf zip file.
- Verify if the unpacked folder is available.
- Notice another iPerf folder is there.
- Change the directory to go one level deeper in the folder.
- Verify what files are inside the iPerf folder.
- Notice the iPerf.exe file that we need to perform the actual tests.
- Execute the iPerf.exe command just to see if it works.
pfSense Firewall
To install iPerf on the pfSense we need to install a package through the Package Manager.
- Browse to System Menu.
- Select Package Manager.
- Click on Available Packages.
- Type in the keyword "iPerf".
- Click on the Search button.
- Notice that there will be one result and this is the iPerf package version 3.0.3 (at the time of writing).
- Click on the +Install button.
- Click on the Confirm button.
- Notice that the number of packages installed is 2.
- Browse to Diagnostics Menu.
- Select iPerf.
- Click on the Client tab.
- Click on the Server tab.
The pfSense firewall does not have the option (by default) to install the iPerf version 2 packages.
STEP 03: Install iPerf3 on the Spoke Instances
Now we are going to install iPerf3 on the Linux Instances (inside OCI) we have in our architecture.
Spoke Instance A1 and A2
The instance A1 already has iPerf3 installed.
- Connect to the Instance A1.
- Issue the following command: sudo dnf install iperf3
- Notice that iPerf3 is already installed.
- Issue the command iperf3 -v to verify the iPerf version that is installed.
The instance A2 does not have iPerf3 installed.
- Connect to the Instance A2.
- Issue the following command: sudo dnf install iperf3
- Type in "Y".
- iPerf3 will install and notice that the installation has been completed.
Spoke Instance B
- Connect to the B Instance.
- Issue the command to install iPerf3 (provided in the previous section) and, if required, complete the installation. If iPerf3 is already available, you will get a message that it is already installed.
Spoke Instance C
- Connect to the C Instance.
- Issue the command to install iPerf3 (provided in the previous section) and, if required, complete the installation. If iPerf3 is already available, you will get a message that it is already installed.
Instance D
- Connect to the D Instance.
- Issue the command to install iPerf3 (provided in the previous section) and, if required, complete the installation. If iPerf3 is already available, you will get a message that it is already installed.
STEP 04: Install iPerf3 on the ONPREM Instances
Now we are going to install iPerf3 on the Linux Instances (ONPREM) we have in our architecture.
Oracle Linux Client
- Connect to the ON-PREM Linux Client Instance.
- Issue the command to install iPerf3 (provided in the previous section) and, if required, complete the installation. If iPerf3 is already available, you will get a message that it is already installed.
Oracle Linux Client CPE
- Connect to the ON-PREM Linux CPE Instance.
- Issue the command to install iPerf3 (provided in the previous section) and, if required, complete the installation. If iPerf3 is already available, you will get a message that it is already installed.
STEP 05: Install iPerf2
Now that we have installed iPerf3, we are going to install iPerf2 on ALL the Linux Instances throughout the architecture.
We are using Oracle Linux 8 so we will need the following iPerf 2 package: [Oracle Linux 8 (x86_64) EPEL]
If you are using Oracle Linux 9, use this package: [Oracle Linux 9 (x86_64) EPEL]
When you use another OS or Linux distribution use a package that is compiled for your OS.
- Use the command below to install iPerf2 on all Oracle Linux 8 Instances.
sudo dnf install https://yum.oracle.com/repo/OracleLinux/OL8/developer/EPEL/x86_64/getPackage/iperf-2.1.6-2.el8.x86_64.rpm
- Confirm the installation with Y.
- iPerf2 will install and notice that the installation has been completed.
- Issue the command iperf -v to verify the iPerf version that is installed.
- Notice that iPerf v2.1.6 is installed.
Make sure you install iPerf2 on all other Instances as well.
For the Windows-based Hub Stepstone, we can download a standalone [iPerf-2.2.n-win64] executable.
- Execute the iPerf.exe command just to see if it works.
- Issue the command iPerf -v to verify the iPerf version that is installed.
- Notice that iPerf v 2.2.n is installed.
STEP 06: Define the iPerf Tests and prepare the iPerf commands
Below I will provide some iPerf commands with the additional flags and explain what they mean. Some more information on the commands can be found here: [Oracle Network Performance documentation].
Basic iPerf commands for testing with TCP:
On the iPerf server side:
iperf3 -s
On the iPerf client side:
iperf3 -c <server_instance_private_ip_address>
iPerf commands that we will use for testing with TCP:
- Bi-directional bandwidth measurement: (-r argument, an iPerf2 option)
- TCP Window size: (-w argument)
On the iPerf server side:
iperf -s -w 4000
On the iPerf client side:
iperf -c <server_instance_private_ip_address> -r -w 2000
iperf -c <server_instance_private_ip_address> -r -w 4000
iPerf commands that we will use for testing with UDP:
- UDP tests: (-u), bandwidth settings (-b)
On the iPerf server side:
iperf -s -u -i 1
On the iPerf client side:
iperf -c <server_instance_private_ip_address> -u -b 10m
iperf -c <server_instance_private_ip_address> -u -b 100m
iperf -c <server_instance_private_ip_address> -u -b 1000m
iperf -c <server_instance_private_ip_address> -u -b 10000m
iperf -c <server_instance_private_ip_address> -u -b 100000m
iPerf commands that we will use for testing with TCP (with MSS):
- Maximum Segment Size (-m argument) display:
On the iPerf server side:
iperf -s
On the iPerf client side:
iperf -c <server_instance_private_ip_address> -m
iPerf commands that we will use for testing with TCP (parallel streams):
On the iPerf server side:
iperf -s
On the iPerf client side:
iperf -c <server_instance_private_ip_address> -P 2
For all of the tests we will perform in this tutorial we will use the commands below.
iPerf FINAL command for testing
- Bandwidth settings (-b)
- Parallel tests (-P argument):
To test the throughput of a connection of up to 100 Gbps, we set the bandwidth to 9 Gbps per stream and use 11 parallel streams (11 x 9 Gbps = 99 Gbps, roughly 100 Gbps).
On the iPerf server side:
iperf -s
On the iPerf client side:
iperf -c <server_instance_private_ip_address> -b 9G -P 11
STEP 07: Perform iPerf tests within the same VCN in the same subnet
During this step, we are going to perform an iPerf2 throughput test within the same VCN and the same subnet. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From Instance-A1 to Instance-A2
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.1.50 |
IP of the iPerf client | 172.16.1.93 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.1.50 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.05 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From Instance-A2 to Instance-A1
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.1.93 |
IP of the iPerf client | 172.16.1.50 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.1.93 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.05 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 08: Perform iPerf tests within the same VCN across different subnets
During this step, we are going to perform an iPerf3 throughput test within the same VCN but across two different subnets. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From pfSense Firewall to hub Stepstone
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.0.252 |
IP of the iPerf client | 172.16.0.20 |
iPerf command on the server | iperf3 -s |
iPerf command on the client | iperf3 -c 172.16.0.252 |
Tested Bandwidth (SUM) | 958 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From hub Stepstone to pfSense Firewall
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.0.20 |
IP of the iPerf client | 172.16.0.252 |
iPerf command on the server | iperf3 -s |
iPerf command on the client | iperf3 -c 172.16.0.20 |
Tested Bandwidth (SUM) | 1.01 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 09: Perform iPerf tests between two different VCNs
During this step, we are going to perform an iPerf2 throughput test between two different VCNs and two different subnets. Note that the test traffic will go through a firewall located in the Hub VCN. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From Instance-A1 to Instance-B
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.2.88 |
IP of the iPerf client | 172.16.1.93 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.2.88 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.02 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From Instance-B to Instance-A1
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.1.93 |
IP of the iPerf client | 172.16.2.99 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.1.93 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.02 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 10: Perform iPerf tests between different VCNs (bypassing the pfSense Firewall)
During this step, we are going to perform an iPerf2 throughput test between two different VCNs and two different subnets. Note that the test traffic will bypass the firewall located in the Hub VCN. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From Instance-C to Instance-D
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.4.14 |
IP of the iPerf client | 172.16.3.63 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.4.14 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.04 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From Instance-D to Instance-C
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.3.63 |
IP of the iPerf client | 172.16.4.14 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.3.63 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.05 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 11: Perform iPerf tests between ONPREM and OCI Hub VCN
During this step, we are going to perform an iPerf2 throughput test between ONPREM and OCI using a Site-to-Site IPSec VPN tunnel. Note that the test traffic will go through the firewall located in the Hub VCN. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
When you perform throughput tests (with or without iPerf) across an IPSec VPN tunnel and a pfSense firewall, MTU and MSS are important factors to take into account. When they are configured incorrectly, the throughput results will be invalid and lower than expected.
With iPerf you can tweak the packet stream so that the packets are sent with a specific MSS. You can use this if you are not able to change the MSS settings on the devices in the path between your source and destination.
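A hedged example of this, using iPerf2 on the client (the 1300-byte value is only illustrative and should match whatever MSS your tunnel can carry; the -m flag prints the MSS that was actually used):
iperf -c <server_instance_private_ip_address> -M 1300 -m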
Maximum Segment Size Clamping
In my case, the ONPREM side had an MTU of 9000 and was sending packets with an MSS based on 1500 bytes plus the IPSec overhead.
The pfSense interface MTU is 1500, which caused fragmentation issues.
By setting the interface MSS to 1300, the segment size is adjusted "on the fly"; this technique is called "Maximum Segment Size Clamping". More information about this is provided at the beginning of this tutorial.
MSS Change on the pfSense
From VPN Client Instance (ONPREM) to Hub Stepstone
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.0.252 |
IP of the iPerf client | 10.222.10.19 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.0.252 -b 9G -P 5 |
Tested Bandwidth (SUM) | 581 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From Hub Stepstone to VPN Client Instance (ONPREM)
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 10.222.10.19 |
IP of the iPerf client | 172.16.0.252 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 10.222.10.19 -b 9G -P 5 |
Tested Bandwidth (SUM) | 732 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 12: Perform iPerf tests between ONPREM and OCI Spoke VCN
During this step, we are going to perform an iPerf2 throughput test between ONPREM and OCI using a Site-to-Site IPSec VPN tunnel. Note that the test traffic will go through the firewall located in the Hub VCN. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From VPN Client Instance (ONPREM) to Instance-A1
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.1.93 |
IP of the iPerf client | 10.222.10.19 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.1.93 -b 9G -P 5 |
Tested Bandwidth (SUM) | 501 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
NEW TESTS WITH MSS IN THE iPerf COMMAND
With iPerf you can tweak the packet stream so that the packets are sent with a specific MSS. You can use the following commands if you are not able to change the MSS settings on the devices in the path between your source and destination.
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.1.93 |
IP of the iPerf client | 10.222.10.19 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.1.93 -b 9G -P 5 -M 1200 |
Tested Bandwidth (SUM) | 580 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From Instance-A1 to VPN Client Instance (ONPREM)
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 10.222.10.19 |
IP of the iPerf client | 172.16.1.93 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 10.222.10.19 -b 9G -P 5 |
Tested Bandwidth (SUM) | 620 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
NEW TESTS WITH MSS IN THE iPerf COMMAND
With iPerf you can tweak the packet stream so that the packets are sent with a specific MSS. You can use the following commands if you are not able to change the MSS settings on the devices in the path between your source and destination.
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 10.222.10.19 |
IP of the iPerf client | 172.16.1.93 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 10.222.10.19 -b 9G -P 5 -M 1200 |
Tested Bandwidth (SUM) | 805 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 13: Perform iPerf tests between ONPREM and OCI Spoke VCN (bypassing the pfSense Firewall)
During this step, we are going to perform an iPerf2 throughput test between ONPREM and OCI using a Site-to-Site IPSec VPN tunnel. Note that the test traffic will bypass the firewall located in the Hub VCN. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From VPN Client Instance (ONPREM) to Instance-D
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 172.16.4.14 |
IP of the iPerf client | 10.222.10.19 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 172.16.4.14 -b 9G -P 5 |
Tested Bandwidth (SUM) | 580 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From Instance-D to VPN Client Instance (ONPREM)
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 10.222.10.19 |
IP of the iPerf client | 172.16.4.14 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 10.222.10.19 -b 9G -P 5 |
Tested Bandwidth (SUM) | 891 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 14: Perform iPerf tests between the INTERNET and the OCI Hub VCN
During this step, we are going to perform an iPerf2 throughput test between a client on the internet and OCI over the public internet. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From Internet to Hub Stepstone
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | xxx.xxx.xxx.178 |
IP of the iPerf client | xxx.xxx.xxx.152 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c xxx.xxx.xxx.178 -b 9G -P 5 |
Tested Bandwidth (SUM) | 251 Mbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
STEP 15: Perform iPerf tests within the same subnet ONPREM
During this step, we are going to perform an iPerf2 throughput test between two ONPREM instances. The image below shows, with arrows, the path between the two endpoints where we will perform the throughput tests.
From VPN Client Instance (ONPREM) to StrongSwan CPE Instance (ONPREM)
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 10.222.10.70 |
IP of the iPerf client | 10.222.10.19 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 10.222.10.70 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.05 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
From StrongSwan CPE Instance (ONPREM) to VPN Client Instance (ONPREM)
In the table below you will find the IP address of the client and the server (used in this test), and the commands used to perform the iPerf test with the test results.
Item | Value |
---|---|
IP of the iPerf server | 10.222.10.19 |
IP of the iPerf client | 10.222.10.70 |
iPerf command on the server | iperf -s |
iPerf command on the client | iperf -c 10.222.10.19 -b 9G -P 5 |
Tested Bandwidth (SUM) | 1.05 Gbits/sec |
In the next screenshots, you will also find the full testing outputs of the iPerf tests.
Conclusion
In this tutorial, we have performed different types of throughput tests using iPerf2 and iPerf3. The tests were performed on various sources and destinations across the full network architecture, using different paths.
In the table below you can see a summary of the test results that we collected.
Test | Bandwidth Result | Path |
---|---|---|
From Instance-A1 to Instance-A2 | 1.05 Gbits/sec | OCI internal |
From Instance-A2 to Instance-A1 | 1.05 Gbits/sec | OCI internal |
From pfSense Firewall to hub Stepstone | 958 Mbits/sec | OCI internal |
From hub Stepstone to pfSense Firewall | 1.01 Gbits/sec | OCI internal |
From Instance-A1 to Instance-B | 1.02 Gbits/sec | OCI internal |
From Instance-B to Instance-A1 | 1.02 Gbits/sec | OCI internal |
From Instance-C to Instance-D | 1.04 Gbits/sec | OCI internal |
From Instance-D to Instance-C | 1.05 Gbits/sec | OCI internal |
From VPN Client Instance (ONPREM) to Hub Stepstone | 581 Mbits/sec | ONPREM to OCI through firewall |
From Hub Stepstone to VPN Client Instance (ONPREM) | 732 Mbits/sec | ONPREM to OCI through firewall |
From VPN Client Instance (ONPREM) to Instance-A1 | 501 Mbits/sec | ONPREM to OCI through firewall |
From Instance-A1 to VPN Client Instance (ONPREM) | 620 Mbits/sec | ONPREM to OCI through firewall |
From VPN Client Instance (ONPREM) to Instance-D | 580 Mbits/sec | ONPREM to OCI firewall bypass |
From Instance-D to VPN Client Instance (ONPREM) | 891 Mbits/sec | ONPREM to OCI firewall bypass |
From Internet to Hub Stepstone | 251 Mbits/sec | INTERNET to OCI |
From VPN Client Instance (ONPREM) to StrongSwan CPE Instance (ONPREM) | 1.05 Gbits/sec | ONPREM to ONPREM |
From StrongSwan CPE Instance (ONPREM) to VPN Client Instance (ONPREM) | 1.05 Gbits/sec | ONPREM to ONPREM |