How to Measure Network Throughput: iperf3 + MTU + NIC Offload Settings

Measuring network throughput involves no guesswork or theoretical limits: it is the actual measurement of how much data your network transfers under real conditions. It reflects how efficiently data moves between systems over a defined period.
High throughput means data transfers quickly, which is essential for data-intensive operations and directly improves user experience and productivity.
At ServerMania, our infrastructure is built for consistent high network performance across demanding workloads. From 10Gbps Dedicated Servers to 25Gbps Dedicated Servers and Unmetered Dedicated Servers, every server runs on optimized hardware and network architecture designed for maximum throughput, low latency, and reliable connections at scale.
This guide shows how to measure real network throughput (actively or passively), identify bottlenecks, and optimize your network for consistent high performance.
What Is Real Throughput?
Throughput is the actual amount of traffic flowing from a specific source to a specific destination at a specific point in time, measured in packets per second, bytes per second, or bits per second. When you measure network throughput, you are evaluating how your network, hardware, and configuration perform under real conditions. This includes network latency, packet loss, processing overhead, MTU size, TCP receive window, and NIC offload settings.
Latency is a common performance problem that reduces throughput at the higher layers of the OSI model, since the underlying infrastructure can only move data so fast. Configuration issues, such as duplex mismatches, can also hurt throughput, leading to significantly reduced network speeds.
See Also: How to Choose Server Bandwidth
Bandwidth Vs. Throughput
Before we start, it’s crucial to understand the difference between bandwidth and throughput. Bandwidth refers to the theoretical maximum amount of traffic that a given link can support, while throughput is the actual amount of data successfully transmitted over that link at a given time. Throughput is measured in bits per second (bps), while bandwidth is often described in terms of the maximum capacity of the link, which can lead to confusion if not properly understood.
See Also: How to Test Server Network Speed

How to Measure Network Throughput
To measure network throughput, we need to focus on the actual data transfer, not the theoretical limits that bandwidth represents. Many tools can mislead you by reporting the maximum amount of traffic a link can handle, when what we actually want to capture is the amount of data that passes through it.
We’ll focus on measuring the actual amount of data moving across connections to a specific destination, accounting for packet loss, network latency, and processing overhead. This way, we get a clear insight into real throughput, as two systems with the same bandwidth can produce different throughput results.
1. Define the Measurements
Now that we’ve set clear boundaries between bandwidth and throughput, we need to define all metrics and measurements that impact network performance:
- Throughput: The primary metric, which is the amount of data successfully delivered between a client and a server for a specific period.
- Packet Loss: The number or percentage of network packets that failed to transmit during the time we’re performing the measurement.
- Network Latency: The delay between sending packets and receiving a response, during a data transfer between server and client.
- MTU Size: The maximum packet size in bytes. An MTU that is too large for the path causes fragmentation, and a mismatched MTU between endpoints increases overhead.
- TCP Window: Controls how much data can be received before acknowledgment. A small TCP receive window limits throughput on high-speed connections.
By performing precise measurements of each of these metrics, we can easily diagnose a network issue and tackle the root cause in any environment.
2. Choose a Testing Method
To measure real throughput, you need the correct tool. There are many methods: some provide only surface-level metrics that won’t do, while others dig deep into your network infrastructure to reveal the true state of your network performance (no guesswork).
Active vs Passive Testing:
Active testing measures throughput by generating synthetic traffic from the server to a destination, giving you control over every aspect: packet size, MTU size, number of parallel streams, and more. That control makes it an excellent way to measure speed and performance.
In contrast, passive testing measures your actual traffic, more like an observation of what you already have going on, and captures real-world performance.
- Use active testing when you want controlled, repeatable measurements
- Use passive testing when you want to measure real-world throughput from existing traffic
We’re going to show you how to perform both. However, first let’s establish a reliable test environment.
3. Set Up Your Test Environment
To begin with, you’ll need a test environment with no interference. This means two separate machines; testing against the same server over loopback won’t produce meaningful results. The easiest setup is your dedicated or cloud server in a data center on one end and your home machine on the other.
Then you need to install a testing tool on your server. We’re going to be using iperf3.
Install iperf3 on Linux (Ubuntu / Debian)
sudo apt update
sudo apt install iperf3 -y
After installation, verify it:
iperf3 --version
Then install iperf3 on the destination machine as well; in many cases, that would be the home PC you’ll be using for the test. If you’re on Windows, visit the official iperf3 download page, and if you’re on macOS, use Homebrew:
brew install iperf3
By now, you should have iperf3 installed on your data center server and on the destination machine, ready to begin testing your network throughput.
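One thing worth verifying before you test: iperf3 listens on TCP port 5201 by default, so that port must be reachable on the server. Assuming your server uses ufw (adjust for your firewall), opening it looks like this:
sudo ufw allow 5201/tcp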
4. Run iperf3 for Accurate Results
iperf3 performs automated network benchmarking between nodes by sending data packets and measuring the bandwidth achieved during the test. We’ll generate controlled traffic between your data center machine and the home client to isolate variables and get repeatable throughput measurements.
Start by launching iperf3 in server mode on your remote server:
iperf3 -s
Then, from your home machine (the client), connect to the specific destination:
iperf3 -c <server_ip>
This is the baseline network throughput test using default settings. It shows how much data moves across the network over a predefined time period.
Taking this one step further, to get accurate insights, adjust the key parameters. You can use parallel streams to simulate a real traffic load:
iperf3 -c <server_ip> -P 4
You can also increase the block size (-l) to test with larger writes:
iperf3 -c <server_ip> -l 64K
In addition, you can increase the length of the testing period:
iperf3 -c <server_ip> -t 30
Once the test completes, iperf3 returns detailed output with data transferred, bandwidth, retransmissions, and other key metrics.
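The example output below also includes a UDP run and a reverse-mode run. Assuming a 1 Gbps target rate for UDP, those tests look like this:
iperf3 -c <server_ip> -u -b 1G
iperf3 -c <server_ip> -R
Here’s an example output: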
| Test Type: | Transfer: | Bandwidth: | Retransmits: | Notes: |
|---|---|---|---|---|
| Default TCP | 1.10 GB | 940 Mbps | 2 | Network throughput is stable, staying close to the link capacity. |
| Parallel Streams (4) | 1.25 GB | 1,020 Mbps | 5 | Much improved throughput with higher traffic load. |
| Large Packet Size | 1.30 GB | 1,050 Mbps | 3 | We can observe better efficiency with larger packets. |
| UDP Test (1 Gbps) | 1.00 GB | 980 Mbps | N/A | Shows significant packet loss and network latency impact. |
| Reverse Mode | 1.05 GB | 900 Mbps | 8 | Much lower throughput in the return direction. |
Here’s how to read the results accurately:
- Transfer (Bytes) shows the total amount of data sent during the test
- Bandwidth (Mbps) reflects the actual throughput achieved during the test
- Retransmissions indicate packet loss or instability in the connection
Tip: Run multiple variations and compare outputs. This will help you identify network limits, performance drops, and where to focus your network optimization efforts.
5. Run a Passive Throughput Test
In contrast to active testing with synthetic traffic, passive testing measures real network throughput by capturing metrics from existing traffic. If you don’t have real traffic flowing yet, feel free to skip this step and continue reading.
This method shows how your network performs under normal operations, including real applications, user behavior, and mixed connections. It gives a more accurate view of overall network performance, especially in production environments where customers rely on consistent network speed and efficiency.
First, let’s take a look at the most popular tools you can use:
| Tool: | Type: | Primary Purpose: |
|---|---|---|
| Wireshark | Packet Analyzer | Packets and data transfer by capturing real traffic and analyzing each layer. |
| NetFlow | Flow Monitoring | Tracks traffic patterns across network devices between endpoints over a time period. |
| sFlow | Sampling Tool | Uses sample packets instead of capturing all the traffic, reducing overhead. |
| iftop / nload | Interface Monitoring | Quick view of throughput per connection by reading stats from the OS. |
| Prometheus + Grafana | Monitoring Stack | Long-term network performance tracking that visualizes traffic, bandwidth, and throughput. |
Wireshark Example
Wireshark is the most commonly used tool for passively measuring throughput, as it offers full visibility into actual network traffic patterns.
Step 1: Install Wireshark
First, download and install Wireshark on your system.
sudo apt update
sudo apt install wireshark -y
Important: During installation, choose Yes when asked to allow non-root users to capture packets.
Step 2: Select Interface
The next step is to identify your active network interface:
ip a
Look for interfaces like eth0, ens3, or wlan0. Choose the one handling your traffic.
Start capturing packets with tshark:
sudo tshark -i <interface>
Note: Don’t forget to replace <interface> with your actual interface name.
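If you only care about a specific service, a capture filter (standard pcap syntax) keeps the capture focused; port 443 here is just an example:
sudo tshark -i <interface> -f "tcp port 443"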
Step 3: Capture Traffic
Allow the capture to run for a period of time while normal data transfers happen across your network.
You can generate traffic manually:
- Download a file from the web
- Transfer data between systems
- Access web services or APIs
Or simply run tshark with a time limit:
sudo tshark -i <interface> -a duration:60 -w capture.pcap
Step 4: Analyze Throughput
After capturing, analyze the data to calculate throughput.
Use tshark to summarize total bytes:
tshark -r capture.pcap -q -z io,stat,1
Example Results:
| Metric: | Value Example: | What It Means: |
|---|---|---|
| Average Throughput | 120 Mbps | Typical data transfer rate during the session |
| Peak Throughput | 350 Mbps | Maximum traffic observed in a short burst |
| Packet Loss | 0.5% | Minor loss that might be affecting efficiency |
| Network Latency | 25 ms | Delay between the source and target devices |
| Top Connection | 80 Mbps | Highest throughput from a single client point |
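To see which single connection carried the most traffic (the Top Connection row above), tshark’s built-in conversation statistics print a per-connection breakdown of frames and bytes in each direction:
tshark -r capture.pcap -q -z conv,tcp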
Network throughput can be affected by various factors, including congestion, latency, and packet loss, which can lead to significant performance issues if not monitored and managed effectively.
That’s how to measure real network throughput by testing passively. However, there are ways to quickly optimize key network settings, re-run the tests, and observe immediate improvements, so continue reading.
See Also: Best Server Monitoring Tools
6. MTU, TCP & NIC Optimization
Raw throughput depends heavily on how your network manages packets, offloading, and buffering. Even with the best network hardware and a high port speed, misconfiguration at the MTU, TCP, and NIC levels can cause packet loss, lower throughput, and extra overhead, all of which can be easily optimized.
- MTU (Maximum Transmission Unit): Defines the packet size used during transmission. An MTU that is too large for the path causes fragmentation, and a mismatched MTU reduces efficiency.
- TCP Settings: Control how data flows between client and server. Parameters like TCP receive window impact how much data transfer happens before acknowledgment.
- NIC Offload Settings: Allow network devices to offload processing tasks from the CPU to the network card. Proper tuning reduces processing overhead and improves throughput.
So, optimizing these three areas improves efficient data transfer and stabilizes network performance across all connections. Let’s go through each configuration:
6.1 Optimize MTU Size
The first thing to do here is find the optimal MTU size for your specific network path. Test with ping using the do-not-fragment flag, so oversized packets fail instead of being silently fragmented:
ping -M do -s 1472 <destination_ip>
If the test fails, decrease the payload size until it succeeds, then add 28 bytes (20 for the IP header plus 8 for the ICMP header) to the largest successful payload to get the path MTU. Then apply the new value:
sudo ip link set dev <interface> mtu <value>
In short, a consistent MTU size minimizes the chance of packets being dropped for size reasons, and therefore increases the consistency of real throughput. Conversely, an MTU set higher than the path supports leads to fragmentation and dropped packets.
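If you’d rather not step through payload sizes by hand, a minimal shell sketch automates the search (the size list and <destination_ip> are placeholders; adjust for your path):
# Step the payload down until ping succeeds, then add 28 bytes
# (20-byte IP header + 8-byte ICMP header) to get the path MTU.
for size in 1472 1464 1436 1400; do
  if ping -M do -c 1 -s "$size" <destination_ip> > /dev/null 2>&1; then
    echo "Path MTU: $((size + 28))"
    break
  fi
done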
6.2 Optimize TCP Settings
The TCP settings control how much data is transferred before acknowledgment. If buffers are too small, the network cannot sustain high throughput, especially over long-distance connections with higher latency.
You can check the current values by running the following command:
sysctl net.ipv4.tcp_rmem
sysctl net.ipv4.tcp_wmem
Then, you can increase the buffer limits and enable window scaling:
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 33554432"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 33554432"
sudo sysctl -w net.ipv4.tcp_window_scaling=1
These TCP changes allow much more data in flight, which reduces idle time between acknowledgements and generally boosts throughput on high-speed networks.
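Note that sysctl -w changes are runtime-only and reset on reboot. To make them persistent, one common approach is a drop-in configuration file (the filename below is illustrative):
# /etc/sysctl.d/99-throughput.conf
net.ipv4.tcp_rmem = 4096 87380 33554432
net.ipv4.tcp_wmem = 4096 65536 33554432
net.ipv4.tcp_window_scaling = 1
Then reload all sysctl files with:
sudo sysctl --system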
6.3 Optimize NIC Offloading
NIC offloading shifts packet-processing work from the processor (CPU) to your network card. This greatly reduces processing overhead during heavy traffic, but in some cases it can introduce instability or inaccurate throughput readings.
First, you must check the current settings:
ethtool -k <interface>
Disable offloading features for testing:
sudo ethtool -K <interface> tso off gso off gro off
Or enable them:
sudo ethtool -K <interface> tso on gso on gro on
Test both configurations while measuring throughput; we recommend an active test after each change. Some systems achieve better performance with offloading enabled, while others benefit from disabling it, depending on hardware and workload.
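To compare the two modes fairly, a simple A/B sketch works well (eth0 and the output filenames are placeholders):
# Measure with offloads enabled, then disabled, using identical tests
sudo ethtool -K eth0 tso on gso on gro on
iperf3 -c <server_ip> -t 30 > offload_on.txt
sudo ethtool -K eth0 tso off gso off gro off
iperf3 -c <server_ip> -t 30 > offload_off.txt
Compare the bandwidth lines in both files to see which configuration wins on your hardware.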
7. Identify Common Bottlenecks
Bottleneck analysis determines your overall throughput by identifying the slowest link or device in the path. The quality of hardware, such as routers and switches, can influence throughput; outdated equipment may slow down network performance.
Here are the most common throughput bottlenecks:
- High Latency: Network latency is the delay between sending packets and receiving responses. High latency usually points to excessive physical distance, routing complexity, or internal device delays.
- Packet Loss: Lost packets are units of data that never reach the destination. Loss mainly happens due to congestion, defective hardware, or an unreachable destination server.
- Congestion: Congestion occurs when too much traffic flows through the same network path. It stems from limited bandwidth, peak usage periods, or too many active connections.
- HW Limits: Hardware bottlenecks occur when CPUs, NICs, or network devices cannot process incoming traffic fast enough. It happens mainly from outdated equipment or limited capacity.
To identify potential bottlenecks, check each segment of your network step by step. Test all the different endpoints, monitor traffic on the router, compare wired vs. wireless networks, and review resource usage.
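To pinpoint where latency or packet loss accumulates along the path, mtr combines ping and traceroute into a single per-hop report (the install step assumes a Debian/Ubuntu system):
sudo apt install mtr -y
mtr -rwc 50 <destination_ip>
Here -r produces a report, -w uses wide hostname output, and -c 50 sends 50 probes per hop; hops with rising loss or latency mark the likely bottleneck.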
Quick Tip: Mapping your traffic flow end to end (client, switch, router, server) makes it much easier to visualize where delays and bottlenecks accumulate.
Explore: Low Latency Servers For Faster Performance
Optimize Throughput & Network Performance with ServerMania

Measuring throughput is only the first step. Increasing throughput and improving network performance come next, which will reduce operating costs and increase customer satisfaction.
Whenever you focus on increasing your throughput, you ensure faster data transfer across your internet connection and enable seamless communication between your systems. A strong infrastructure removes bottlenecks, keeps performance stable under load, and supports consistent results across all connections.
ServerMania delivers high-performance hosting through 10Gbps Dedicated Servers, 25Gbps Dedicated Servers, and Unmetered Dedicated Servers. These solutions provide the capacity and stability needed to handle high traffic, maintain efficiency, and scale network throughput.
💬 If you have questions, get in touch with our 24/7 customer support or book a free consultation with throughput experts to discuss your next project.
