Measuring Network Throughput Using iperf3
Hello, I am incompetent.
Introduction
I once tried collecting metric data with Elasticsearch and visualizing it with Kibana, but it wasn't light enough on resources for home use. Even though it's a distributed system, the sheer amount of resources it consumed was too much for both my mind and my weak home server, so I'm looking for other methods.
Watching the machine trigger the OOM Killer was truly pitiful.
I already have alternative methods in mind, but since all I really need is to send metrics, I'll first think about what data should be sent in the first place.
First, let's start with the lower layers before the application layer.
Install iperf3
This time, I will work with my ThinkPad X1C@Artix Linux as the sender and my home server@Devuan as the receiver.
ThinkPad X1C@Artix Linux
$ sudo pacman -S iperf3
Home Server@Devuan
$ sudo apt install iperf3
$ sudo ufw allow 5201/tcp
$ sudo ufw reload
When installing the iperf3 apt package, a TUI prompt appears asking whether to start it as a daemon; this time I chose n.
Now the preparations are complete.
Measurement
Listen on the server side.
Home Server@Devuan
$ iperf3 -s
Send from the client side. It's a Wi-Fi environment with a wall in between, so please bear with me... Then again, maybe getting low-quality data actually serves the purpose here?
ThinkPad X1C@Artix Linux
$ iperf3 -c DevuanSrvIP
Connecting to host DevuanSrvIP, port 5201
[ 5] local 192.168.10.118 port 41000 connected to DevuanSrvIP port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 10.8 MBytes 90.1 Mbits/sec 0 406 KBytes
[ 5] 1.00-2.00 sec 9.00 MBytes 75.5 Mbits/sec 0 484 KBytes
[ 5] 2.00-3.00 sec 9.38 MBytes 78.6 Mbits/sec 1 485 KBytes
[ 5] 3.00-4.00 sec 7.25 MBytes 60.8 Mbits/sec 0 167 KBytes
[ 5] 4.00-5.00 sec 8.25 MBytes 69.2 Mbits/sec 0 494 KBytes
[ 5] 5.00-6.00 sec 8.62 MBytes 72.3 Mbits/sec 0 496 KBytes
[ 5] 6.00-7.00 sec 9.38 MBytes 78.6 Mbits/sec 1 499 KBytes
[ 5] 7.00-8.00 sec 9.50 MBytes 79.7 Mbits/sec 0 501 KBytes
[ 5] 8.00-9.00 sec 7.62 MBytes 64.0 Mbits/sec 1 505 KBytes
[ 5] 9.00-10.00 sec 7.62 MBytes 63.9 Mbits/sec 0 508 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 87.4 MBytes 73.3 Mbits/sec 3 sender
[ 5] 0.00-10.02 sec 85.3 MBytes 71.5 Mbits/sec receiver
iperf Done.
Even from this alone, network quality can be evaluated: Retr is the number of TCP retransmissions and Cwnd is the congestion window size, and the variation across intervals in the other columns also looks usable for judging quality.
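As a side note on units: the summary numbers are self-consistent if you take Transfer as binary MBytes (1 MByte = 1,048,576 bytes) and Bitrate as decimal Mbits/sec, which is how iperf3 reports them. A quick awk sanity check against the sender summary above:

```shell
# 87.4 MBytes transferred in 10 seconds:
# 87.4 * 1024 * 1024 bytes * 8 bits/byte / 10 s / 1e6 = Mbits/sec
awk 'BEGIN { printf "%.1f\n", 87.4 * 1024 * 1024 * 8 / 10 / 1e6 }'
# prints 73.3, matching the reported sender Bitrate
```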
For a simple approach, extract only the sender summary line:
$ iperf3 -c DevuanSrvIP | grep "sender"
[ 5] 0.00-10.00 sec 83.8 MBytes 70.2 Mbits/sec 1 sender
From here, it should be possible to pipe it to awk and send just the necessary fields as metric data.
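A minimal sketch of that idea. The field position and the metric name are assumptions based on the output format shown above; here the iperf3 output is stubbed with a sample line so the pipeline can be tried without a live server — on a real run you would pipe iperf3 itself.

```shell
# Stubbed sender summary line (hypothetical sample values).
sample='[  5]   0.00-10.00  sec  83.8 MBytes  70.2 Mbits/sec    1             sender'

# With awk's default whitespace splitting, field 7 of the sender line
# is the average bitrate in Mbits/sec.
bitrate=$(printf '%s\n' "$sample" | awk '/sender/ { print $7 }')

# Emit a timestamped metric line, ready to be shipped somewhere.
echo "$(date +%s) net.throughput.mbits $bitrate"
```

iperf3 also has a -J option that emits the whole result as JSON, which is more robust to parse than positional awk fields.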
Conclusion
Another thing I'm curious about is disk I/O metric data, which seems interesting, but regularly hammering the disk with throughput benchmarks would only be a nuisance, so I feel it's better to quietly collect a modest amount of information. It's not something I usually pay much attention to, but it would be interesting to see when I/O waits spike unusually often.
I wonder how much application metric data should be collected? The scope that needs to be monitored feels quite broad.
Each piece of software has its own design philosophy, so it's a matter of unraveling that to see how much it can output. The usual idea seems to be to install modules in each application and have them emit data, but it might be more flexible to primarily collect data that can be gathered from the OS side alone.
Or rather, since applications ultimately issue system calls, it feels like it should be possible to trace those somehow, but that's another story...
By the way, Elasticsearch, which I've installed for various reasons in the past, is quite demanding unless you're in an environment where you can say "use as many resources as you want!" If I ever get such an environment, I'll try it again.
That said, OSS that incorporates in-memory DB elements like this is excellent, perhaps too excellent. If I'm going to use it, I need to judge properly whether it's worth the purpose and the cost before introducing it; otherwise I'd just become a guy watching his RAM get devoured, so I'll be careful.
See you next time. Thank you.