# Automated VPS Speed Testing with Bash: A Practical Benchmarking Toolkit (with Real Data)
Contents:

- Why VPS Speed Testing Matters
- The Toolchain
  1. speedtest: Quick Public Internet Throughput Checks
  2. iperf3: Controlled Point-to-Point Testing
  3. mtr: Routing, Packet Loss, and Path Stability
  4. bench.sh: Fast Baseline for System and Disk Checks
- Real Test Results: Los Angeles CN2 GIA Test Case
- Interpreting Results: What's Good Enough?
- Automating Tests with Bash
- Automating Tests with Cron
- A Few Practical Benchmarking Rules
- Final Thoughts
## Why VPS Speed Testing Matters

Choosing a VPS based on CPU, RAM, and "1 Gbps port" marketing is how people end up with servers that look good on paper but feel slow in production. For real workloads, the numbers that matter are usually network latency, routing stability, packet loss, sustained throughput, and disk behavior under load.

That is especially true if your users are distributed across regions. A VPS that performs well from Los Angeles may feel completely different from Tokyo, Singapore, or mainland China. Raw bandwidth is only one part of the story; routing quality and congestion patterns often matter more.

In this article, I'll walk through a practical VPS benchmarking workflow using a few simple tools: speedtest, iperf3, mtr, and bench.sh. I'll also show how to automate recurring tests with Bash and cron, how to read MTR output without guessing, and how to decide whether a result is actually good enough for your use case. As a concrete example, I'll use one dataset collected from a BandwagonHost VPS in Los Angeles CN2 GIA. Treat it as a test case, not a universal recommendation: the point is the methodology.

Most providers advertise headline specs like these:

- 1 Gbps or 10 Gbps uplink
- Premium routing
- Optimized international bandwidth

Those numbers are not useless, but they are incomplete. Here's what usually changes real-world performance:

- Geography: physical distance still matters; longer paths increase RTT
- Routing policy: premium transit and better peering can reduce jitter and packet loss
- Time of day: congestion during peak hours can cut effective throughput dramatically
- Protocol sensitivity: latency affects SSH responsiveness, database calls, APIs, and TCP ramp-up
- Application pattern: bulk downloads, streaming, interactive apps, and web backends stress networks differently

A single benchmark is also misleading. If you only test once, from one city, at one time, you're measuring a moment, not the network. That's why I prefer a repeatable pipeline:

- Measure download/upload throughput
- Measure end-to-end latency
- Measure routing quality and loss
- Measure disk I/O
- Repeat tests on a schedule
- Compare trends instead of snapshots

## The Toolchain

None of the tools below are magic by themselves. Each answers a different question.

### 1. speedtest: Quick Public Internet Throughput Checks

If you just want to know how fast a VPS can reach public speed test servers, Ookla's CLI is the fastest way to start. Install it from Ookla's packagecloud repository, then run it against a pinned server, e.g. `speedtest --server-id=5145`.

Useful flags:

- `--list`: shows nearby candidate servers
- `--server-id`: forces a consistent endpoint so your results are comparable over time
- `--format=json`: makes the output script-friendly
- `--accept-license --accept-gdpr`: useful for unattended automation

Why server selection matters: a speed test is only as meaningful as the server you choose. If your selected endpoint is overloaded or far away, you may be benchmarking the test target rather than your VPS. My rule: pick one or two stable servers per region and keep them fixed for historical comparisons.
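To make that automation-friendly, a `--format=json` result can be reduced to one log row per run. Here is a minimal, dependency-free sketch; the one-line JSON record and its field names mirror the sample result used later in this article, while the real Ookla CLI emits its own schema, so adapt the keys before wiring in the live command.

```shell
#!/usr/bin/env bash
# Sketch: turn one speedtest result line into a CSV row for append-only logs.
# The record below is this article's sample; real `speedtest --format=json`
# output uses a different (nested) schema, so adjust the keys accordingly.
set -euo pipefail

# Extract a numeric field from a single-line JSON record (no jq dependency).
json_num() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\([0-9.]*\).*/\1/p"
}

result='{"timestamp": "2026-03-11T19:30:00+08:00", "location": "tokyo", "download_mbps": 534, "upload_mbps": 412, "latency_ms": 105}'

down=$(json_num download_mbps <<<"$result")
up=$(json_num upload_mbps <<<"$result")
lat=$(json_num latency_ms <<<"$result")

echo "tokyo,$down,$up,$lat"   # -> tokyo,534,412,105
```

Appending rows like this to one CSV per region is enough to start plotting trends.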
How to read the output: look beyond the single Mbps number.

- Latency: lower is better, but consistency matters too
- Download: useful for content delivery and package installation
- Upload: important for backups, replication, and media pipelines
- Server location: always note it, otherwise comparisons are meaningless

For example, 900 Mbps to Los Angeles and 300 Mbps to Shanghai may still be excellent if latency and routing are stable across those paths.

### 2. iperf3: Controlled Point-to-Point Testing

speedtest is good for public Internet checks. iperf3 is better when you want controlled measurements between two known hosts. Run a server on the VPS with `iperf3 -s`, then from a client machine run `iperf3 -c YOUR_VPS_IP -t 30`.

Useful advanced options:

- `-t 30`: run for 30 seconds instead of a too-short burst
- `-P 4`: use 4 parallel TCP streams; helpful when one stream can't fully saturate the path
- `-R`: reverse mode, so the server sends and the client receives
- `-u`: switch to UDP testing
- `-b 100M`: set the UDP target bandwidth to 100 Mbps

When to use reverse mode: a lot of VPS paths are asymmetric. Download may be great while upload is poor, or vice versa. `-R` helps catch that.

TCP vs UDP:

- TCP shows what many real applications will experience
- UDP helps expose jitter and packet loss, especially for streaming or real-time traffic

If TCP is fine but UDP loss is high, that's a clue that the path is less healthy than a simple bandwidth number suggests.

### 3. mtr: Routing, Packet Loss, and Path Stability

If I had to keep only one network diagnostic tool for VPS evaluation, it would probably be mtr. The basic form is `mtr --report --report-cycles 100 YOUR_VPS_IP`.

Useful variants:

- `-r`: report mode, good for logging and sharing
- `-w`: wide output, avoids ugly line wrapping
- `-z`: show AS numbers when available
- `-c 100`: send 100 probes
- `--tcp --port 443`: test using TCP probes to port 443, often closer to real web traffic
- `--udp`: useful when ICMP behavior is misleading

ICMP can be deprioritized by routers, so a plain traceroute sometimes makes healthy links look bad. Testing with TCP to a real service port often gives a more realistic picture.

How to read MTR output: this is where many benchmark posts stop too early. An MTR table is not just "more hops = bad". A typical report includes columns like these:

- Loss%: percentage of packets lost at that hop
- Snt: number of probes sent
- Last: latency of the most recent probe
- Avg: average latency
- Best: lowest observed latency
- Wrst: highest observed latency
- StDev: standard deviation, a quick indicator of jitter

What actually matters:

1. Loss on the final hop. If the final destination shows packet loss, that matters: under 1% is usually acceptable, 1–2% is worth watching, and above 2% is often user-visible for interactive workloads.
2. Loss on an intermediate hop only. If hop 6 shows 70% loss but hop 7 onward and the final hop show 0% loss, that usually means the router is rate-limiting ICMP replies. It does not automatically mean real traffic is dropping there.
3. Latency jumps. A sudden jump that persists from one hop onward often reveals where distance or congestion enters the path. For example, hop 4 at 8 ms, hop 5 at 12 ms, hop 6 at 96 ms, and a final hop at 101 ms suggests the long-haul segment begins around hop 6.
4. High StDev. Average latency can look fine while jitter is terrible. If Avg is 40 ms but Wrst is 180 ms and StDev is high, users may feel instability even though the average looks decent.

### 4. bench.sh: Fast Baseline for System and Disk Checks

For a quick overview, bench.sh is still a convenient shortcut (`wget -qO- bench.sh | bash`). It typically reports the CPU model, disk I/O, memory, kernel, and some network tests.

What to use it for:

- quick first-pass validation after provisioning a VPS
- comparing multiple candidate servers quickly
- spotting obviously weak disk or CPU behavior

What not to use it for: don't treat bench.sh as a full benchmark methodology. It's a summary tool, not a substitute for repeated measurements. Its disk section often wraps fio-style tests, which are useful but need interpretation.
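Report-mode mtr output is also easy to post-process when you log it on a schedule. Below is a sketch that flags hops with elevated Loss% in an `mtr --report` table; the sample report is synthetic, built from the hop examples discussed above, and the IPs are documentation/private-range addresses, not real routers.

```shell
#!/usr/bin/env bash
# Sketch: flag hops with elevated Loss% in `mtr --report` output.
# The report below is synthetic, modeled on this article's examples
# (a rate-limited intermediate hop at 70% loss, a clean final hop).
set -euo pipefail

report='HOST: vps                  Loss%   Snt   Last   Avg  Best  Wrst StDev
  4.|-- 10.0.4.1                 0.0%   100    8.1   8.0   7.9   9.0   0.2
  6.|-- 203.0.113.6             70.0%   100   96.2  96.1  95.0  99.8   0.9
  9.|-- 198.51.100.9             0.0%   100  101.3 101.1 100.2 104.9   0.7'

# Print hop, host, and loss for every hop above the threshold. Remember:
# intermediate-only loss is often just ICMP deprioritization; only loss
# that persists to the final hop reflects real traffic.
flag_loss() {
  awk -v max="$1" '/\|--/ { if ($3 + 0 > max) print $1, $2, $3 }'
}

flag_loss 2 <<<"$report"   # -> 6.|-- 203.0.113.6 70.0%
```

In real use you would pipe `mtr -rwzc 100 YOUR_VPS_IP` into `flag_loss` and alert only when the final hop trips the threshold.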
## Real Test Results: Los Angeles CN2 GIA Test Case

Below is one real dataset from a BandwagonHost VPS in the DC6 Los Angeles CN2 GIA location. A few things stand out immediately:

- US West performance is excellent: Los Angeles and San Jose are near line rate
- Asia throughput remains usable: 300–500 Mbps internationally is solid for many workloads
- Latency tracks geography: Tokyo is much lower than Singapore and the northern China routes
- China routes are decent but not magical: the path is workable, but application behavior will still depend on congestion windows, protocol tuning, and time-of-day variance

These numbers do not mean every workload will feel identical. A file download, a web app, and an SSH session stress the path differently.

### Disk I/O

The 4k numbers are more relevant to random small-block workloads such as:

- metadata-heavy applications
- package management
- small file access

The 64k numbers are more relevant to:

- sequential reads/writes
- log processing
- media handling

How to interpret this: a common mistake is to look only at the highest MB/s number. In practice, small-block IOPS affects application responsiveness, while large-block throughput affects bulk transfer efficiency. For a general-purpose VPS, balanced small-block and mid-size sequential performance is usually more useful than flashy peak sequential numbers alone.

## Interpreting Results: What's Good Enough?

A benchmark is only useful if you can turn it into a decision. These are my rough practical thresholds for a typical VPS.

Latency:

- < 20 ms: excellent for same-city or nearby-region usage
- 20–50 ms: very good for interactive apps
- 50–100 ms: still good for most websites, APIs, and SSH
- 100–180 ms: acceptable for cross-region traffic, but noticeable for chatty apps
- > 180 ms: usable, but optimization starts to matter much more

Packet loss:

- < 1%: generally fine
- 1–2%: warning sign, especially for voice/video or gaming-like workloads
- > 2%: likely to impact real users

Throughput:

- > 500 Mbps: strong for most single-server delivery scenarios
- 200–500 Mbps: usually enough for many sites, mirrors, CI runners, and backups
- < 100 Mbps: can still be fine depending on the workload, but this is where bottlenecks become easier to notice

Jitter / variability: if latency swings wildly over the same route, the average is hiding a problem. A stable 90 ms often feels better than an unstable 55–180 ms. In short: consistency beats peak numbers.

## Automating Tests with Bash

Manual tests are useful during evaluation, but the real value comes from collecting repeatable data. I keep a simple automation script in this repo: https://github.com/devguoo/bwg-speed-test

At minimum, your script should:

- run tests with fixed server IDs or fixed endpoints
- save timestamps
- append results instead of overwriting them
- emit structured output such as CSV or JSON
- separate network tests from disk tests

Once you have structured history, you can answer more interesting questions:

- Does performance drop every evening?
- Is one route consistently worse on weekends?
- Did routing change after the provider migrated something upstream?

## Automating Tests with Cron

If you want long-term data, run the benchmark on a schedule (`crontab -e`): every 6 hours with output appended to a log, or a lighter daily run at off-peak hours. If you're collecting data from multiple regions, it's also smart to stagger the jobs by a few minutes rather than hammering all endpoints at once.

Cron tips that save pain later:

- Use absolute paths for scripts and binaries
- Redirect both stdout and stderr with `>> file 2>&1`
- Keep log rotation in mind if the script runs often
- Pin test endpoints so comparisons remain valid
- Avoid running too frequently; every 4–6 hours is usually enough

## A Few Practical Benchmarking Rules

After doing this a few times, these rules have held up well for me:

1. Never trust a single run. One benchmark can be an outlier; run at different times.
2. Test the route that matches your users. A VPS can be amazing for US traffic and mediocre for East Asia. Both can be true.
3. Use more than one tool: speedtest for public throughput, iperf3 for controlled transfer tests, mtr for routing and packet health, bench.sh for quick baseline checks.
4. Record context: the time tested, the target server or region, the protocol used, and whether the result was upload or download. Without context, benchmark history becomes useless quickly.

More broadly, if you're evaluating a VPS, don't ask only, "How fast is this server?" Ask instead:

- How fast from where?
- How stable at what time?
- Under which protocol?
- For which workload?

That framing turns benchmarking from a marketing screenshot into an engineering tool.
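As a concrete starting point, the minimal loop from the automation section can be sketched like this. `run_network_test` is a placeholder to replace with a real speedtest or iperf3 invocation; the numbers and the `tokyo` label mirror this article's sample record, and the log path is just a default.

```shell
#!/usr/bin/env bash
# Sketch of the minimal automation loop: fixed endpoint label, timestamp,
# append-only structured output. run_network_test is a stand-in; wire in
# your real speedtest/iperf3 call and keep the endpoint pinned.
set -euo pipefail

LOG="${LOG:-vps-speedtest.jsonl}"   # append-only results file
LOCATION="tokyo"                    # label for the pinned test target

# Placeholder measurement: emits "download upload latency" in Mbps/Mbps/ms.
run_network_test() { echo '534 412 105'; }

timestamp=$(date -Is)
read -r down up lat < <(run_network_test)

printf '{"timestamp": "%s", "location": "%s", "download_mbps": %s, "upload_mbps": %s, "latency_ms": %s}\n' \
  "$timestamp" "$LOCATION" "$down" "$up" "$lat" >> "$LOG"
```

One JSON line per run, per region, is enough structure to answer the trend questions above with nothing fancier than grep and awk.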
## Final Thoughts

The Los Angeles CN2 GIA dataset above is a good example: the interesting part is not just that local throughput is high, but that cross-region behavior remains reasonably strong and interpretable when combined with latency and route analysis.

If you want to build your own repeatable workflow, start simple: one Bash script, fixed test targets, structured logs, and scheduled runs. That alone will tell you more than most hosting landing pages ever will.

What tools do you use to benchmark your servers, and which metric has turned out to matter most in real production use?

## Appendix: Command Reference and Raw Output

Install and run the Ookla speedtest CLI (Debian/Ubuntu):

```bash
curl -s https://packagecloud.io/install/repositories/ookla/speedtest-cli/script.deb.sh | sudo bash
sudo apt install speedtest
speedtest --server-id=5145
```

Server listing and scripting flags:

```bash
speedtest --list | head
speedtest --server-id=5145 --format=json
speedtest --accept-license --accept-gdpr
```

iperf3 (server on the VPS, then client-side variants):

```bash
sudo apt install iperf3
iperf3 -s                                # on the VPS
iperf3 -c YOUR_VPS_IP -t 30              # basic 30-second TCP test
iperf3 -c YOUR_VPS_IP -t 30 -P 4         # four parallel streams
iperf3 -c YOUR_VPS_IP -t 30 -R           # reverse direction
iperf3 -c YOUR_VPS_IP -t 30 -P 4 -R      # parallel + reverse
iperf3 -c YOUR_VPS_IP -u -b 100M -t 30   # UDP at a 100 Mbps target
```

mtr:

```bash
sudo apt install mtr
mtr --report --report-cycles 100 YOUR_VPS_IP
mtr -rwzc 100 YOUR_VPS_IP
mtr --tcp --port 443 --report --report-cycles 100 YOUR_VPS_IP
mtr --udp --report --report-cycles 100 YOUR_VPS_IP
```

bench.sh:

```bash
wget -qO- bench.sh | bash
```

The fio disk section from the Los Angeles CN2 GIA run:

```
fio Disk Speed Tests (Mixed R/W 50/50):
Block Size | 4k (IOPS)         | 64k (IOPS)
---------- | ----------------- | ------------------
Read       | 45.2 MB/s (11.3k) | 298.5 MB/s (4.6k)
Write      | 45.3 MB/s (11.3k) | 300.1 MB/s (4.6k)
```

Fetch and run the automation script:

```bash
git clone https://github.com/devguoo/bwg-speed-test.git
cd bwg-speed-test
chmod +x speedtest.sh
./speedtest.sh
```

A structured JSON result line:

```json
{"timestamp": "2026-03-11T19:30:00+08:00", "location": "tokyo", "download_mbps": 534, "upload_mbps": 412, "latency_ms": 105}
```

Cron schedules (edit with `crontab -e`):

```bash
# every 6 hours
0 */6 * * * /usr/bin/bash /opt/bwg-speed-test/speedtest.sh >> /var/log/vps-speedtest.log 2>&1
# or a lighter daily run at 03:15, off-peak
15 3 * * * /usr/bin/bash /opt/bwg-speed-test/speedtest.sh >> /var/log/vps-speedtest.log 2>&1
```
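One way to make the threshold tables actionable in scripts is to encode them as a classifier and label each logged run. Here is a sketch using the article's latency cut-offs; the category names are my own shorthand.

```shell
#!/usr/bin/env bash
# Sketch: label a latency measurement using this article's rough thresholds
# (<20 excellent, 20-50 very good, 50-100 good, 100-180 acceptable,
#  >180 "optimize": usable, but optimization starts to matter).
set -euo pipefail

rate_latency() {  # argument: integer round-trip time in ms
  local ms=$1
  if   (( ms < 20 ));   then echo "excellent"
  elif (( ms < 50 ));   then echo "very good"
  elif (( ms < 100 ));  then echo "good"
  elif (( ms <= 180 )); then echo "acceptable"
  else                       echo "optimize"
  fi
}

rate_latency 105   # cross-region example: prints "acceptable"
```

The same pattern applies to the packet-loss and throughput tables, which makes alerting a one-line grep over labeled logs.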