Tools: Ultimate Guide: Unix commands that feel like cheating — and why most devs have never heard of them

Tools: Ultimate Guide: Unix commands that feel like cheating — and why most devs have never heard of them

Command 01 — pv: pipe viewer, or "why is this taking forever?"

Command 02 — moreutils: vipe, sponge, chronic — the toolkit nobody installs

vipe — edit mid-pipe

sponge — safe in-place editing

chronic — silent success, loud failure

Command 03 — fd: find, but written for humans

Command 04 — hyperfine: benchmarking that's actually rigorous

Command 05 — atool: one command for every archive format

Command 06 — entr: run any command when files change You know grep, sed, awk. You've read the same "10 useful terminal commands" articles written since 2009. This isn't that. These are the commands that change how you think about the terminal — the ones that make a task you'd normally open Python for take 8 seconds instead. "I was copying a 40GB database dump over SSH. Zero feedback. Just a blinking cursor for 25 minutes. I had no idea if it was stuck, slow, or done. I killed it twice by accident and had to start over."

— every backend engineer, at least once pv inserts into any pipe and gives you a live progress bar, transfer rate, ETA, and bytes transferred. It's invisible to the data — it just watches and reports. Tip: Use pv -petra for all metrics at once: progress, ETA, timer, rate, and bytes. Paste that alias into your shell config. Result: never fly blind on a long pipe again. "I needed to edit the middle of a pipeline — transform some JSON, hand-fix two records, then pass it on. I ended up saving to a temp file, opening it, editing, piping again. Four steps for something that should've been one."— a data engineer mid-ETL moreutils is a collection of Unix tools that should have existed from the start. Three standouts: Opens your $EDITOR in the middle of a pipe. Edit stdin, save, and the result continues down the pipe. Reads all stdin before writing to the output file. Lets you safely read and write the same file in one command — something > will silently destroy. Runs a command silently on success, but shows full output on failure. Built for cron jobs — no more noisy emails for commands that succeed 99% of the time. Result: three commands that patch real Unix gaps. "I've been using find for 8 years and I still google the syntax every single time. The flags are backwards, case sensitivity is opt-in, and ignoring node_modules requires a paragraph of shell."— a senior engineer who finally switched fd is a modern replacement for find. It's faster (parallel by default), respects .gitignore, is case-insensitive by default, and has sane syntax. Tip: fd uses regex by default. Pass -g for glob patterns if that's more natural for the task. Warning: On some systems fd is installed as fdfind to avoid a naming conflict. Alias it: alias fd=fdfind Result: never google find syntax again. "I was arguing with a teammate about which implementation was faster. We were doing time ./script_a and time ./script_b back and forth. The results bounced around by 30% depending on system load. We had no idea which was actually faster."— an engineer mid code-review argument hyperfine runs commands multiple times, warms up the cache, computes mean and standard deviation, and gives you a statistically meaningful comparison. It's what time should have been. Tip: Use --export-json to pipe results into your own analysis. Pairs well with jq for custom reporting. Result: win every "which is faster" argument with data. "Someone sent me a .tar.bz2. I couldn't remember if it was -xjf or -xzf. I guessed wrong, it failed silently, and I spent 10 minutes wondering why the directory was empty."— a dev on their third Stack Overflow tab atool wraps tar, zip, rar, 7z, bz2, xz — every format — behind four consistent commands. You never look up flags again. The only four commands you need: Result: one muscle memory. Every archive format. Forever. "I was writing a C utility with no build tool, no webpack, no hot reload. Every edit meant switching windows, pressing up-arrow, enter. After the 40th time I started looking for a way to automate it. Turns out it already existed."

— a systems programmer who found entr at 2am entr watches a list of files and re-runs any command when they change. No config file, no daemon, no 500-line Makefile. Just a pipe. Tip: -r kills and restarts a long-running process. -c clears the screen. -d exits when a new file is added to the watched set — useful in scripts. Result: hot reload for literally anything. None of these are in the standard curriculum. None of them show up in "learn Linux" courses. They exist in blog posts, dotfiles repos, and the .bashrc of engineers who've been quietly shipping faster than everyone else for years. Now you have them too. Install one today. Add it to your dotfiles. Then forget you ever lived without it. Found one you didn't know? Drop a comment with your own hidden gem — I'll add the best ones to a follow-up post. Templates let you quickly answer FAQs or store snippets for re-use. Hide child comments as well For further actions, you may consider blocking this person and/or reporting abuse

Command

Copy

$ -weight: 500;">brew -weight: 500;">install pv # macOS -weight: 500;">apt -weight: 500;">install pv # Debian/Ubuntu -weight: 500;">brew -weight: 500;">install pv # macOS -weight: 500;">apt -weight: 500;">install pv # Debian/Ubuntu -weight: 500;">brew -weight: 500;">install pv # macOS -weight: 500;">apt -weight: 500;">install pv # Debian/Ubuntu # Compress a large file with progress pv hugefile.sql | gzip > hugefile.sql.gz # Copy with progress bar pv source.tar.gz > /backup/source.tar.gz # Pipe through multiple commands pv dump.sql | gzip | ssh user@remote "cat > dump.sql.gz" # Throttle transfer rate to 1MB/s pv -L 1m source.iso > /dev/null # Compress a large file with progress pv hugefile.sql | gzip > hugefile.sql.gz # Copy with progress bar pv source.tar.gz > /backup/source.tar.gz # Pipe through multiple commands pv dump.sql | gzip | ssh user@remote "cat > dump.sql.gz" # Throttle transfer rate to 1MB/s pv -L 1m source.iso > /dev/null # Compress a large file with progress pv hugefile.sql | gzip > hugefile.sql.gz # Copy with progress bar pv source.tar.gz > /backup/source.tar.gz # Pipe through multiple commands pv dump.sql | gzip | ssh user@remote "cat > dump.sql.gz" # Throttle transfer rate to 1MB/s pv -L 1m source.iso > /dev/null -weight: 500;">brew -weight: 500;">install moreutils -weight: 500;">apt -weight: 500;">install moreutils -weight: 500;">brew -weight: 500;">install moreutils -weight: 500;">apt -weight: 500;">install moreutils -weight: 500;">brew -weight: 500;">install moreutils -weight: 500;">apt -weight: 500;">install moreutils # Generate JSON, hand-edit it, then process it -weight: 500;">curl -s api.example.com/data | vipe | jq '.results[]' # Filter a log, manually tweak some lines, send to file cat app.log | grep ERROR | vipe > errors-reviewed.log # Generate JSON, hand-edit it, then process it -weight: 500;">curl -s api.example.com/data | vipe | jq '.results[]' # Filter a log, manually tweak some lines, send to file cat app.log | grep ERROR | vipe > errors-reviewed.log # Generate JSON, hand-edit it, then process it -weight: 500;">curl -s api.example.com/data | vipe | jq '.results[]' # Filter a log, manually tweak some lines, send to file cat app.log | grep ERROR | vipe > errors-reviewed.log # DANGEROUS — truncates file before sort finishes reading it sort file.txt > file.txt # SAFE — sponge buffers everything first sort file.txt | sponge file.txt # Deduplicate a file in place sort -u config.txt | sponge config.txt # DANGEROUS — truncates file before sort finishes reading it sort file.txt > file.txt # SAFE — sponge buffers everything first sort file.txt | sponge file.txt # Deduplicate a file in place sort -u config.txt | sponge config.txt # DANGEROUS — truncates file before sort finishes reading it sort file.txt > file.txt # SAFE — sponge buffers everything first sort file.txt | sponge file.txt # Deduplicate a file in place sort -u config.txt | sponge config.txt # In crontab — only emails you if backup fails 0 2 * * * chronic /usr/local/bin/run-backup.sh # In crontab — only emails you if backup fails 0 2 * * * chronic /usr/local/bin/run-backup.sh # In crontab — only emails you if backup fails 0 2 * * * chronic /usr/local/bin/run-backup.sh # find — search for JS files, ignore node_modules find . -name "*.js" -not -path "*/node_modules/*" # fd — same thing fd -e js # fd — find files modified in the last 2 days fd --changed-within 2d # fd — find and execute a command on each result fd -e log -x rm {} # fd — search hidden files too fd -H .env # find — search for JS files, ignore node_modules find . -name "*.js" -not -path "*/node_modules/*" # fd — same thing fd -e js # fd — find files modified in the last 2 days fd --changed-within 2d # fd — find and execute a command on each result fd -e log -x rm {} # fd — search hidden files too fd -H .env # find — search for JS files, ignore node_modules find . -name "*.js" -not -path "*/node_modules/*" # fd — same thing fd -e js # fd — find files modified in the last 2 days fd --changed-within 2d # fd — find and execute a command on each result fd -e log -x rm {} # fd — search hidden files too fd -H .env -weight: 500;">brew -weight: 500;">install hyperfine cargo -weight: 500;">install hyperfine -weight: 500;">brew -weight: 500;">install hyperfine cargo -weight: 500;">install hyperfine -weight: 500;">brew -weight: 500;">install hyperfine cargo -weight: 500;">install hyperfine # Benchmark a single command (10 runs by default) hyperfine 'grep -r "TODO" src/' # Compare two implementations hyperfine 'python parse_v1.py data.json' 'python parse_v2.py data.json' # Warm up first, then benchmark hyperfine --warmup 3 './build/server --dry-run' # Export results to markdown table hyperfine --export-markdown results.md 'cmd_a' 'cmd_b' # Run with different input parameters hyperfine 'sort -n {input}' --parameter-list input small.txt medium.txt large.txt # Benchmark a single command (10 runs by default) hyperfine 'grep -r "TODO" src/' # Compare two implementations hyperfine 'python parse_v1.py data.json' 'python parse_v2.py data.json' # Warm up first, then benchmark hyperfine --warmup 3 './build/server --dry-run' # Export results to markdown table hyperfine --export-markdown results.md 'cmd_a' 'cmd_b' # Run with different input parameters hyperfine 'sort -n {input}' --parameter-list input small.txt medium.txt large.txt # Benchmark a single command (10 runs by default) hyperfine 'grep -r "TODO" src/' # Compare two implementations hyperfine 'python parse_v1.py data.json' 'python parse_v2.py data.json' # Warm up first, then benchmark hyperfine --warmup 3 './build/server --dry-run' # Export results to markdown table hyperfine --export-markdown results.md 'cmd_a' 'cmd_b' # Run with different input parameters hyperfine 'sort -n {input}' --parameter-list input small.txt medium.txt large.txt -weight: 500;">brew -weight: 500;">install atool -weight: 500;">apt -weight: 500;">install atool -weight: 500;">brew -weight: 500;">install atool -weight: 500;">apt -weight: 500;">install atool -weight: 500;">brew -weight: 500;">install atool -weight: 500;">apt -weight: 500;">install atool # Extract anything — format auto-detected aunpack archive.tar.bz2 aunpack archive.zip aunpack archive.7z # List contents without extracting als archive.tar.gz # Create an archive (format from extension) apack output.tar.gz file1 file2 dir/ # Repack from one format to another arepack old.tar.gz new.tar.xz # Extract anything — format auto-detected aunpack archive.tar.bz2 aunpack archive.zip aunpack archive.7z # List contents without extracting als archive.tar.gz # Create an archive (format from extension) apack output.tar.gz file1 file2 dir/ # Repack from one format to another arepack old.tar.gz new.tar.xz # Extract anything — format auto-detected aunpack archive.tar.bz2 aunpack archive.zip aunpack archive.7z # List contents without extracting als archive.tar.gz # Create an archive (format from extension) apack output.tar.gz file1 file2 dir/ # Repack from one format to another arepack old.tar.gz new.tar.xz -weight: 500;">brew -weight: 500;">install entr -weight: 500;">apt -weight: 500;">install entr -weight: 500;">brew -weight: 500;">install entr -weight: 500;">apt -weight: 500;">install entr -weight: 500;">brew -weight: 500;">install entr -weight: 500;">apt -weight: 500;">install entr # Re-run tests when any Go file changes find . -name '*.go' | entr go test ./... # Recompile on any C file change ls *.c | entr make # Restart server on source change ls src/**/*.js | entr -r node server.js # Clear screen before each run ls *.py | entr -c python main.py # Watch a directory recursively (combine with fd) fd -e ts | entr -rc -weight: 500;">npm run build # Re-run tests when any Go file changes find . -name '*.go' | entr go test ./... # Recompile on any C file change ls *.c | entr make # Restart server on source change ls src/**/*.js | entr -r node server.js # Clear screen before each run ls *.py | entr -c python main.py # Watch a directory recursively (combine with fd) fd -e ts | entr -rc -weight: 500;">npm run build # Re-run tests when any Go file changes find . -name '*.go' | entr go test ./... # Recompile on any C file change ls *.c | entr make # Restart server on source change ls src/**/*.js | entr -r node server.js # Clear screen before each run ls *.py | entr -c python main.py # Watch a directory recursively (combine with fd) fd -e ts | entr -rc -weight: 500;">npm run build