# Cron Jobs Are Older Than the Internet — And They Still Run Half Your Stack

Contents:

- What Even Is a Cron Job?
- Anatomy of a Cron Expression
- Common Patterns You'll Use Weekly
- Real-World Use Cases (With Stories)
  1. Database Backups
  2. Clearing Temporary Files / Log Rotation
  3. Sending Scheduled Emails / Digests
  4. Cache Warming
  5. Syncing Data from External APIs
  6. Certificate Renewal (Let's Encrypt)
  7. Database Maintenance
- The Environment Trap (The #1 Gotcha)
- Handling Output and Errors
- Preventing Overlapping Jobs (The Race Condition Problem)
- Timezone Nightmares
- Monitoring Cron Jobs in Production
  - Option 1: Healthchecks.io / Better Uptime
  - Option 2: Dead Man's Snitch
  - Option 3: Custom Logging + Alerting
- Modern Alternatives to Raw Cron
  - systemd Timer Example
  - Kubernetes CronJob Example
- Security Considerations
- Debugging a Broken Cron Job: A Checklist
- The Future: Cron in 2025–2026
- Quick Reference Card
- Wrapping Up

I once spent three hours debugging a production issue that turned out to be a cron job firing at 2 AM and locking a database table. The on-call engineer before me had spent four hours on the same issue six months earlier. Neither of us left a comment. Classic.

If you've ever been bitten by a mysterious scheduled task, a job that silently failed for weeks, or a timezone bug that only appeared during daylight saving time — this article is your therapy session and your cheat sheet. Let's go deep on cron jobs.

## What Even Is a Cron Job?

Cron is a time-based job scheduler built into Unix-like operating systems. It has been around since Version 7 Unix in 1979 — predating the World Wide Web by over a decade. The name comes from Chronos, the Greek personification of time.

A cron job is simply a command or script that runs automatically on a schedule you define. Need to purge old logs every night? Send a digest email every Monday morning? Rotate API keys monthly? Cron.

The daemon that runs in the background and checks for jobs to execute is called crond. On most Linux systems it starts at boot and quietly does its thing forever.

Every cron job lives in a crontab (cron table) — a configuration file listing your schedules. You edit it with `crontab -e`; the full set of crontab commands is listed below.

## Anatomy of a Cron Expression

A cron expression has five fields, plus the command: minute (0–59), hour (0–23), day of month (1–31), month (1–12), and day of week (0–6, with Sunday as 0 or 7). The field diagram below shows how they line up. Pro tip: always test your expressions at crontab.guru before deploying. It's saved me more times than I can count.

## Common Patterns You'll Use Weekly

Patterns such as `0 3 * * *` (every day at 3 AM), `0 * * * *` (top of every hour), and `0 8 * * 1` (Mondays at 8 AM) come up constantly; you'll see all of them in the use cases below.

## Real-World Use Cases (With Stories)

### 1. Database Backups

The most classic use case. Every production database should have a scheduled dump somewhere; the crontab entry is in the listings below. A friend worked at a startup that had cron backups set up correctly — but the backups were being written to the same disk as the database. When the disk filled up, both the DB and the backups were gone. Always write backups off-machine.

## The Environment Trap (The #1 Gotcha)

Here's something that trips up almost every developer who's new to cron: cron jobs run with a minimal environment. Your ~/.bashrc, ~/.zshrc, PATH, and other env vars are not loaded.
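You can see that minimal environment for yourself. A small sketch that starts a child process with an empty environment, roughly what your script experiences under cron, and prints what survives:

```python
import subprocess

# Launch a child with an empty environment, similar in spirit to the
# stripped-down environment cron gives your jobs.
result = subprocess.run(
    ["/bin/sh", "-c", "echo PATH=$PATH; echo HOME=$HOME"],
    env={},  # nothing inherited from your interactive shell
    capture_output=True,
    text=True,
)
print(result.stdout)  # HOME comes back empty; PATH is whatever the shell defaults to
```

Anything your script needs, from PATH entries to database credentials, has to be provided explicitly.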
A script that works perfectly in your terminal might silently fail in cron because `python3` isn't on cron's PATH, environment variables like DATABASE_URL aren't set, and working-directory assumptions no longer hold.

The fix: always use absolute paths, and explicitly set your environment, either at the top of the crontab or by sourcing an env file from the script itself. Both variants appear in the listings below.

## Handling Output and Errors

By default, cron emails output to the local system user — which almost nobody checks. Silence is not success; it's just silence. Redirect output properly: append stdout and stderr to a log file, set MAILTO to a real address, and only discard output when you have other monitoring in place. The exact redirections are in the listings below.

## Preventing Overlapping Jobs (The Race Condition Problem)

What happens if your job takes longer than its schedule interval? You get overlapping runs. A 5-minute job running every minute is chaos. Use flock to take a mutex lock so extra instances exit (or wait briefly) instead of piling up; in Python scripts, you can manage this with file locks via fcntl.

## Timezone Nightmares

Cron runs in the system timezone by default. If your server is in UTC and your business logic assumes Eastern Time, you will have bugs around daylight saving transitions. You can set the timezone per job with CRON_TZ, or system-wide in the crontab header. Best practice: run your server in UTC and handle timezone display in your application layer. This is the way.

## Monitoring Cron Jobs in Production

Silent failure is a cron job's superpower — and its biggest danger. Here's how to keep it in check.

### Option 1: Healthchecks.io / Better Uptime

These services give each job a unique ping URL. If the job doesn't ping in its expected window, you get alerted.

### Option 2: Dead Man's Snitch

Same concept — a "snitch" URL you curl after a successful run. If no ping arrives in the expected window, you get an email or Slack alert.

### Option 3: Custom Logging + Alerting

Wrap the job in a script that records its exit code and duration, and posts to a Slack webhook on failure; a full example is in the listings below.

## Modern Alternatives to Raw Cron

Raw cron is great, but production systems often need something more robust: retries, dependencies, observability. The two replacements you'll meet most often are systemd timers and Kubernetes CronJobs; examples of both are below.

## Security Considerations

Cron jobs are a common attack vector and a source of privilege escalation bugs; the hardening checklist is at the end of this article.

## Debugging a Broken Cron Job: A Checklist

When your cron job isn't running (and you've checked that it should be), work through the checklist at the end of this article.

## The Future: Cron in 2025–2026

As of 2026, raw cron is still everywhere — but the ecosystem has matured significantly. Most cloud-native teams now reach for serverless schedulers, workflow orchestrators, or GitOps-managed cron (details at the end of this article). That said, for a self-hosted Linux server or a VPS running a side project? A well-written crontab with proper logging and a healthcheck ping is still a perfectly valid and battle-tested solution.

## Wrapping Up

Cron jobs are deceptively simple — two lines in a file and something just runs.
But production cron is a discipline: proper paths, explicit environments, locked concurrency, monitored outcomes, logged results, and thoughtful security. The engineers with great cron hygiene are the ones who got burned once, documented it, and never let it happen again. Be that engineer from day one.

Next time you write a cron job, ask yourself: What happens if this runs twice at once? What happens if it fails silently for a week? What happens during the DST clock change? If you have good answers to all three, ship it.

Found this useful? Drop a ❤️ and share it with someone who's currently debugging a cron job at 2 AM. They need this more than they need coffee.

## Quick Reference Card

**Crontab basics**

```bash
crontab -e   # Edit your crontab
crontab -l   # List current crontab
crontab -r   # Remove your crontab (careful!)
```

**The five fields of a cron expression**

```
┌───────────── minute (0–59)
│ ┌─────────── hour (0–23)
│ │ ┌───────── day of month (1–31)
│ │ │ ┌─────── month (1–12 or JAN–DEC)
│ │ │ │ ┌───── day of week (0–6, Sun=0 or 7, or SUN–SAT)
│ │ │ │ │
* * * * *  command to execute
```

**1. Database Backups**

```bash
# Every day at 3 AM, dump the DB and gzip it
0 3 * * * pg_dump myapp_production | gzip > /backups/db_$(date +\%Y\%m\%d).sql.gz
```

**2. Clearing Temporary Files / Log Rotation**

```bash
# Delete temp files older than 7 days every Sunday at midnight
0 0 * * 0 find /tmp/uploads -mtime +7 -delete

# Truncate application logs older than 30 days
0 2 * * * find /var/log/myapp -name "*.log" -mtime +30 -delete
```
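The five-field format above can also be checked in code. Here is a toy matcher, not a real cron implementation: it supports only `*`, plain numbers, `*/step`, `N-M` ranges, and comma lists (names like JAN or SUN and the Sunday-as-7 alias are not handled), and tells you whether a given time satisfies an expression:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Check one cron field (supports *, */step, N, N-M, and comma lists)."""
    for part in field.split(","):
        if part == "*":
            return True
        if part.startswith("*/"):
            if value % int(part[2:]) == 0:
                return True
        elif "-" in part:
            lo, hi = map(int, part.split("-"))
            if lo <= value <= hi:
                return True
        elif int(part) == value:
            return True
    return False

def matches(expr: str, when: datetime) -> bool:
    """True if `when` satisfies a five-field cron expression."""
    minute, hour, dom, month, dow = expr.split()
    cron_dow = (when.weekday() + 1) % 7  # convert Python's Mon=0 to cron's Sun=0
    return (field_matches(minute, when.minute)
            and field_matches(hour, when.hour)
            and field_matches(dom, when.day)
            and field_matches(month, when.month)
            and field_matches(dow, cron_dow))
```

Useful for unit-testing that a schedule string really means what you think it means before it lands in a crontab.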
**3. Sending Scheduled Emails / Digests**

```bash
# Weekly digest every Monday at 8 AM
0 8 * * 1 /usr/bin/python3 /opt/myapp/scripts/send_weekly_digest.py
```

**4. Cache Warming**

```bash
# Pre-warm the cache before peak traffic hours
45 7 * * 1-5 curl -s https://mysite.com/warm-cache > /dev/null
```

**5. Syncing Data from External APIs**

```bash
# Pull exchange rates every hour
0 * * * * /opt/scripts/sync_exchange_rates.sh >> /var/log/exchange_sync.log 2>&1
```

**6. Certificate Renewal (Let's Encrypt)**

```bash
# Certbot auto-renewal check twice a day (recommended)
0 */12 * * * certbot renew --quiet
```

**7. Database Maintenance**

```bash
# Run VACUUM ANALYZE on PostgreSQL every weekend
0 1 * * 6 psql -U postgres -c "VACUUM ANALYZE;" myapp_production
```
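If you'd rather run the temp-file retention sweep in Python than in a find one-liner (it is easier to unit-test), a minimal sketch; the directory and 7-day cutoff are placeholders:

```python
import os
import time

def delete_older_than(directory: str, days: int) -> list[str]:
    """Delete regular files whose mtime is older than `days`; return what was removed."""
    cutoff = time.time() - days * 86400
    removed = []
    for entry in os.scandir(directory):
        if entry.is_file(follow_symlinks=False) and entry.stat().st_mtime < cutoff:
            os.remove(entry.path)
            removed.append(entry.name)
    return removed

# Example: call delete_older_than("/tmp/uploads", 7) from a daily cron job.
```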
**The Environment Trap**

```bash
# BAD — will likely fail
* * * * * python3 myscript.py

# GOOD — explicit path, full environment
* * * * * /usr/bin/python3 /home/deploy/myapp/myscript.py

# BETTER — set env vars at the top of your crontab
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DATABASE_URL=postgres://user:pass@localhost/mydb
0 3 * * * /usr/bin/python3 /home/deploy/myapp/backup.py
```

Or source your environment in the script itself:

```bash
#!/bin/bash
source /home/deploy/.env
# rest of script...
```

**Handling Output and Errors**

```bash
# Redirect both stdout and stderr to a log file
0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1

# Discard all output (only do this if you have other monitoring!)
0 3 * * * /opt/scripts/noisy-but-reliable.sh > /dev/null 2>&1

# Separate logs for stdout and stderr
0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>>/var/log/backup_errors.log
```
**Send cron mail to a real address**

```bash
MAILTO="ops-team@yourcompany.com"
```

**Preventing Overlap with flock**

```bash
# Only one instance runs at a time; others exit immediately
* * * * * flock -n /tmp/myjob.lock /opt/scripts/myjob.sh

# Wait up to 30 seconds for the lock, then give up
* * * * * flock -w 30 /tmp/myjob.lock /opt/scripts/myjob.sh
```

**File Locking in Python**

```python
import fcntl, sys

lock_file = open('/tmp/myjob.lock', 'w')
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("Another instance is running. Exiting.")
    sys.exit(0)

# ... your job logic here
```
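You can watch this lock do its job without opening two terminals: on Linux, taking the same flock through a second file descriptor is refused, which is exactly what happens to an overlapping cron run. A small demonstration, with a temporary file standing in for the lock path:

```python
import fcntl
import tempfile

lock_path = tempfile.NamedTemporaryFile(delete=False).name  # stand-in for /tmp/myjob.lock

first = open(lock_path, "w")
fcntl.flock(first, fcntl.LOCK_EX | fcntl.LOCK_NB)  # "job one" takes the lock

second = open(lock_path, "w")
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)  # "job two" tries to start
    overlapped = True
except BlockingIOError:
    overlapped = False  # on Linux the second attempt is refused; the run would bail out

fcntl.flock(first, fcntl.LOCK_UN)
```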
**Per-Job Timezone**

```bash
# Vixie-cron derivatives such as cronie support CRON_TZ
CRON_TZ="America/New_York"
0 9 * * 1-5 /opt/scripts/market_open_alert.sh
```

**Healthcheck Ping After a Successful Run**

```bash
0 3 * * * /opt/scripts/backup.sh && curl -fsS --retry 3 https://hc-ping.com/YOUR-UUID > /dev/null
```

**Custom Logging + Alerting Wrapper**

```bash
#!/bin/bash
START=$(date +%s)
/opt/scripts/backup.sh
EXIT_CODE=$?
END=$(date +%s)
DURATION=$((END - START))

if [ $EXIT_CODE -ne 0 ]; then
  curl -X POST https://hooks.slack.com/services/YOUR/WEBHOOK \
    -d "{\"text\": \"🚨 Backup job failed! Exit: $EXIT_CODE, Duration: ${DURATION}s\"}"
fi
```
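The CRON_TZ entry above hides the DST subtlety: 9 AM in New York is a different UTC time depending on the season. Python's standard zoneinfo module makes that visible (requires system tzdata):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# The same local wall-clock time, in winter (EST) and summer (EDT)
winter = datetime(2025, 1, 15, 9, 0, tzinfo=ny).astimezone(timezone.utc)
summer = datetime(2025, 7, 15, 9, 0, tzinfo=ny).astimezone(timezone.utc)

print(winter.hour, summer.hour)  # 9 AM EST is 14:00 UTC; 9 AM EDT is 13:00 UTC
```

A UTC-scheduled job that must fire at a fixed local time will drift by an hour twice a year; either schedule in the local zone with CRON_TZ or handle the conversion in the application.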
**systemd Timer**

```ini
# /etc/systemd/system/backup.timer
[Unit]
Description=Run backup daily at 3AM

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

```ini
# /etc/systemd/system/backup.service
[Unit]
Description=Database Backup

[Service]
Type=oneshot
ExecStart=/opt/scripts/backup.sh
```

```bash
systemctl enable --now backup.timer
systemctl list-timers   # See all timers and next fire time
```

**Kubernetes CronJob**

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Forbid   # Prevents overlapping runs
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: myapp/backup:latest
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: db-secret
                      key: url
```
**Debugging Commands**

```bash
# Edit crontab for current user
crontab -e

# Edit crontab for another user (as root)
crontab -u www-data -e

# List all cron jobs for all users (as root)
for user in $(cut -f1 -d: /etc/passwd); do
  crontab -u $user -l 2>/dev/null | grep -v '^#' | sed "s/^/$user: /"
done

# Check system-wide cron jobs
ls /etc/cron.d/ /etc/cron.daily/ /etc/cron.weekly/ /etc/cron.monthly/

# Check cron logs (Ubuntu/Debian)
grep CRON /var/log/syslog | tail -50

# Check cron logs (systemd)
journalctl -u cron --since "1 hour ago"
```
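The "check the cron logs" step can be wrapped in a tiny helper if you find yourself grepping repeatedly. A sketch that pulls lines mentioning CRON and a job name out of a syslog-style file; the path and job name in the example are hypothetical:

```python
def cron_lines(log_path: str, needle: str) -> list[str]:
    """Return syslog lines that mention CRON and the given job/script name."""
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "CRON" in line and needle in line:
                hits.append(line.rstrip("\n"))
    return hits

# Example: cron_lines("/var/log/syslog", "backup.sh")
```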
**Security Considerations**

- Never run cron as root unless absolutely necessary. Use dedicated service accounts.
- Audit world-writable directories — if a script lives in /tmp, an attacker could swap it.
- Validate inputs in scripts that process external data. A cron job silently downloading and executing something is a security nightmare.
- Review /etc/cron.d/, /etc/cron.daily/, /etc/cron.weekly/ — these are often forgotten and accumulate stale, potentially vulnerable scripts.
- Use environment-variable secrets carefully. Avoid hardcoding credentials; use a secrets manager or a .env file with restricted permissions (chmod 600).

**Debugging Checklist**

- [ ] Is crond running? (`systemctl status cron`)
- [ ] Does the crontab have a trailing newline? (Some implementations require it.)
- [ ] Are you using absolute paths?
- [ ] Is the script executable? (`chmod +x /path/to/script.sh`)
- [ ] Does the script work when run manually as the cron user? (`sudo -u cronuser /path/to/script.sh`)
- [ ] Are environment variables available?
- [ ] Is output being silently discarded? Add `>> /tmp/cron_debug.log 2>&1` temporarily.
- [ ] Check /var/log/syslog or `journalctl -u cron` for cron daemon logs.
- [ ] Is there a lock file preventing the job from starting?
- [ ] Timezone mismatch — is the job running at unexpected times?

**What Cloud-Native Teams Use Instead**

- Serverless schedulers (AWS EventBridge, Google Cloud Scheduler, Azure Logic Apps) that eliminate the need for a persistent server entirely.
- Workflow orchestrators like Temporal and Prefect that handle retries, observability, and complex dependencies.
- GitOps-managed cron — schedule definitions stored in Git, deployed via CI/CD, with full audit history.