crontab -e   # Edit your crontab
crontab -l   # List current crontab
crontab -r   # Remove your crontab (careful!)
┌───────────── minute (0–59)
│ ┌─────────── hour (0–23)
│ │ ┌───────── day of month (1–31)
│ │ │ ┌─────── month (1–12 or JAN–DEC)
│ │ │ │ ┌───── day of week (0–6, Sun=0 or 7, or SUN–SAT)
│ │ │ │ │
* * * * * command to execute
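As a quick sanity check, the five-field format above can be validated programmatically. Here is a minimal Python sketch (a hypothetical helper, not a full parser: it handles numbers, lists, ranges, and steps, but not month/day names or @-shortcuts):

```python
# Minimal validator for a 5-field cron expression (sketch only).
FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]  # min, hour, dom, month, dow

def is_valid_cron(expr: str) -> bool:
    fields = expr.split()
    if len(fields) != 5:
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        for part in field.split(","):
            # Strip a step suffix like */5 or 1-10/2
            if "/" in part:
                part, step = part.split("/", 1)
                if not (step.isdigit() and int(step) > 0):
                    return False
            if part == "*":
                continue
            bounds = part.split("-")
            if len(bounds) > 2 or not all(b.isdigit() for b in bounds):
                return False
            if not all(lo <= int(b) <= hi for b in bounds):
                return False
    return True

print(is_valid_cron("0 3 * * *"))    # True
print(is_valid_cron("0 25 * * *"))   # False (hour out of range)
```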
# Every day at 3 AM, dump the DB and gzip it
0 3 * * * pg_dump myapp_production | gzip > /backups/db_$(date +\%Y\%m\%d).sql.gz
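One subtlety in that entry: cron treats an unescaped `%` as a newline and sends everything after the first `%` to the command's stdin, which is why the date format is written `\%Y\%m\%d`. In a normal shell, no escaping is needed:

```shell
# In an interactive shell, % needs no escaping; in a crontab entry it must
# be written as \% or cron will split the line at the first %.
date +%Y%m%d   # prints today's date, e.g. 20240115
```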
# Delete temp files older than 7 days every Sunday at midnight
0 0 * * 0 find /tmp/uploads -mtime +7 -delete

# Delete application logs older than 30 days
0 2 * * * find /var/log/myapp -name "*.log" -mtime +30 -delete
# Weekly digest every Monday at 8 AM
0 8 * * 1 /usr/bin/python3 /opt/myapp/scripts/send_weekly_digest.py
# Pre-warm the cache before peak traffic hours
45 7 * * 1-5 curl -s https://mysite.com/warm-cache > /dev/null
# Pull exchange rates every hour
0 * * * * /opt/scripts/sync_exchange_rates.sh >> /var/log/exchange_sync.log 2>&1
# Certbot auto-renewal check twice a day (recommended)
0 */12 * * * certbot renew --quiet
# Run VACUUM ANALYZE on PostgreSQL every weekend
0 1 * * 6 psql -U postgres -c "VACUUM ANALYZE;" myapp_production
# BAD — will likely fail
* * * * * python3 myscript.py

# GOOD — explicit path, full environment
* * * * * /usr/bin/python3 /home/deploy/myapp/myscript.py

# BETTER — set env vars at the top of your crontab
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DATABASE_URL=postgres://user:pass@localhost/mydb
0 3 * * * /usr/bin/python3 /home/deploy/myapp/backup.py
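To see why the BAD entry fails, you can approximate cron's stripped-down environment with `env -i`, which runs a command with only the variables you pass explicitly. If your script survives this, it will usually survive cron:

```shell
# Simulate cron's minimal environment: env -i clears all variables
# except the ones given on the command line.
env -i HOME="$HOME" PATH=/usr/bin:/bin /bin/sh -c 'echo "PATH is $PATH"'
# prints: PATH is /usr/bin:/bin
```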
#!/bin/bash
source /home/deploy/.env
# rest of script...
# Redirect both stdout and stderr to a log file
0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1

# Discard all output (only do this if you have other monitoring!)
0 3 * * * /opt/scripts/noisy-but-reliable.sh > /dev/null 2>&1

# Separate logs for stdout and stderr
0 3 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>>/var/log/backup_errors.log
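The `2>&1` idiom is easy to test outside cron. This sketch writes to both streams and confirms they land in the same file (note the order: `> file 2>&1`, not `2>&1 > file`):

```shell
# Write one line to stdout and one to stderr, redirecting both to a file.
sh -c 'echo "to stdout"; echo "to stderr" >&2' > /tmp/both.log 2>&1
cat /tmp/both.log   # both lines appear in the file
```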
MAILTO="ops-team@yourcompany.com"
# Only one instance runs at a time; others exit immediately
* * * * * flock -n /tmp/myjob.lock /opt/scripts/myjob.sh
# Wait up to 30 seconds for the lock, then give up
* * * * * flock -w 30 /tmp/myjob.lock /opt/scripts/myjob.sh
import fcntl
import sys

lock_file = open('/tmp/myjob.lock', 'w')
try:
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    print("Another instance is running. Exiting.")
    sys.exit(0)

# ... your job logic here
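You can verify the locking pattern in isolation: open two handles to the same lock file and confirm the second non-blocking attempt is refused (flock locks are tied to the open file description, so this check works even within a single process):

```python
import fcntl

# First holder acquires the lock without blocking.
f1 = open("/tmp/demo.lock", "w")
fcntl.flock(f1, fcntl.LOCK_EX | fcntl.LOCK_NB)

# Second handle to the same file: the non-blocking attempt is refused.
f2 = open("/tmp/demo.lock", "w")
try:
    fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
    print("unexpected: lock acquired twice")
except OSError:
    print("second attempt refused, as intended")
```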
# Some cron implementations (e.g. cronie, used on RHEL/Fedora) support CRON_TZ
CRON_TZ="America/New_York"
0 9 * * 1-5 /opt/scripts/market_open_alert.sh
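CRON_TZ (and its absence) matters most around DST. A quick check with Python's `zoneinfo` shows that 09:00 in America/New_York maps to different UTC instants in winter and summer, so a job pinned to server UTC would drift by an hour:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")
# The same local wall-clock time, in winter (EST, UTC-5) and summer (EDT, UTC-4).
winter = datetime(2024, 1, 15, 9, 0, tzinfo=ny).astimezone(timezone.utc)
summer = datetime(2024, 7, 15, 9, 0, tzinfo=ny).astimezone(timezone.utc)
print(winter.hour)  # 14 (UTC) during EST
print(summer.hour)  # 13 (UTC) during EDT
```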
0 3 * * * /opt/scripts/backup.sh && curl -fsS --retry 3 https://hc-ping.com/YOUR-UUID > /dev/null
#!/bin/bash
START=$(date +%s)
/opt/scripts/backup.sh
EXIT_CODE=$?
END=$(date +%s)
DURATION=$((END - START))

if [ $EXIT_CODE -ne 0 ]; then
  curl -X POST https://hooks.slack.com/services/YOUR/WEBHOOK \
    -d "{\"text\": \"🚨 Backup job failed! Exit: $EXIT_CODE, Duration: ${DURATION}s\"}"
fi
# /etc/systemd/system/backup.timer
[Unit]
Description=Run backup daily at 3AM

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
# /etc/systemd/system/backup.service
[Unit]
Description=Database Backup

[Service]
Type=oneshot
ExecStart=/opt/scripts/backup.sh
systemctl enable --now backup.timer
systemctl list-timers   # See all timers and next fire time
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 3 * * *"
  concurrencyPolicy: Forbid  # Prevents overlapping runs
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: myapp/backup:latest
              env:
                - name: DATABASE_URL
                  valueFrom:
                    secretKeyRef:
                      name: db-secret
                      key: url
# Edit crontab for current user
crontab -e

# Edit crontab for another user (as root)
crontab -u www-data -e

# List all cron jobs for all users (as root)
for user in $(cut -f1 -d: /etc/passwd); do
  crontab -u $user -l 2>/dev/null | grep -v '^#' | sed "s/^/$user: /"
done

# Check system-wide cron jobs
ls /etc/cron.d/ /etc/cron.daily/ /etc/cron.weekly/ /etc/cron.monthly/

# Check cron logs (Ubuntu/Debian)
grep CRON /var/log/syslog | tail -50

# Check cron logs (systemd)
journalctl -u cron --since "1 hour ago"
- python3 isn't in cron's PATH
- Environment variables like DATABASE_URL aren't set
- Working directory assumptions are wrong

- Never run cron jobs as root unless absolutely necessary. Use dedicated service accounts.
- Audit world-writable directories — if a script lives in /tmp, an attacker could swap it.
- Validate inputs in scripts that process external data. A cron job silently downloading and executing something is a security nightmare.
- Review /etc/cron.d/, /etc/cron.daily/, /etc/cron.weekly/ — these are often forgotten and accumulate stale, potentially vulnerable scripts.
- Use environment variable secrets carefully. Avoid hardcoding credentials; use a secrets manager or .env file with restricted permissions (chmod 600).

- [ ] Is crond running? (systemctl status cron)
- [ ] Does the crontab have a trailing newline? (Some implementations require it)
- [ ] Are you using absolute paths?
- [ ] Is the script executable? (chmod +x /path/to/script.sh)
- [ ] Does the script work when run manually as the cron user? (sudo -u cronuser /path/to/script.sh)
- [ ] Are environment variables available?
- [ ] Is output being silently discarded? Add >> /tmp/cron_debug.log 2>&1 temporarily.
- [ ] Check /var/log/syslog or journalctl -u cron for cron daemon logs.
- [ ] Is there a lock file preventing the job from starting?
- [ ] Timezone mismatch — is the job running at unexpected times?

- Serverless schedulers (AWS EventBridge, Google Cloud Scheduler, Azure Logic Apps) that eliminate the need for a persistent server entirely.
- Workflow orchestrators like Temporal and Prefect that handle retries, observability, and complex dependencies.
- GitOps-managed cron — schedule definitions stored in Git, deployed via CI/CD, with full audit history.

- What happens if this runs twice at once?
- What happens if it fails silently for a week?
- What happens during the DST clock change?