Why My Spark Container Keeps Exiting — Docker PID 1 and the Daemon Trap (2025 Update)


Contents

- The Setup
- Attempt 1 — The Image That No Longer Exists
- Attempt 2 — Wrong Environment Variables
- Attempt 3 — The Real Trap: start-master.sh
- The Core Rule: Docker Containers Live and Die with PID 1
- Why Kafka and ZooKeeper Didn't Have This Problem
- Four Ways to Fix It
  - Fix A: tail -f /dev/null
  - Fix B: Run the Spark Master Class Directly
  - Fix C: Custom Entrypoint Script
  - Fix D: Use a Docker-Friendly Image
- Summary
I spent an embarrassing amount of time staring at my terminal, watching Spark containers start and immediately die. Three different attempts, three different failure modes, all in the same afternoon. If you're setting up Spark inside Docker and your container just... vanishes, this post is for you.

The Setup

I'm building a CMS Medicare streaming pipeline — pulling hospital charge data from the CMS public API, pushing it through Kafka, processing it with Spark Structured Streaming, and landing the results in Snowflake. The whole stack runs in Docker Compose. Kafka and ZooKeeper came up without a hitch. Spark did not.

Here's what my docker-compose.yml looked like at the start:

```yaml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    depends_on: [zookeeper]
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  spark:
    image: bitnami/spark:3.5
    depends_on: [kafka]
    environment:
      SPARK_MODE: master
  spark-worker:
    image: bitnami/spark:3.5
    depends_on: [spark]
    environment:
      SPARK_MODE: worker
      SPARK_MASTER_URL: spark://spark:7077
```

Looked reasonable enough. It wasn't.

Attempt 1 — The Image That No Longer Exists

bitnami/spark:3.5 had been removed from Docker Hub:

```
Error response from daemon: failed to resolve reference "docker.io/bitnami/spark:3.5": not found
```

I tried 3.5.3. Gone. Tried bitnami/spark:3. Also gone. The entire Bitnami Spark image line had been removed with no notice.

This is the first thing worth remembering before we even get to the real problem: third-party images on Docker Hub can disappear at any time. There is no deprecation warning, no migration guide. For anything that needs to be reproducible, you either pin to a verified digest or mirror the image in a private registry.

Attempt 2 — Wrong Environment Variables

I switched to the Apache official image: apache/spark:3.5.1-python3. That one pulled fine. I updated the image name but kept the same environment variable:

```yaml
spark:
  image: apache/spark:3.5.1-python3
  environment:
    SPARK_MODE: master
```

docker-compose up -d reported all containers as "Started." But docker ps only showed two running — Kafka and ZooKeeper. The Spark containers had already exited.

The problem: SPARK_MODE is a Bitnami-specific environment variable. The Apache official image has never heard of it. Bitnami's image ships with a custom entrypoint script that reads SPARK_MODE and decides whether to launch a master or worker. It's a convenience layer Bitnami built on top of vanilla Spark. The Apache official image has none of this. Its default entrypoint (/opt/entrypoint.sh) simply executes whatever command you pass in.
If you don't pass a meaningful command, it finishes and exits.

The lesson: switching between images from different publishers is not just swapping the image: field. Different publishers package the same software with different entrypoints, different environment variables, and different directory layouts. Before you can use an image correctly, you need to understand how that specific image expects to be started.

Attempt 3 — The Real Trap: start-master.sh

Spark comes bundled with start-master.sh. That seems like the right tool:

```yaml
spark:
  image: apache/spark:3.5.1-python3
  command: /opt/spark/sbin/start-master.sh
```

Same result. "Started." No Spark container. The container was starting. Spark Master was launching. And then everything was shutting down within a fraction of a second.

The Core Rule: Docker Containers Live and Die with PID 1

To understand why, you need to know one foundational Docker rule. Every container has a main process — specified by CMD, ENTRYPOINT, or command in your Compose file. Inside the container, this process gets PID 1.

```
PID 1 is running → container is running
PID 1 exits     → container exits immediately
```

When PID 1 exits, the container exits. No exceptions. Now look at what start-master.sh actually does internally (simplified):

```bash
#!/bin/bash
nohup java -cp $SPARK_CLASSPATH org.apache.spark.deploy.master.Master &
echo "Master started."
exit 0
```

See that &? It puts the Spark Master process into the background. The shell script (PID 1) spawns a child Java process, prints a message, and calls exit 0. The moment it does that, Docker kills the container and everything inside it — including the Spark Master that just started.

Here's the exact timeline:

```
t=0.0s  Container starts; PID 1 = start-master.sh (bash)
t=0.1s  Bash forks a Java process (Spark Master) into the background
t=0.2s  Bash script reaches exit 0 → PID 1 terminates
t=0.2s  Docker detects PID 1 exit → tears down the container
t=0.2s  The background Java process is killed along with it
```

Spark Master was alive for about 0.2 seconds.

start-master.sh was written for bare-metal servers and VMs, where you start a background daemon and the OS keeps it alive after the startup script exits. Docker doesn't work that way. Docker is watching PID 1 and only PID 1.

Why Kafka and ZooKeeper Didn't Have This Problem

Confluent's images use exec in their entrypoints:

```bash
exec kafka-server-start /etc/kafka/server.properties
```

In bash, exec replaces the current process with the specified command. The shell doesn't fork a child — it becomes Kafka. Kafka inherits PID 1, runs in the foreground, and blocks indefinitely. The entire difference: & versus exec.

Four Ways to Fix It

Fix A: tail -f /dev/null

```yaml
spark:
  image: apache/spark:3.5.1-python3
  command: ["tail", "-f", "/dev/null"]
  volumes:
    - ./spark-apps:/opt/spark-apps
```

tail -f /dev/null watches a file that never gets new content. PID 1 blocks forever.
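You can watch tail refuse to exit in any shell. This sketch wraps it in timeout (GNU coreutils) purely so the demo returns; status 124 is timeout's way of saying the command was still running and had to be killed:

```shell
# tail -f /dev/null blocks forever waiting on a file that never grows.
# 'timeout' is used here only so the demo returns control to us; exit
# status 124 means tail was still running after one second and got killed.
status=0
timeout 1 tail -f /dev/null || status=$?
echo "tail exit status: $status"
```

As PID 1 inside a container, that same never-returning behavior is exactly what keeps the container alive.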
Submit jobs via docker exec:

```bash
docker exec my-spark-container \
  /opt/spark/bin/spark-submit \
  /opt/spark-apps/my_job.py
```

Best for: local development, one-off job submission.

Fix B: Run the Spark Master Class Directly

Skips the wrapper script entirely. The Master process runs in the foreground as PID 1:

```yaml
command: >
  bash -c "
  /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master
  --host spark --port 7077 --webui-port 8080
  "
```

Best for: when you actually need a running Master/Worker cluster.

Fix C: Custom Entrypoint Script

Master auto-starts, the container stays alive, and you get log output via docker logs:

```bash
#!/bin/bash
# custom-entrypoint.sh
/opt/spark/sbin/start-master.sh   # starts the daemon in the background
tail -f /opt/spark/logs/*         # blocks + streams logs to stdout
```

```yaml
volumes:
  - ./custom-entrypoint.sh:/opt/custom-entrypoint.sh
command: bash /opt/custom-entrypoint.sh
```

Best for: when you want Spark to auto-start and want logs accessible.

Fix D: Use a Docker-Friendly Image

jupyter/pyspark-notebook handles all of this correctly out of the box. Their entrypoints are built around exec from the start.

Best for: quick prototyping. Tradeoff: you depend on a third party to keep the image available.

Summary

- Docker containers exit when PID 1 exits. Always.
- start-master.sh backgrounds Spark with & and exits — which kills the container.
- Confluent's images use exec, making the service itself PID 1 and keeping the container alive.
- The fix: ensure PID 1 is a foreground process that never returns.

Three patterns to spot in any startup script:

- command & — background execution, PID 1 exits shortly after → container dies
- exec command — replaces PID 1, container lives as long as the process does → container survives
- nohup command & — classic daemon pattern, same problem as & in Docker → container dies

Docker containers are not VMs. On a VM, daemonizing a process and exiting the startup script is completely normal. In Docker, the startup script is the container. Once you internalize that, most "why does my container keep exiting" questions answer themselves.
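For completeness, here is a sketch of Fix B applied to both services, so the whole cluster runs foreground JVMs as PID 1. This is an untested assumption-laden fragment: it assumes the worker class org.apache.spark.deploy.worker.Worker accepts the master URL as its argument, and the hostnames and ports should be adjusted to your setup:

```yaml
# Sketch only: both services invoke spark-class directly, so the JVM
# stays in the foreground as PID 1 and the containers do not exit.
services:
  spark:
    image: apache/spark:3.5.1-python3
    command: >
      bash -c "
      /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master
      --host spark --port 7077 --webui-port 8080
      "
    ports:
      - "8080:8080"
  spark-worker:
    image: apache/spark:3.5.1-python3
    depends_on: [spark]
    command: >
      bash -c "
      /opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker
      spark://spark:7077
      "
```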

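The patterns above are easy to observe outside Docker. This bash sketch shows that exec keeps the same PID while & forks a child with a new one:

```shell
# Run a bash that prints its own PID, then exec's an sh that prints its
# PID. exec replaces the process image, so the two PIDs are identical.
pids=$(bash -c 'echo $$; exec sh -c "echo \$\$"')
before=$(echo "$pids" | head -n 1)
after=$(echo "$pids" | tail -n 1)
echo "pid before exec: $before"
echo "pid after exec:  $after"

# By contrast, '&' forks: the backgrounded child's PID differs from the
# parent's, and the parent is free to exit without it.
parent_and_child=$(bash -c 'echo $$; sleep 0.01 & echo $!; wait')
parent=$(echo "$parent_and_child" | head -n 1)
child=$(echo "$parent_and_child" | tail -n 1)
echo "parent: $parent, backgrounded child: $child"
```

Inside a container, the "before exec" process is your entrypoint shell and the "after exec" process is your service, which is why exec-based entrypoints keep PID 1 alive.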