Tools: Running Sonatype Nexus 3 on a 1 Gi RAM VPS — A Practical Guide (2026)

TL;DR — I self-host a private Docker registry and artifact store using Sonatype Nexus 3 on a VPS with only 1 Gi of RAM and 25 Gi of SSD. This post covers every decision made, every pitfall hit, and every config line tuned to make it work reliably in production.

## Why Self-Host a Registry?

Every time I push an image to Docker Hub on a free plan, I think about rate limits, retention policies, and the slow creep of vendor lock-in. For personal and small-team projects, a self-hosted registry gives you:

- No rate limits on pulls (critical in CI/CD pipelines)
- Private images without paying for a cloud registry
- A single artifact store for Docker images, npm packages, Maven artifacts, and more — all under one roof
- Full control over retention and access

The catch: Sonatype Nexus 3 is a Java application. It was built for enterprise servers, not budget VPS instances. The official documentation recommends a minimum of 8 Gi of RAM. Running it on 1 Gi is a constraint-driven engineering problem — and those are my favourite kind.

## The Setup

### Infrastructure

One VPS (1 vCPU, 1 Gi RAM, 25 Gi SSD) running Ubuntu 22.04 LTS, with Nexus 3.90.2-alpine, an external PostgreSQL on the same host, and Traefik v3 with automatic TLS. The full component table is at the end of the post.

### Architecture

Traefik handles all TLS termination. Nexus never sees raw HTTPS traffic — it only speaks plain HTTP internally, which simplifies configuration considerably.

## The Memory Problem

Here is the honest picture of my RAM budget before Nexus even starts: of 957 Mi total, the OS and kernel take ~150 Mi, the Docker daemon ~50 Mi, Traefik ~30 Mi, and PostgreSQL ~80 Mi, leaving roughly 647 Mi for Nexus (the full breakdown is at the end of the post).

### JVM Tuning

Nexus is a JVM application. Its memory footprint has two main components:

- Heap (`-Xmx`) — object allocations, managed by the garbage collector
- Direct memory (`-XX:MaxDirectMemorySize`) — off-heap buffers, used heavily for I/O

The default Sonatype recommendation is `-Xmx2703m`. On my VPS, that would require 4× the available RAM. The JVM would immediately start swapping and the container would be OOM-killed within minutes. The solution is aggressive but careful downsizing; the full flag set is listed at the end of the post.

The key insight is separating `-Xms` from `-Xmx`. Starting at 128m means the JVM consumes minimal RAM on boot and only grows the heap as Nexus actually needs it. On a quiet personal registry, it rarely needs to grow much past 250–300m in practice.

Total JVM footprint: ~680m. Docker `mem_limit` is set to 700m, leaving a 20m buffer for JVM overhead variability.

### The Swap Imperative

Never run a JVM application on a machine with no swap. Swap on SSD is not a performance strategy — it is a safety net. Without it, the Linux OOM killer will terminate your container the instant it crosses the memory limit, with no warning and no graceful shutdown.
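Since the whole strategy depends on swap actually being configured, a small preflight check before starting the container is cheap insurance. A minimal sketch, not part of the original setup — the `check_memory` helper, the 1 GiB floor, and the swappiness warning are my own choices:

```shell
#!/bin/sh
# Preflight: refuse to start Nexus on a host with too little swap, and warn
# if vm.swappiness is higher than the value this setup is tuned for.
check_memory() {
  swap_kb=$1      # total swap in kB, as reported by /proc/meminfo SwapTotal
  swappiness=$2   # current vm.swappiness value
  if [ "$swap_kb" -lt 1048576 ]; then   # 1048576 kB = 1 GiB
    echo "FAIL: less than 1 GiB of swap configured"
    return 1
  fi
  if [ "$swappiness" -gt 10 ]; then
    echo "WARN: vm.swappiness=$swappiness (this setup assumes 10)"
  fi
  echo "OK"
}

# On a live host, feed it real values:
# check_memory "$(awk '/SwapTotal/ {print $2}' /proc/meminfo)" \
#              "$(cat /proc/sys/vm/swappiness)"
check_memory 2097152 10   # 2 GiB swapfile, swappiness 10 -> prints "OK"
```

Wiring this into the deploy script (or a `systemd` unit's `ExecStartPre`) turns a silent OOM time bomb into a loud failure at deploy time.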
Nexus does not handle abrupt kills well; you risk database corruption or a blob store in an inconsistent state. With `vm.swappiness=10`, Linux strongly prefers to keep data in RAM and only spills to the swapfile under real pressure. The SSD takes a minor hit, but your service survives spikes. The swapfile setup commands are at the end of the post.

After adding swap, my memory picture looked like the `free -h` output reproduced at the end of the post. The buff/cache column (671Mi) looks alarming but is not — Linux uses free RAM as disk cache, and that cache is immediately evictable the moment Nexus needs it. The available column (594Mi) is the number that matters.

## The compose.yml

The full file is reproduced at the end of the post.

### Things worth calling out

- `user: "200:200"` — The Nexus image internally runs as UID 200. Explicitly setting this prevents accidental root execution. The `./data` directory must be pre-owned: `sudo chown -R 200:200 ./data`.
- `start_period: 180s` — Without this, Docker marks the container unhealthy before it has finished booting, which can trigger restart loops. On constrained hardware, Nexus takes 2–3 minutes to start.
- `maxRequestBodyBytes=0` — The single most important Traefik label for a Docker registry. Without it, pushing any image layer larger than Traefik's default body size limit (2m) will fail with a cryptic 413 error.
- `-Dnexus-ssl-proxy=true` — Tells Nexus it is behind a TLS-terminating proxy. Without this, the UI generates incorrect `http://` URLs in some contexts.

## PostgreSQL vs Embedded Database

Nexus 3 supports two database backends: an embedded H2 database and external PostgreSQL. I chose PostgreSQL for several reasons:

- Reliability — H2 is fine for development, but I've seen it corrupt under abrupt JVM kills. PostgreSQL handles dirty shutdowns gracefully.
- Observability — I can query the database directly to inspect state, run backups, and monitor connection counts.
- Consistency — PostgreSQL is already running on my infrastructure for other services. One less moving part.

The tradeoff: there is no `admin.password` file on a PostgreSQL-backed install. The default credentials are simply `admin` / `admin123`, and Nexus forces a password change on first login.

## Configuring the Docker Registry

### Enable the Bearer Token Realm

After the first login: Administration → Security → Realms → move Docker Bearer Token Realm to Active. This is the single most common cause of 401 Unauthorized errors when using `docker login`. It must be enabled.
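When scripting the setup, it helps to verify the realm actually got activated. The check below parses the JSON list of active realm ids; a minimal sketch under two assumptions — that the `/service/rest/v1/security/realms/active` endpoint of the Nexus 3 REST API is available on your version, and that `DockerToken` is the realm id behind the UI's "Docker Bearer Token Realm" entry (verify both under Administration → System → API):

```shell
# Does the active-realms list include the Docker Bearer Token Realm?
# $1 is the JSON array returned by the realms endpoint, e.g.
# ["NexusAuthenticatingRealm","DockerToken"]
realm_active() {
  echo "$1" | grep -q '"DockerToken"'
}

# On a live server, feed it the API response instead of the sample:
# ACTIVE=$(curl -sf -u admin:yourpassword \
#   https://repository.bitnoises.com/service/rest/v1/security/realms/active)
ACTIVE='["NexusAuthenticatingRealm","DockerToken"]'   # sample response for the demo

if realm_active "$ACTIVE"; then
  echo "DockerToken realm active - docker login should work"
else
  echo "DockerToken realm missing - expect 401 on docker login"
fi
```

A check like this at the end of a provisioning script catches the 401 class of failures before the first CI pipeline ever runs.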
### Create the Repository

Administration → Repository → Repositories → Create repository → docker (hosted). Set the HTTP port to 5000 (this is what Traefik will route to), leave HTTPS unchecked (Traefik handles that), and set the deployment policy to your preference.

## CI/CD Integration

I created a dedicated `ci` user with a minimal role, following the principle of least privilege. The role has only the privileges needed to push and pull from the Docker repository — no admin access, no access to other repositories.

Using the commit SHA as the image tag instead of `latest` gives you full traceability — every deployment can be traced back to an exact commit. The full `.gitlab-ci.yml` is at the end of the post.

## Disk Management

The SSD is the other constraint. 25 Gi sounds like a lot until you start storing Docker images — a typical Node.js app image is 200–400 Mi, and you accumulate many versions fast.

### Cleanup Policies

In Administration → Repository → Cleanup Policies, I created a policy that removes:

- Components older than 30 days
- Components not downloaded in 14 days

Attached to the docker-hosted repository and scheduled as a weekly task, this keeps the blob store from growing unbounded.

### Monitoring

Day-to-day monitoring is deliberately simple: `docker stats` for the container's memory, plus a few disk-usage checks on the blob store and the host (all listed at the end of the post).

## Lessons Learned

**JVM memory is not just heap.** Many guides say "set `-Xmx` to half your RAM" without mentioning direct memory, metaspace, code cache, or JVM thread stacks. The real footprint is heap + direct + ~100m of JVM internals. Budget for all of it.

**Swap before anything else.** I almost deployed without swap because "SSD swap is slow." SSD swap at 10% swappiness is effectively never used during normal operation, but it has saved me from OOM kills more than once during Nexus startup, when memory pressure is highest.

**Traefik label order matters.** The `middlewares` label must reference middleware names defined in other labels in the same service. If you define `docker-headers` and `nexus-docker-buffering` but only reference one in the router label, the other silently does nothing.

**`start_period` is not optional for slow services.** Docker's healthcheck `start_period` is the grace period before failed checks count against retries. For a service that takes 2–3 minutes to boot, setting this to 30s means Docker will restart the container before it has even finished starting — creating an infinite restart loop that looks like a memory issue.

**PostgreSQL default credentials are not in the logs.** Coming from H2-backed Nexus, where an `admin.password` file is generated, this caught me off guard. The PostgreSQL-backed install simply uses `admin` / `admin123` with no file or log entry indicating this.

## The Result

A fully functional private Docker registry and artifact store, running reliably on hardware that costs a few euros a month. Memory usage in steady state: 58% memory utilisation at idle, with 42% headroom before the hard limit and 700m of swap available beyond that. For a personal infrastructure project, that is a comfortable margin.

The full configuration, scripts, and documentation are available on GitLab: gitlab.com/hanatole/nexus

Have questions or improvements? Open an issue on the repository.
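To make the commit-SHA tagging from the CI section concrete outside of GitLab, the same scheme can be derived locally. A sketch — `myapp` is a placeholder image name, and the fallback SHA exists only so the snippet runs outside a git checkout:

```shell
# Build the fully qualified image reference from the current commit.
REGISTRY="registry.bitnoises.com"
APP="myapp"   # placeholder: your image name

# Short commit SHA; fall back to a dummy value outside a git repo.
SHA=$(git rev-parse --short HEAD 2>/dev/null || echo "abc1234")

IMAGE="$REGISTRY/$APP:$SHA"
echo "Would build and push: $IMAGE"

# The real commands, once the tag is known:
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
```

In GitLab CI the same value arrives for free as `$CI_COMMIT_SHORT_SHA`, so the local script and the pipeline produce identical tags.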

## Appendix: configs, commands, and output

### The setup at a glance

| Component | Details |
| ------------- | ------------------------------------------------------------------ |
| VPS | 1 vCPU, 1 Gi RAM, 25 Gi SSD |
| OS | Ubuntu 22.04 LTS |
| Nexus version | 3.90.2-alpine |
| Database | PostgreSQL (external, on the same host) |
| Reverse proxy | Traefik v3 with automatic TLS |
| DNS | `repository.bitnoises.com` (UI), `registry.bitnoises.com` (Docker) |

### RAM budget before Nexus starts

```
Total RAM:            957 Mi
OS + kernel:         ~150 Mi
Docker daemon:        ~50 Mi
Traefik:              ~30 Mi
PostgreSQL:           ~80 Mi
─────────────────────────────
Available for Nexus: ~647 Mi
```

### JVM flags

```
-Xms128m                       # Start heap small, grow as needed
-Xmx384m                       # Hard heap ceiling
-XX:MaxDirectMemorySize=192m   # Off-heap buffer ceiling
-XX:+UseG1GC                   # Better GC under memory pressure
-XX:MaxGCPauseMillis=300       # GC pause target
-XX:G1HeapRegionSize=4m        # Smaller regions = less wasted space
-XX:+UseStringDeduplication    # G1 deduplicates identical strings (~5-10% heap savings)
-XX:SoftRefLRUPolicyMSPerMB=0  # Aggressively clear soft references under pressure
```

### Creating the swapfile

```bash
fallocate -l 2G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap sw 0 0' >> /etc/fstab

# Only use swap as a last resort
echo 'vm.swappiness=10' >> /etc/sysctl.conf
sysctl -p
```

### Memory after adding swap

```
$ free -h
       total   used   free   buff/cache   available
Mem:    957Mi  194Mi   91Mi       671Mi       594Mi
Swap:   2.0Gi  125Mi  1.9Gi
```

### compose.yml

```yaml
services:
  nexus:
    image: sonatype/nexus3:3.90.2-alpine
    container_name: nexus
    restart: unless-stopped
    user: "200:200"
    environment:
      INSTALL4J_ADD_VM_PARAMS: >-
        -Xms128m
        -Xmx384m
        -XX:MaxDirectMemorySize=192m
        -XX:+UseG1GC
        -XX:MaxGCPauseMillis=300
        -XX:G1HeapRegionSize=4m
        -XX:+UseStringDeduplication
        -XX:SoftRefLRUPolicyMSPerMB=0
        -Djava.util.prefs.userRoot=/nexus-data/javaprefs
        -Dnexus.datastore.enabled=true
        -Dnexus-ssl-proxy=true
      NEXUS_DATASTORE_NEXUS_JDBCURL: jdbc:postgresql://${DB_HOST}:5432/${DB_NAME}
      NEXUS_DATASTORE_NEXUS_USERNAME: ${DB_USER}
      NEXUS_DATASTORE_NEXUS_PASSWORD: ${DB_PASSWORD}
    volumes:
      - "./data:/nexus-data"
    mem_limit: 700m
    memswap_limit: 1400m  # 700m RAM + 700m swap headroom
    healthcheck:
      test: ["CMD-SHELL", "curl -sf http://localhost:8081/service/rest/v1/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 180s  # Nexus is slow to boot on constrained hardware
    networks:
      - traefiknetwork
      - infra
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefiknetwork"
      # UI
      - "traefik.http.routers.nexus-ui.rule=Host(`${NEXUS_HOST}`)"
      - "traefik.http.routers.nexus-ui.entrypoints=websecure"
      - "traefik.http.routers.nexus-ui.tls=true"
      - "traefik.http.routers.nexus-ui.tls.certresolver=letsencrypt"
      - "traefik.http.routers.nexus-ui.service=nexus-ui"
      - "traefik.http.services.nexus-ui.loadbalancer.server.port=8081"
      # Docker registry
      - "traefik.http.routers.nexus-docker.rule=Host(`${REGISTRY_HOST}`)"
      - "traefik.http.routers.nexus-docker.entrypoints=websecure"
      - "traefik.http.routers.nexus-docker.tls=true"
      - "traefik.http.routers.nexus-docker.tls.certresolver=letsencrypt"
      - "traefik.http.routers.nexus-docker.service=nexus-docker"
      - "traefik.http.services.nexus-docker.loadbalancer.server.port=5000"
      - "traefik.http.middlewares.docker-headers.headers.customrequestheaders.Docker-Distribution-Api-Version=registry/2.0"
      - "traefik.http.middlewares.nexus-docker-buffering.buffering.maxRequestBodyBytes=0"
      - "traefik.http.routers.nexus-docker.middlewares=docker-headers,nexus-docker-buffering"

networks:
  traefiknetwork:
    external: true
  infra:
    external: true
```

### .gitlab-ci.yml

```yaml
stages:
  - push

push-alpine-to-nexus:
  stage: push
  image: docker:29.2.1
  services:
    - docker:29.2.1-dind
  variables:
    DOCKER_TLS_CERTDIR: ""
  script:
    # Log in to the private registry
    - echo "$NEXUS_PASSWORD" | docker login registry.bitnoises.com \
        --username "$NEXUS_USER" \
        --password-stdin
    # Pull Alpine image from Docker Hub
    - docker pull alpine:latest
    # Tag the image for the private registry
    - docker tag alpine:latest registry.bitnoises.com/alpine:latest
    # Push the image to Nexus
    - docker push registry.bitnoises.com/alpine:latest
```

### Disk usage checks

```bash
# How much is Nexus actually using?
du -sh ./data/blobs/

# Docker layer cache on the host
docker system df

# Full picture
df -h
```

### Steady-state memory

```
$ docker stats nexus --no-stream
CONTAINER   CPU %   MEM USAGE / LIMIT   MEM %
nexus       0.3%    412MiB / 700MiB     58.9%
```
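The `docker stats` check above is easy to turn into a cron-able alert. A minimal sketch — the `mem_alert` helper and the 85% threshold are my own additions, not part of the original setup:

```shell
# Warn when the container is close to its mem_limit. $1 is a percentage as
# printed by docker stats (e.g. "58.9%"), $2 an integer threshold.
mem_alert() {
  pct=${1%\%}    # strip the trailing %
  pct=${pct%.*}  # drop the fractional part for the integer comparison
  if [ "$pct" -ge "$2" ]; then
    echo "ALERT: nexus at $1 of its memory limit"
  else
    echo "ok: $1"
  fi
}

# Live usage, e.g. from a cron job:
# mem_alert "$(docker stats nexus --no-stream --format '{{.MemPerc}}')" 85
mem_alert "58.9%" 85   # -> ok: 58.9%
```

Piping the ALERT line into `mail` or a webhook gives early warning well before the 700m limit (and the OOM killer) is reached.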