```
# .env
OPENMETEO_API_KEY=...
```
```typescript
// src/lib/uv-api.ts (excerpt)
const omHost = process.env.OPENMETEO_HOST;
const apiKey = !omHost ? process.env.OPENMETEO_API_KEY : undefined;
const CAMS_BASE = omHost
  ? `${omHost}/v1/air-quality`
  : apiKey
    ? 'https://customer-air-quality-api.open-meteo.com/v1/air-quality'
    : 'https://air-quality-api.open-meteo.com/v1/air-quality';
```
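Downstream, the resolved base gets combined with query parameters. A minimal sketch of that step — `buildCamsUrl` is a hypothetical helper, not part of the actual file, though the query parameter names match the public Open-Meteo air-quality API:

```typescript
// Sketch: combine a CAMS base URL with coordinates, requested variables,
// and an optional API key. buildCamsUrl is a hypothetical helper.
function buildCamsUrl(
  base: string,
  lat: number,
  lon: number,
  apiKey?: string,
): string {
  const url = new URL(base);
  url.searchParams.set('latitude', String(lat));
  url.searchParams.set('longitude', String(lon));
  url.searchParams.set('hourly', 'uv_index,uv_index_clear_sky');
  if (apiKey) url.searchParams.set('apikey', apiKey); // only for the customer host
  return url.toString();
}
```

The same function works against all three bases, since the local container speaks the same query syntax as the public API.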
```yaml
services:
  open-meteo:
    image: ghcr.io/open-meteo/open-meteo
    volumes: [open-meteo-data:/app/data]
    expose: ["8080"]
    command: ["serve"]
  sync-dwd:
    image: ghcr.io/open-meteo/open-meteo
    volumes: [open-meteo-data:/app/data]
    command:
      - sync
      - dwd_icon,dwd_icon_eu,dwd_icon_d2
      - temperature_2m,relative_humidity_2m,weather_code,cloud_cover,precipitation,shortwave_radiation
      - --past-days=3
      - --repeat-interval=5
  sync-cams:
    image: ghcr.io/open-meteo/open-meteo
    volumes: [open-meteo-data:/app/data]
    command:
      - sync
      - cams_global,cams_europe
      - uv_index,uv_index_clear_sky,pm10,pm2_5,ozone,alder_pollen,birch_pollen,grass_pollen,ragweed_pollen
      - --past-days=3
      - --repeat-interval=5

volumes:
  open-meteo-data:
```
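With the container on the compose network, app code can prefer the local host the same way the CAMS base is chosen. A sketch under those assumptions — `forecastBase` is a hypothetical helper mirroring the `CAMS_BASE` logic, and the customer host is the one named later in the latency comparison:

```typescript
// Sketch: pick the forecast base URL, preferring a self-hosted instance.
// forecastBase is a hypothetical helper; on the compose network the
// container is reachable as http://open-meteo:8080.
function forecastBase(omHost?: string, apiKey?: string): string {
  if (omHost) return `${omHost}/v1/forecast`; // self-hosted container
  return apiKey
    ? 'https://customer-api.open-meteo.com/v1/forecast' // paid customer host
    : 'https://api.open-meteo.com/v1/forecast'; // public host
}
```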
- The forecast API for temperature, weather code, humidity, etc.
- The CAMS air quality API for UV index, AQI components, and pollen.
- Meta-ExternalAgent
- Crawler load is wasteful spend. Even if the limit isn't enforced, paying for traffic that produces no revenue (and no useful index entry on most platforms) is irritating.
- Latency. Every page render fans out to two API hosts in another datacenter. We were measuring p50 around 180–220 ms per upstream call from our box. CAMS pollen + forecast = two of those, mostly serial.
- serve — runs the API on port 8080. Same query syntax as the public API.
- sync <model> <variables> — pulls the latest model run from the upstream provider (DWD, NOAA, ECMWF, MET Norway, …) and writes it to a shared volume.
- DWD ICON (11 km global, 7 km EU, 2 km Central EU)
- NCEP (GFS 13 km global, HRRR 3 km CONUS)
- ECMWF IFS 25 km — long-range
- MET Norway / UKMO / BOM / CMC for regional accuracy
- CAMS global + Europe — UV, AQI, pollen
- A one-off copernicus_dem90 sync for elevation data (~10 GB, runs once)
- Disk used by Open-Meteo data: ~50 GB and stable. The DEM is the largest one-time cost (~10 GB). The rolling weather data stays bounded by --past-days.
- open-meteo serve container at steady state: ~1.1 GiB RAM, ~4 % of one core. Model files are mmapped, so the kernel page cache does most of the work — it's why free -h shows ~13 GiB sitting in buff/cache.
- Sync workers: burst CPU when a new model run lands (every 1–6 hours depending on model), idle the rest of the time.
- Initial sync: 1–2 hours for the first run. This is the only painful step.
- Public API (api.open-meteo.com): ~100–110 ms
- Customer API (customer-api.open-meteo.com): comparable, slightly more consistent
- Local container (http://open-meteo:8080): ~10 ms
- Geocoding (geocoding-api.open-meteo.com). It's a separate service with its own dataset; we kept it on the public API and put a 1-hour LRU cache in front of it.
- Historical / climate / ensemble APIs. We don't use them. If you do, note that the Standard plan also doesn't include them — that's a Professional-tier thing.
- Marine / flood APIs. Same — out of scope for us.
- Pick variables deliberately. Each sync command takes an explicit list of variables. Adding a variable later means re-syncing — don't be too minimal at first.
- Disk growth is mostly the DEM. The rolling weather data stays small if --past-days is small. Set this honestly — we use 3.
- There is no built-in API key / rate limit on the local instance. It binds to a private Docker network in our case; if you expose it to the internet, put a reverse proxy with auth or a rate limiter in front.
- Crawlers will still hit your app. Self-hosting Open-Meteo doesn't solve the crawler problem — it just stops the crawler problem from cascading into a third-party rate-limit problem.
- Attribution still applies. Open-Meteo's data is CC BY 4.0; you keep crediting the underlying data sources (DWD, NOAA, ECMWF, CAMS, …) regardless of how you host it.
- Capacity: effectively unbounded for our scale. We can let crawlers through if we ever change our mind without watching a meter.
- Latency: ~10 ms per upstream call, twice per render.
- Cost: one VPS that we already had, instead of a per-domain subscription.
- Operational risk: lower than expected. The image is one container, the syncs are independent, and a failed sync just means stale-but-still-served data for that model.
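The two upstream calls per render noted earlier don't have to stay serial, either. A small sketch of issuing them concurrently — `fetchBoth` is a hypothetical wrapper; pass it any two promise-returning loaders (e.g. fetch calls against the local container):

```typescript
// Sketch: load forecast and CAMS data concurrently instead of serially.
// fetchBoth is a hypothetical wrapper around Promise.all.
async function fetchBoth<A, B>(
  loadForecast: () => Promise<A>,
  loadCams: () => Promise<B>,
): Promise<[A, B]> {
  // Both loaders start before either is awaited, so total latency is
  // max(forecast, cams) rather than forecast + cams.
  return Promise.all([loadForecast(), loadCams()]);
}
```

At ~10 ms per local call this matters less than it did against the public hosts, but it keeps renders flat if one upstream is slow.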