Caddy Docker Compose Reverse Proxy: Production HTTPS 2026
May 11, 2026
TL;DR
This tutorial walks you through running Caddy 2.11.2 as a reverse proxy in front of multiple Dockerised services, using only docker compose and a single Caddyfile. You will get automatic Let's Encrypt HTTPS, a reusable security-headers snippet, a graceful zero-downtime reload workflow, and an optional xcaddy build that adds the Cloudflare DNS plugin for wildcard certificates. The whole stack is one `docker compose up -d` away and runs on any Linux box with Docker Engine 24+.
What you'll learn
- How to set up Caddy as a Docker Compose reverse proxy that fronts multiple HTTP services
- How automatic HTTPS works in Docker, including the volumes you must persist
- Why mounting the Caddyfile as a single file breaks `caddy reload`, and the directory-mount fix
- How to define a reusable `(security-headers)` snippet and apply it to every site block
- How to perform a zero-downtime reload from the host with `docker compose exec`
- How to build a custom Caddy image with the Cloudflare DNS plugin for wildcard certificates
- How to harden the admin API on port 2019 in production
Prerequisites
| Tool | Version |
|---|---|
| Docker Engine | 24.0+ (Compose V2 plugin built in) |
| Docker Compose | v2.24+ (docker compose version) |
| Linux host with public IP | Ports 80, 443, and 443/udp reachable from the internet |
| A real domain | DNS A/AAAA records pointed at the host BEFORE you boot Caddy2 |
| Optional: Cloudflare account | API token with Zone.DNS:Edit for wildcard certs |
You do not need Go, xcaddy, or any Caddy binaries on the host. Everything happens inside containers built from the official caddy:2.11.2-alpine image3.
How Caddy automatic HTTPS works in Docker
When Caddy serves a hostname (not a bare port or IP), it automatically registers an ACME account, requests a certificate, completes the HTTP-01 challenge on port 80, switches you to HTTPS on 443, and renews the certificate before it expires2. In Docker, three things have to be true for that to actually work end-to-end:
- DNS resolves to your host. Caddy will not provision a certificate for a name whose A/AAAA record does not point at the box it's running on.
- Ports 80 and 443 are reachable from the internet. Both are required for the HTTP-01 challenge and the HTTPS service.
- The `/data` volume is persistent. This is where Caddy stores certificates, private keys, and OCSP staples. The official Docker Hub README is explicit: "The data directory must not be treated as a cache. Its contents are not ephemeral."3 Wipe it and Caddy re-issues every certificate from scratch — fast track to a Let's Encrypt rate limit.
That third point is the most common production foot-gun, and it's the reason this tutorial uses named volumes from step 1.
The minimum viable Caddy stack
Create a project directory and three files:
```shell
mkdir -p caddy-stack/caddy && cd caddy-stack
touch compose.yaml caddy/Caddyfile .env
```
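That gives you this layout (the `caddy/` subdirectory matters — we will mount the directory, not the file):

```
caddy-stack/
├── compose.yaml
├── .env
└── caddy/
    └── Caddyfile
```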
Step 1 — Write the Compose file
```yaml
# compose.yaml
services:
  caddy:
    image: caddy:2.11.2-alpine
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./caddy:/etc/caddy
      - caddy_data:/data
      - caddy_config:/config
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN:-}
    networks: [edge]

  api:
    image: traefik/whoami:latest
    restart: unless-stopped
    networks: [edge]

  app:
    image: traefik/whoami:latest
    restart: unless-stopped
    networks: [edge]

volumes:
  caddy_data:
  caddy_config:

networks:
  edge:
    driver: bridge
```
A few details worth pinning to memory:
- No `version:` key. The Compose Spec dropped the top-level `version:` field and recent `docker compose` prints a deprecation warning when you include it4. Start from `services:`.
- `cap_add: NET_ADMIN` lets `quic-go` raise its UDP buffer sizes for HTTP/3 without you having to bump kernel `sysctl` values on the host3.
- `./caddy:/etc/caddy` is a directory bind, not a file bind. That's deliberate (see the next section).
- `traefik/whoami` is a tiny Go server that prints HTTP request details as JSON5. We use two copies as stand-in backends so you can see traffic routing without having to bring your own apps.
Step 2 — Write the Caddyfile
```
# caddy/Caddyfile
{
	email you@example.com
}

(security-headers) {
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Content-Type-Options "nosniff"
		X-Frame-Options "SAMEORIGIN"
		Referrer-Policy "strict-origin-when-cross-origin"
		Permissions-Policy "camera=(), microphone=(), geolocation=()"
		-Server
		-X-Powered-By
	}
}

api.example.com {
	import security-headers
	reverse_proxy api:80
}

app.example.com {
	import security-headers
	reverse_proxy app:80
}
```
Replace you@example.com and the two hostnames with values you actually own. Caddy reads email from the global options block once and uses it for every ACME registration6. The admin API is left at its default — bound to localhost:2019 inside the container — because we will use caddy reload later, and that command talks to the admin API. We harden it in a dedicated section below.
Step 3 — Start the stack
```shell
docker compose up -d
docker compose logs -f caddy
```
Within a few seconds you should see Caddy provision certificates and start listening:
```
INFO  using config from file  {"file": "/etc/caddy/Caddyfile"}
INFO  autosaved config (load with --resume flag)
INFO  serving initial configuration
INFO  tls.obtain  acquiring lock  {"identifier": "api.example.com"}
INFO  tls.issuance.acme  waiting on internal rate limiter
INFO  tls.obtain  certificate obtained successfully  {"identifier": "api.example.com"}
```
Step 4 — Verify it works
```shell
curl -sSI https://api.example.com | head -n 5
# HTTP/2 200
# alt-svc: h3=":443"; ma=2592000
# strict-transport-security: max-age=31536000; includeSubDomains; preload
# x-content-type-options: nosniff
# x-frame-options: SAMEORIGIN

curl -s https://api.example.com/api | python3 -m json.tool
# {
#     "hostname": "abc123…",
#     "ip": ["172.18.0.3"],
#     "headers": {
#         "X-Forwarded-For": ["1.2.3.4"],
#         "X-Forwarded-Proto": ["https"]
#     },
#     "method": "GET",
#     "host": "api.example.com"
# }
```
X-Forwarded-Proto: https and X-Forwarded-For come for free — Caddy's reverse_proxy directive sets both by default7. The original Host header is also passed through unchanged, which is what most apps want behind a proxy.
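If an upstream needs different proxy headers — say it expects its own name in `Host`, or looks for the client IP in `X-Real-IP` — you can adjust them with `header_up`. A sketch using standard Caddy placeholders (whether you want either override depends entirely on the app):

```
api.example.com {
	reverse_proxy api:80 {
		# Send the upstream's own address as Host instead of passing the
		# client's Host through (some backends route on their own name)
		header_up Host {upstream_hostport}
		# Some apps read X-Real-IP rather than X-Forwarded-For
		header_up X-Real-IP {remote_host}
	}
}
```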
Why you should not mount the Caddyfile as a single file
Almost every blog tutorial gets this wrong. They write ./Caddyfile:/etc/caddy/Caddyfile and move on. The official Docker Hub README has a ⚠️ warning right at the top:
If vim or another editor is used that changes the inode of the edited file, the changes will only be applied within the container when the container is recreated.3
What's actually happening: when you save with Vim, VS Code, or most editors, the editor writes to a temp file and renames it on top of the original. The host inode changes, but the bind mount inside the container is pinned to the old inode, so Caddy never sees the new file. docker compose exec caddy caddy reload succeeds, but reads the unchanged contents.
The fix is to mount the parent directory (./caddy:/etc/caddy in the Compose file above) and edit caddy/Caddyfile from outside. The container watches the directory, not the file, so inode swaps are invisible.
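You can watch the inode swap happen without Docker at all — just `stat` and a rename (a Linux sketch; macOS `stat` takes different flags):

```shell
tmpdir=$(mktemp -d)
printf 'original\n' > "$tmpdir/Caddyfile"
inode_before=$(stat -c %i "$tmpdir/Caddyfile")

# Simulate an editor "safe write": write a temp file, rename it over the original
printf 'edited\n' > "$tmpdir/Caddyfile.tmp"
mv "$tmpdir/Caddyfile.tmp" "$tmpdir/Caddyfile"

inode_after=$(stat -c %i "$tmpdir/Caddyfile")
echo "before=$inode_before after=$inode_after"
```

The two inodes always differ. A single-file bind mount stays pinned to the first inode, so the container keeps reading "original" forever; a directory mount resolves the filename on every open and sees "edited" immediately.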
Reload without restart
```shell
docker compose exec -w /etc/caddy caddy caddy reload
```
This runs caddy reload in the same container. The reload is atomic: if the new Caddyfile parses but fails to apply, Caddy rolls back to the previous config without dropping a connection8. Your TLS sessions stay alive, in-flight requests finish on the old config, new requests pick up the new config.
If you started Caddy with the standard entrypoint (caddy run --config /etc/caddy/Caddyfile), you can also send SIGUSR1 to the main process — this is a 2.11.1+ feature that re-reads the file if it was originally loaded from one and has not since been changed through the admin API9:
```shell
docker compose kill -s SIGUSR1 caddy
```
Use caddy reload in scripts (it returns a non-zero exit code if the new config is invalid). Use SIGUSR1 if you have a config-file watcher process and need a signal-driven trigger.
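In a deploy script, that exit-code behaviour composes nicely: validate first, reload only on success. A sketch with `true`/`false` standing in for the real commands — on a live host the stand-ins would be something like `docker compose exec -w /etc/caddy caddy caddy validate` and the reload shown above:

```shell
safe_reload() {
  validate_cmd=$1
  reload_cmd=$2
  if $validate_cmd; then
    # Validation passed; the reload itself can still fail and roll back
    $reload_cmd && echo "reloaded"
  else
    echo "validation failed, old config stays live" >&2
    return 1
  fi
}

safe_reload true true     # prints "reloaded"
safe_reload false true    # refuses to reload, returns non-zero
```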
Add a reusable security headers snippet
The (security-headers) block in the Caddyfile above is what Caddy calls a snippet — a reusable chunk of configuration that you reference with import10. The snippet defines the response headers once and import security-headers inlines it into each site block. Add a third site, you only need to remember the import line.
The header values themselves are conservative defaults that work for most apps:
| Header | Value | Why |
|---|---|---|
| `Strict-Transport-Security` | `max-age=31536000; includeSubDomains; preload` | One-year HSTS, all subdomains, eligible for the browser preload list11 |
| `X-Content-Type-Options` | `nosniff` | Disables MIME sniffing |
| `X-Frame-Options` | `SAMEORIGIN` | Blocks clickjacking via `<iframe>` |
| `Referrer-Policy` | `strict-origin-when-cross-origin` | Strips path/query when leaving your origin |
| `Permissions-Policy` | `camera=(), microphone=(), geolocation=()` | Denies sensor access by default; opt in per site |
| `-Server` and `-X-Powered-By` | (removal) | Stops leaking server identity to attackers |
Notice the leading - to remove a header — Caddy parses -Header-Name as a deletion in a header block12. By default Caddy sends Server: Caddy in every response, and upstream apps often add their own X-Powered-By; both leak server identity to attackers, so the snippet strips them.
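Snippets can also take arguments, which makes them parameterisable per site. A hypothetical `cors` snippet as a sketch — `{args[0]}` is standard Caddyfile syntax for the first argument passed to `import`:

```
(cors) {
	header Access-Control-Allow-Origin "{args[0]}"
}

app.example.com {
	import security-headers
	import cors "https://app.example.com"
	reverse_proxy app:80
}
```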
Wildcard certs with the Cloudflare DNS plugin
The default Caddy binary cannot issue wildcard certificates (*.example.com) because that requires the ACME DNS-01 challenge, which in turn requires a DNS provider plugin. The Cloudflare module lives at github.com/caddy-dns/cloudflare13 and you install it by rebuilding Caddy with xcaddy. The simplest way to do that in Docker is the multi-stage builder pattern from the official Docker Hub README3:
```dockerfile
# caddy/Dockerfile
FROM caddy:2.11.2-builder-alpine AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:2.11.2-alpine

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
Update the Caddy service in compose.yaml to build from this Dockerfile instead of pulling the image:
```yaml
services:
  caddy:
    build:
      context: ./caddy
      dockerfile: Dockerfile
    restart: unless-stopped
    # …rest of the service unchanged
```
Generate a Cloudflare API token at My Profile → API Tokens → Create Token → Edit zone DNS (just Zone.DNS:Edit on the zone you control). Drop it into .env:
```
# .env
CLOUDFLARE_API_TOKEN=cf_token_value_here
```
Then update the Caddyfile to issue a wildcard:
```
*.example.com {
	import security-headers
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	@api host api.example.com
	handle @api {
		reverse_proxy api:80
	}

	@app host app.example.com
	handle @app {
		reverse_proxy app:80
	}
}
```
Run `docker compose up -d --build` and Caddy will request a wildcard from Let's Encrypt by writing a TXT record to your Cloudflare zone, removing it after verification, and storing the resulting *.example.com cert in caddy_data. You can confirm the plugin was compiled in with `docker compose exec caddy caddy list-modules | grep cloudflare`, which should print `dns.providers.cloudflare`. The whole flow is hands-off and re-runs automatically before expiry2.
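One subtlety with a single wildcard block: a request for a subdomain you did not match (say staging.example.com) still presents a valid certificate but falls through every `handle`. An explicit catch-all makes that behaviour deliberate — a sketch extending the block above:

```
*.example.com {
	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	@api host api.example.com
	handle @api {
		reverse_proxy api:80
	}

	# Any other *.example.com hostname gets an explicit 404
	handle {
		respond "unknown host" 404
	}
}
```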
Lock down the admin API
Caddy's admin API listens on localhost:2019 by default14. Inside a container, "localhost" is the container's loopback, which is unreachable from the host or other containers unless you publish port 2019 in compose.yaml. So you are not directly exposing the API to the internet by accident — but anyone who can docker exec into the container can run the full configuration API.
The Caddy team's documentation is direct: "If you are running untrusted code on your server, make sure you protect your admin endpoint by isolating processes, patching vulnerable programs, and configuring the endpoint to bind to a permissioned unix socket instead."14
Three production-friendly options, in order of paranoia:
- Bind the admin API to a Unix socket with `admin unix//run/caddy/admin.sock` in the global options block, then mount that socket only into containers that legitimately need to reload (a deploy sidecar, for example). Reloads still work — `caddy reload --address unix//run/caddy/admin.sock` — but no TCP listener is exposed.
- Disable the admin API entirely with `admin off`. Caddy still serves traffic but `caddy reload` stops working, so config changes require `docker compose restart caddy`. The restart is graceful (in-flight requests finish on the old config) but it's a multi-second blip.
- Keep the default `localhost:2019` listener but never publish port 2019 in `compose.yaml`. This is what the stack above does. The risk surface is anyone who can `docker exec` into the Caddy container — usually an acceptable trust boundary for a single-tenant host, but worth thinking about.
There is no universally correct answer; pick based on who has shell access to your Docker host and how often you legitimately need to reload.
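In Caddyfile terms, the Unix-socket posture is a single extra line in the global options block (a sketch; the socket path is arbitrary as long as its directory exists inside the container):

```
{
	email you@example.com
	# Admin API on a permissioned Unix socket; no TCP listener at all
	admin unix//run/caddy/admin.sock
}
```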
Active health checks for safer rollouts
When you have multiple replicas of a backend, Caddy's reverse_proxy directive can probe them and stop sending traffic to unhealthy pods:
```
api.example.com {
	import security-headers
	reverse_proxy api-v1:80 api-v2:80 {
		lb_policy least_conn
		health_uri /healthz
		health_interval 10s
		health_timeout 2s
		health_status 200
		fail_duration 30s
		max_fails 3
	}
}
```
The health_uri is hit every health_interval against each upstream; an upstream is considered down when it fails to return health_status within health_timeout15. The fail_duration / max_fails pair is passive: if a real request fails, Caddy remembers it for 30 s and rotates to a healthy peer after 3 such failures. Caddy's default load-balancing policy is random; least_conn is usually a better default in front of long-lived connections like WebSockets or SSE.
If you want to inspect upstream state at runtime, the admin API has a dedicated read-only endpoint. The Alpine-based Caddy image ships with BusyBox wget (no curl), so use that:
```shell
docker compose exec caddy wget -qO- http://localhost:2019/reverse_proxy/upstreams
# [{"address":"api-v1:80","num_requests":4,"fails":0},
#  {"address":"api-v2:80","num_requests":5,"fails":0}]
```
(With admin off set, the endpoint is gone with the rest of the API.)
Verification checklist
After docker compose up -d, confirm each of the following:
```shell
# 1. Caddy is listening on 80, 443, and 443/udp
docker compose ps caddy --format "table {{.Service}}\t{{.Ports}}"
# SERVICE   PORTS
# caddy     0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:443->443/udp

# 2. HTTP redirects to HTTPS
curl -sI http://api.example.com | grep -i location
# location: https://api.example.com/

# 3. The certificate is issued by Let's Encrypt, not Caddy's local CA
echo | openssl s_client -servername api.example.com -connect api.example.com:443 2>/dev/null \
  | openssl x509 -noout -issuer
# issuer=C = US, O = Let's Encrypt, CN = R11
# (CN will be one of R10–R14 for RSA leaf certs or E5–E9 for ECDSA)

# 4. Security headers are present
curl -sI https://api.example.com | grep -iE "(strict|frame|content-type-options|referrer)"

# 5. The data volume actually contains certificates
docker compose exec caddy ls /data/caddy/certificates/acme-v02.api.letsencrypt.org-directory/
# api.example.com  app.example.com
```
If any of these fail, jump to the troubleshooting table.
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| `tls.obtain could not get certificate` with HTTP-01 timeout | Port 80 not reachable from the internet, or DNS A/AAAA record not pointing at this host | `curl -I http://yourdomain/` from a machine outside your network must hit Caddy. Check DNS with `dig +short yourdomain`. |
| Certificate is Caddy Local Authority instead of Let's Encrypt | The site address in the Caddyfile is localhost, an IP, or a name that doesn't resolve publicly | Use a real public hostname. Caddy falls back to its internal CA for non-public addresses2. |
| `caddy reload` exits 0 but config doesn't change | You bind-mounted the Caddyfile as a single file and your editor changed its inode on save | Mount the parent directory instead (`./caddy:/etc/caddy`)3. |
| HTTP/3 works in Chrome but not curl | Curl's stock build has no QUIC support; this is normal | Test with `curl --http3` from a build that has QUIC, or check the `Alt-Svc` header on the HTTP/2 response. |
| `permission denied` on caddy_data after a host migration | A volume copied without preserving Linux ownership; Caddy runs as UID 0 in the official image but a copy may have flipped permissions | `docker compose exec --user root caddy chown -R 0:0 /data /config` then restart. |
| Cloudflare DNS plugin: `unknown directive: dns` | You forgot the multi-stage build and are still on the stock `caddy:2.11.2-alpine` | Rebuild with the Dockerfile shown above and `docker compose up -d --build`. |
Production checklist
Before you point real traffic at this:
- Pin the exact image tag (`caddy:2.11.2-alpine`), never `:latest` — the official README warns against it explicitly3.
- Persist `caddy_data` and `caddy_config` to a backed-up volume.
- Pick an admin-API hardening posture (Unix socket, off, or default loopback) — see the dedicated section above.
- Add a log rotation strategy (a `log` directive in the Caddyfile or an external log shipper) — Caddy 2.11.2 added `zstd` support to log rolling and deprecated `roll_gzip` in favour of `roll_compression`1.
- Replace the `traefik/whoami` containers with your real backends; the whoami image is a debug tool, not a production app.
- Run `docker compose pull && docker compose up -d` on a schedule to pick up Caddy patch releases (v2.11.2 fixed a `forward_auth` `copy_headers` vulnerability that allowed client-supplied identity headers to reach the upstream when the auth service didn't return every copied header — relevant if you use `forward_auth` with an SSO proxy1).
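The log-rotation item can be a few lines of Caddyfile per site block. A sketch — `roll_size` and `roll_keep` are standard file-writer options, and `roll_compression zstd` is the knob described in the 2.11.2 release notes; the `/data/access/` path is an arbitrary choice inside the persisted volume:

```
app.example.com {
	log {
		output file /data/access/app.log {
			roll_size 100MiB
			roll_keep 10
			roll_compression zstd
		}
	}
	import security-headers
	reverse_proxy app:80
}
```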
Further reading
- Production Postgres pooling with PgBouncer and Supavisor — the natural backend pairing for a Caddy stack
- Cloudflare Workers + R2 image CDN tutorial — when you want edge-side image transforms in addition to a reverse proxy
- Migrate from Ingress to the Kubernetes Gateway API — the Kubernetes-native version of the same problem
Sources

1. Caddy v2.11.2 release notes — https://github.com/caddyserver/caddy/releases/tag/v2.11.2
2. Automatic HTTPS — https://caddyserver.com/docs/automatic-https
3. Official Caddy Docker image overview — https://hub.docker.com/_/caddy
4. Compose Spec: version and name top-level elements — https://docs.docker.com/reference/compose-file/version-and-name/
5. traefik/whoami — https://github.com/traefik/whoami
6. Caddyfile global options (`email`) — https://caddyserver.com/docs/caddyfile/options#email
7. `reverse_proxy` directive — https://caddyserver.com/docs/caddyfile/directives/reverse_proxy
8. API: POST /load — https://caddyserver.com/docs/api#post-load
9. SIGUSR1 reload (v2.11.1+) — https://github.com/caddyserver/caddy/releases/tag/v2.11.1
10. `import` directive and snippets — https://caddyserver.com/docs/caddyfile/directives/import
11. Strict-Transport-Security — https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Strict-Transport-Security
12. `header` directive — https://caddyserver.com/docs/caddyfile/directives/header
13. caddy-dns/cloudflare module — https://github.com/caddy-dns/cloudflare
14. Caddy admin API — https://caddyserver.com/docs/api
15. `reverse_proxy` health checks — https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#health-checks