# Docker Compose Best Practices: 10 Rules for Production-Ready Configs

Docker Compose files start simple and accumulate complexity fast. A config that works fine in development can cause outages, data loss, and security vulnerabilities in production if you do not follow a few critical rules.

These 10 practices are drawn from real production failures. Each one includes a "bad" and "good" example so you can audit your own configs.
## 1. Pin Image Versions -- Never Use "latest"

The `latest` tag is a moving target. A `docker compose pull` on Tuesday might give you a completely different image than it did on Monday. This breaks reproducibility and can introduce breaking changes without warning.

**Bad**

```yaml
services:
  web:
    image: nginx:latest
```

**Good**

```yaml
services:
  web:
    image: nginx:1.25.4-alpine
```

Always use a specific version tag. Include the OS variant (like `alpine`) for smaller, more secure images. When you want to upgrade, change the tag explicitly and test.
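If you need byte-for-byte reproducibility, you can go one step further and pin by image digest. The digest below is a placeholder, not a real nginx digest; look up the actual value with `docker images --digests` or from your registry:

```yaml
services:
  web:
    # The tag is kept for readability; the digest is what Docker resolves.
    # Replace <digest> with the real sha256 value for your image.
    image: nginx:1.25.4-alpine@sha256:<digest>
```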
## 2. Add Health Checks to Every Service

Without health checks, Docker considers a container "healthy" the moment its process starts -- even if the application inside is still initializing, has crashed internally, or cannot reach its database.

**Bad**

```yaml
services:
  api:
    image: myapp:2.1.0
    depends_on:
      - db
```

**Good**

```yaml
services:
  api:
    image: myapp:2.1.0
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

The `start_period` gives your app time to initialize before Docker starts counting failures. Combined with `condition: service_healthy` on `depends_on`, this ensures services start in the right order and only when their dependencies are actually ready.
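Note that `condition: service_healthy` only works if the dependency defines its own healthcheck. If the `db` service runs Postgres, for example, a sketch using `pg_isready` (user and database names are illustrative) might look like:

```yaml
services:
  db:
    image: postgres:16.2-alpine
    healthcheck:
      # pg_isready exits 0 only once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U postgres -d postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 15s
```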
## 3. Set Resource Limits

A runaway process without memory limits can consume all available RAM and crash every other container on the host. CPU limits prevent one service from starving others.

**Bad**

```yaml
services:
  worker:
    image: myworker:1.3.0
```

**Good**

```yaml
services:
  worker:
    image: myworker:1.3.0
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
        reservations:
          cpus: "0.25"
          memory: 128M
```

Reservations guarantee a minimum allocation. Limits cap the maximum. Set limits based on observed usage plus a safety margin -- not arbitrarily high values.
## 4. Configure Logging Properly

By default, Docker stores logs as JSON files with no size limit. A chatty application can fill your disk in hours.

**Bad**

```yaml
services:
  api:
    image: myapp:2.1.0
    # No logging config -- unbounded log files
```

**Good**

```yaml
services:
  api:
    image: myapp:2.1.0
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
```

This caps each service at 30 MB of logs (3 files of 10 MB each). For production systems that need centralized logging, use the syslog or fluentd driver instead.
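As a sketch of the centralized option, the `fluentd` logging driver forwards container output to a Fluentd (or Fluent Bit) collector. The collector address and tag below are assumptions for illustration:

```yaml
services:
  api:
    image: myapp:2.1.0
    logging:
      driver: fluentd
      options:
        fluentd-address: fluentd.internal:24224  # hypothetical collector host
        tag: myapp.api                           # used for routing rules in Fluentd
```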
## 5. Use Custom Networks

The default bridge network allows every container to communicate with every other container. Custom networks provide isolation between service groups.

**Bad**

```yaml
services:
  web:
    image: nginx:1.25.4-alpine
  api:
    image: myapp:2.1.0
  db:
    image: postgres:16.2-alpine
  # All on the default network -- web can reach db directly
```

**Good**

```yaml
services:
  web:
    image: nginx:1.25.4-alpine
    networks:
      - frontend
  api:
    image: myapp:2.1.0
    networks:
      - frontend
      - backend
  db:
    image: postgres:16.2-alpine
    networks:
      - backend

networks:
  frontend:
  backend:
```

Now the web server can only reach the API, and the database is only accessible from the API. This follows the principle of least privilege and limits the blast radius if any container is compromised.
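If the database tier never needs outbound internet access, you can tighten this further with Compose's `internal` network option -- a sketch on top of the config above (the `api` service keeps outbound access through `frontend`):

```yaml
networks:
  frontend:
  backend:
    internal: true  # no route to the outside world from this network
```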
## 6. Never Put Secrets in the Compose File

Secrets hardcoded in `docker-compose.yml` end up in version control. Environment variables in the compose file are visible to anyone with `docker inspect` access.

**Bad**

```yaml
services:
  db:
    image: postgres:16.2-alpine
    environment:
      POSTGRES_PASSWORD: my_secret_password_123
```

**Good**

```yaml
services:
  db:
    image: postgres:16.2-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
```

Docker secrets mount credentials as files inside the container at `/run/secrets/`. Most official database images support the `_FILE` suffix convention for reading credentials from files. Keep your secrets directory in `.gitignore` and use a `.env.example` to document required values.
## 7. Use .env Files Correctly

The `.env` file is for variable substitution in your compose file -- not for passing environment variables to containers. Keep these concerns separate.

**Bad**

```
# .env committed to git with real values
DB_HOST=prod-db.internal
DB_PASSWORD=real_password
```

**Good**

```
# .env.example committed to git (template)
DB_HOST=localhost
DB_PASSWORD=changeme

# .env ignored by git (real values)
# Created manually on each host
```

Add `.env` to your `.gitignore`. Commit a `.env.example` with safe defaults so new developers know which variables are needed.
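Substitution is what connects the two: Compose interpolates `${VAR}` references in the compose file from `.env`. A minimal sketch (the `APP_VERSION` variable is illustrative); the `:?` form fails fast with an error instead of silently falling back to a default:

```yaml
# .env (not committed):
#   APP_VERSION=2.1.0

services:
  api:
    # Resolved by Compose at parse time; errors out if APP_VERSION is unset
    image: myapp:${APP_VERSION:?APP_VERSION must be set in .env}
```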
## 8. Set Restart Policies

Without a restart policy, a crashed container stays down until someone manually restarts it. In production, that might mean hours of downtime before anyone notices.

**Bad**

```yaml
services:
  api:
    image: myapp:2.1.0
    # No restart policy -- stays down after crash
```

**Good**

```yaml
services:
  api:
    image: myapp:2.1.0
    restart: unless-stopped
```

Use `unless-stopped` for most services -- it restarts on crash and after host reboots, but respects manual `docker compose stop` commands. Use `on-failure` for batch jobs that should restart only when they exit with an error, not after completing successfully. Avoid `always` unless you have a specific reason: unlike `unless-stopped`, it brings containers back when the Docker daemon restarts even if you stopped them deliberately.
## 9. Prefer "image" Over "build" in Production

Building images on the production server means your deploy depends on build tools, source code, and network access to registries -- all potential points of failure. Build once, push to a registry, and pull the image in production.

**Bad**

```yaml
# Production compose file
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
```

**Good**

```yaml
# Production compose file
services:
  api:
    image: registry.example.com/myapp:2.1.0
```

```yaml
# Development compose file (docker-compose.override.yml)
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
```

Use `docker-compose.override.yml` for development-specific settings like `build`, volume mounts for hot reloading, and debug ports. Docker Compose merges the override file automatically.
## 10. Use Named Volumes for Persistent Data

Bind mounts (`./data:/var/lib/postgresql/data`) tie your data to a specific host path and can have permission issues across different operating systems. Named volumes are managed by Docker and work consistently.

**Bad**

```yaml
services:
  db:
    image: postgres:16.2-alpine
    volumes:
      - ./pgdata:/var/lib/postgresql/data
```

**Good**

```yaml
services:
  db:
    image: postgres:16.2-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Named volumes survive `docker compose down` (but not `docker compose down -v`). For backups, use `docker run --volumes-from` or a backup sidecar container rather than reaching into the host filesystem.
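One way to implement the sidecar idea is an on-demand backup service in the same compose file, so it addresses the volume by its Compose name rather than a host path. The `backup` profile name and paths here are illustrative:

```yaml
services:
  backup:
    image: alpine:3.19
    volumes:
      - pgdata:/data:ro        # read-only view of the database volume
      - ./backups:/backup      # host directory that receives the archive
    command: tar czf /backup/pgdata.tar.gz -C /data .
    profiles: ["backup"]       # excluded from normal `up`; run on demand with:
                               #   docker compose run --rm backup

volumes:
  pgdata:
```

For a running database, prefer a dump tool such as `pg_dump` over copying live data files, which can produce an inconsistent snapshot.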
## Common Mistakes to Avoid

- Using `docker compose down -v` in production -- the `-v` flag deletes all named volumes, including your database. It is useful for development resets but catastrophic in production.
- Exposing ports unnecessarily -- use `expose` (container-to-container) instead of `ports` (host-exposed) for internal services. Your database should almost never be exposed on a host port.
- Ignoring the `.dockerignore` file -- without it, your build context includes everything in the directory, including `node_modules`, `.git`, and potentially secrets files. This slows builds and risks leaking sensitive files into images.
- Running containers as root -- most official images run as root by default. Add a `user` directive or set up a non-root user in your Dockerfile to limit the damage from container escapes.
- Not testing compose configs before deploying -- run `docker compose config` to validate your file and see the fully resolved configuration, including variable substitution and merge results.
## Generate Docker Compose Configs

Use our Docker Compose generator to build production-ready configurations with health checks, resource limits, and proper networking built in.

## Frequently Asked Questions
### Should I use Docker Compose in production?

Docker Compose is suitable for production on single-server deployments, small teams, and applications that do not require multi-node orchestration. For simple web apps, internal tools, and small-to-medium workloads, Compose with proper health checks, resource limits, and restart policies is production-ready. For large-scale, multi-node deployments requiring auto-scaling and rolling updates across a cluster, Kubernetes or Docker Swarm is a better fit.

### How do I manage secrets in Docker Compose?

Never put secrets directly in your `docker-compose.yml` or commit `.env` files with real credentials. Use Docker secrets (which mount credentials as files in `/run/secrets/`), reference environment variables from your host, or use a secrets manager like HashiCorp Vault or AWS Secrets Manager. For development, `.env` files are fine -- just add them to `.gitignore` and use `.env.example` as a template.

### What is the difference between depends_on and healthcheck?

`depends_on` controls startup order -- it ensures container A starts before container B. However, it only waits for the container to start, not for the service inside to be ready. `healthcheck` defines how Docker tests whether the service inside a container is actually working. When you combine them with `condition: service_healthy` under `depends_on`, Docker waits until the dependency's healthcheck passes before starting the dependent service.