Docker Compose Best Practices for Homelab: The Complete 2026 Guide
Master Docker Compose for your homelab with practical patterns for security, networking, storage, and maintenance. From directory structure to production-ready stacks.
Table of Contents
- Why Docker Compose Matters for Homelab
- Directory Organization
- Monolithic vs Modular Approaches
- Recommended Homelab Structure
- Compose File Best Practices
- Environment Variables & Secrets
- .env File Organization
- Secrets Management with Docker Compose
- Networking
- Network Segmentation
- Traefik as Reverse Proxy
- Storage & Volumes
- Named Volumes vs Bind Mounts
- Backup Strategies
- Security Hardening
- Run as Non-Root User
- Network Isolation
- Security Checklist
- Resource Management
- CPU and Memory Limits
- Healthchecks for Reliability
- Logging & Maintenance
- Configure Log Rotation
- Cleanup Strategies
- Update Strategies
- Manual Updates (Recommended for Production)
- Version Pinning
- Watchtower for Automated Updates
- Common Pitfalls
- Using :latest Tag
- Missing Healthchecks
- No Resource Limits
- Exposing Docker Socket Carelessly
- Forgetting Backups
- Conclusion
If you’re running a homelab, Docker Compose is probably your go-to tool for managing containers. It’s simple enough to get started quickly, yet powerful enough to run dozens of services. But there’s a big difference between a working setup and a well-organized, secure, maintainable one.
This guide covers everything you need to know to level up your Docker Compose game in 2026—from directory structure to security hardening to update strategies. Whether you’re running a few containers or managing an entire self-hosted stack, these patterns will save you headaches down the road.
Why Docker Compose Matters for Homelab
Docker Compose gives you declarative infrastructure in a single file. Instead of remembering long docker run commands with dozens of flags, you describe your services once and recreate them reliably. For homelab enthusiasts juggling media servers, DNS blockers, monitoring stacks, and home automation, this is transformative.
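As an illustrative comparison, here is the same hypothetical Pi-hole container as a one-off `docker run` command and as a Compose service (the ports and volume name are assumptions for the example):

```yaml
# Instead of remembering this every time:
#   docker run -d --name pihole -p 53:53/udp -p 8080:80 \
#     -v pihole_data:/etc/pihole --restart unless-stopped pihole/pihole:latest
# ...declare it once:
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
    ports:
      - "53:53/udp"
      - "8080:80"
    volumes:
      - pihole_data:/etc/pihole

volumes:
  pihole_data:
```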
But the simplicity that makes Compose appealing can also lead to trouble. Without structure, your docker-compose.yml becomes a sprawling monolith. Without security, one compromised container exposes your entire network. Without backups, a mistyped docker compose down -v destroys critical data.
The good news? A handful of best practices prevent almost all of these problems.
Directory Organization
Your directory structure is the foundation of everything. Get this right and maintenance becomes trivial. Get it wrong and you’ll dread touching your setup.
Monolithic vs Modular Approaches
Monolithic keeps everything in a single docker-compose.yml. This works well for smaller setups (under 10-15 services). You can see all dependencies at a glance and start everything with one command.
```yaml
# docker-compose.yml - All services in one file
services:
  traefik:
    image: traefik:v3.0
    # ...
  pihole:
    image: pihole/pihole:latest
    # ...
  plex:
    image: plexinc/pms-docker:latest
    # ...
```
Modular splits services into separate compose files, organized by function. This scales better for larger homelabs.
```text
homelab/
├── .env
├── services/
│   ├── traefik/
│   │   └── docker-compose.yml
│   ├── media/
│   │   └── docker-compose.yml
│   └── monitoring/
│       └── docker-compose.yml
└── scripts/
    ├── backup.sh
    └── update.sh
```
:::tip Start with a monolithic approach and split into modules when your single file exceeds 200 lines. The transition is straightforward—create shared external networks and reference them from each service. :::
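The shared-network step from that tip is small: create the network once on the host, then mark it `external` in each stack so Compose attaches to it instead of creating its own (the network name here is an assumption):

```yaml
# First, create the network once on the host:
#   docker network create proxy_network
# Then reference it from each stack's docker-compose.yml:
networks:
  proxy:
    external: true
    name: proxy_network
```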
Recommended Homelab Structure
Here’s a battle-tested structure that works for homelabs of all sizes:
```text
homelab/
├── docker-compose.yml            # Main orchestration (or symlink to Makefile)
├── .env                          # Environment variables (gitignored)
├── .env.example                  # Template for new setups
├── docker-compose.override.yml   # Auto-loaded local overrides
│
├── services/
│   ├── traefik/
│   │   ├── docker-compose.yml
│   │   ├── traefik.yml
│   │   └── dynamic/
│   │       └── middlewares.yml
│   ├── media/
│   │   ├── docker-compose.yml
│   │   └── config/
│   └── monitoring/
│       ├── docker-compose.yml
│       └── prometheus.yml
│
├── shared/
│   └── networks/
│       └── proxy-network.yml
│
├── secrets/
│   └── db_password.txt
│
└── backups/
    └── (backup files)
```
Use a Makefile to orchestrate multi-file setups:
```makefile
# Makefile
.PHONY: up down update logs

up:
	@for dir in services/*/; do \
		echo "Starting $$dir"; \
		(cd $$dir && docker compose up -d); \
	done

down:
	@for dir in services/*/; do \
		(cd $$dir && docker compose down); \
	done

update:
	@for dir in services/*/; do \
		(cd $$dir && docker compose pull && docker compose up -d); \
	done

logs:
	@for dir in services/*/; do \
		(cd $$dir && docker compose logs --tail=50); \
	done
```
Compose File Best Practices
Modern Docker Compose (v2.x and later) no longer requires the top-level `version:` key. The current Compose Specification is cleaner and more flexible. Here’s a well-structured compose file:
```yaml
# compose.yaml
services:
  traefik:
    image: traefik:v3.3
    container_name: traefik
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml:ro
      - ./acme.json:/acme.json
    networks:
      - proxy
    healthcheck:
      test: ["CMD", "traefik", "healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 3
    labels:
      - "traefik.enable=true"

  app:
    image: myapp:${APP_VERSION:-latest}
    restart: unless-stopped
    networks:
      - proxy
      - backend
    volumes:
      - app_data:/app/data
    environment:
      - DATABASE_URL=postgres://db:5432/mydb
    depends_on:
      db:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1'

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    networks:
      - backend
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5

networks:
  proxy:
    name: proxy_network
  backend:
    internal: true

volumes:
  app_data:
  db_data:

secrets:
  db_password:
    file: ./secrets/db_password.txt
```
:::warning
Always append the `:ro` (read-only) option when mounting the Docker socket. Tools like Traefik only need read access to discover containers. Read-write access gives potential attackers full control over your Docker daemon.
:::
Key structural elements:
- Services: Define containers in logical order (reverse proxy → app → database)
- Networks: Create isolated segments for service communication
- Volumes: Define persistent storage at the top level for reusability
- Secrets: Mount sensitive data securely, never in environment variables
Environment Variables & Secrets
Environment variables configure your containers, but secrets require special handling. Mixing them up is a common security mistake.
.env File Organization
Create separate files for different purposes:
```shell
# .env.example (committed to git)
# =================================

# Application Settings
APP_NAME=myapp
APP_ENV=production
LOG_LEVEL=info

# Database Configuration
POSTGRES_DB=myapp_db
POSTGRES_USER=app_user

# Port Configuration
WEB_PORT=8080
DB_PORT=5432

# Image Versions
NGINX_VERSION=1.27-alpine
POSTGRES_VERSION=16-alpine
```

```shell
# .env (actual file, gitignored)
POSTGRES_PASSWORD=your-secure-password-here
API_KEY=sk-live-abc123...
SMTP_PASSWORD=smtp-secret-here
```
Use variable substitution with defaults for flexibility:
```yaml
services:
  app:
    image: myapp:${IMAGE_TAG:-latest}
    ports:
      - "${HOST_PORT:-8080}:80"
    environment:
      - DB_HOST=${DB_HOST:-db}
```
Secrets Management with Docker Compose
Docker Compose mounts secrets at /run/secrets/ inside the container. Unlike environment variables, the secret values never appear in docker inspect output and don’t leak into logs.
```yaml
services:
  db:
    image: postgres:16-alpine
    secrets:
      - db_password
      - db_root_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
  db_root_password:
    file: ./secrets/db_root_password.txt
```
:::danger
Never put secrets in environment variables! They appear in docker inspect, process listings, and can leak into logs. Use _FILE suffix environment variables (like POSTGRES_PASSWORD_FILE) that point to secret files instead.
:::
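To create the secret files themselves, a minimal sketch (the paths match the layout above; `openssl` availability is assumed):

```shell
# Generate a random password file that Compose can mount as a secret
mkdir -p secrets
openssl rand -base64 32 > secrets/db_password.txt
chmod 600 secrets/db_password.txt   # readable by the owner only
echo "secrets/" >> .gitignore       # keep secret files out of version control
```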
Networking
Docker’s default networking puts all containers on the same bridge network—everything can talk to everything. That’s convenient for development but dangerous in production.
Network Segmentation
Create isolated networks for different tiers:
```yaml
services:
  # Frontend - accessible from internet via reverse proxy
  web:
    image: nginx:alpine
    networks:
      - proxy
      - frontend

  # API - internal communication only
  api:
    image: myapi:latest
    networks:
      - frontend
      - backend

  # Database - most restricted
  db:
    image: postgres:16-alpine
    networks:
      - backend  # Cannot reach frontend or internet

networks:
  proxy:
    external: true  # Created by Traefik or external stack
  frontend:
  backend:
    internal: true  # Blocks all external internet access
```
Traefik as Reverse Proxy
Traefik automatically discovers containers and generates routes based on labels. Here’s a production-ready setup:
```yaml
# services/traefik/docker-compose.yml
services:
  traefik:
    image: traefik:v3.3
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./traefik.yml:/etc/traefik/traefik.yml:ro
      - ./acme.json:/acme.json
      - ./dynamic:/dynamic:ro
    networks:
      - proxy

networks:
  proxy:
    name: proxy_network
```
Configure your services with Traefik labels:
```yaml
services:
  plex:
    image: plexinc/pms-docker:latest
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.plex.rule=Host(`plex.home.lab`)"
      - "traefik.http.routers.plex.entrypoints=websecure"
      - "traefik.http.routers.plex.tls.certresolver=letsencrypt"
      - "traefik.http.services.plex.loadbalancer.server.port=32400"

networks:
  proxy:
    external: true
```
Storage & Volumes
Choosing the right storage strategy prevents data loss and makes backups straightforward.
Named Volumes vs Bind Mounts
| Aspect | Bind Mounts | Named Volumes |
|---|---|---|
| Location | Host path you specify | Docker-managed |
| Portability | Host-dependent | Portable across hosts |
| Backup | Manual path knowledge | Docker commands work |
| Best for | Config files, dev work | Databases, persistent data |
```yaml
services:
  db:
    image: postgres:16-alpine
    volumes:
      # Named volume for data (recommended)
      - db_data:/var/lib/postgresql/data
      # Bind mount for configs (okay for dev/simple setups)
      - ./postgres.conf:/etc/postgresql/postgresql.conf:ro

volumes:
  db_data:
    driver: local
```
Backup Strategies
Never rely on containers to persist data. Set up automated backups:
```yaml
services:
  backup:
    image: offen/docker-volume-backup:v2
    environment:
      BACKUP_CRON_EXPRESSION: "0 2 * * *"  # Daily at 2 AM
      BACKUP_RETENTION_DAYS: "7"
    volumes:
      - db_data:/backup/db:ro
      - ./backups:/archive
```
For manual backups:
```shell
# Backup a named volume
docker run --rm \
  -v myapp_data:/source:ro \
  -v $(pwd)/backups:/backup \
  alpine tar czf /backup/myapp_$(date +%Y%m%d).tar.gz -C /source .

# Restore from backup
docker run --rm \
  -v myapp_data:/target \
  -v $(pwd)/backups:/backup \
  alpine sh -c "cd /target && tar xzf /backup/myapp_20260228.tar.gz"
```
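A backup you have never listed is a backup you can’t trust. A quick integrity check is worth scripting (the demo paths below are made up for illustration — point it at your real archive directory):

```shell
# Create a throwaway archive to demonstrate, then verify it is readable.
mkdir -p backups demo
echo "hello" > demo/file.txt
tar czf backups/demo.tar.gz -C demo .

# 'tar tzf' lists the archive without extracting it; a non-zero exit
# status means the archive is truncated or corrupt.
if tar tzf backups/demo.tar.gz > /dev/null; then
  echo "OK: backups/demo.tar.gz"
else
  echo "CORRUPT: backups/demo.tar.gz" >&2
fi
```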
Security Hardening
A compromised container shouldn’t compromise your entire homelab. Apply defense in depth.
Run as Non-Root User
```yaml
services:
  app:
    image: myapp:latest
    user: "1000:1000"      # UID:GID
    read_only: true        # Read-only filesystem
    tmpfs:
      - /tmp               # Writable tmpfs for temporary files
      - /run
    cap_drop:
      - ALL                # Drop all Linux capabilities
    cap_add:
      - NET_BIND_SERVICE   # Add only what's needed
```
Network Isolation
```yaml
services:
  db:
    image: postgres:16-alpine
    networks:
      - backend
    # No external network = no internet access

networks:
  backend:
    internal: true  # Blocks all external traffic
```
Security Checklist
- Run containers as non-root users
- Use read-only filesystems where possible
- Drop all capabilities, add back only what’s needed
- Never expose Docker socket read-write unless absolutely necessary
- Use Docker secrets for sensitive data, never environment variables
- Create network isolation between service tiers
- Pin image versions, avoid `:latest`
- Scan images for vulnerabilities regularly
Resource Management
Without limits, a misbehaving container can consume all your host’s resources.
CPU and Memory Limits
```yaml
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M
```
:::tip
Start conservative with limits, then monitor with docker stats to see actual usage. Set reservations for critical services to guarantee resources when the host is under load.
:::
Healthchecks for Reliability
Healthchecks tell Docker when a container is actually healthy, not just running:
```yaml
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s      # Check every 10 seconds
      timeout: 5s        # Max time for check to complete
      retries: 5         # Failures before marking unhealthy
      start_period: 30s  # Grace period during startup

  api:
    image: myapi:latest
    depends_on:
      db:
        condition: service_healthy  # Wait for db to pass healthcheck
```
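For HTTP services that ship no purpose-built check command, a wget-based probe works in most Alpine-based images (the port and `/health` path here are assumptions — adjust to your app):

```yaml
services:
  webapp:
    image: myapp:latest
    healthcheck:
      # --spider requests the URL without downloading the body
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
```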
Logging & Maintenance
Logs grow without bounds unless you configure rotation. A full disk will ruin your day.
Configure Log Rotation
```yaml
services:
  app:
    image: myapp:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"   # Rotate when log reaches 10MB
        max-file: "5"     # Keep 5 rotated files
        compress: "true"  # Compress old logs
```
For daemon-wide configuration, edit /etc/docker/daemon.json:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```
Cleanup Strategies
```shell
# Remove unused resources
docker system prune -a

# Remove unused volumes (be careful!)
docker volume prune

# Check disk usage
docker system df
```
Update Strategies
Manual Updates (Recommended for Production)
```shell
# Pull latest images
docker compose pull

# Recreate containers
docker compose up -d

# Or update a specific service
docker compose up -d --pull always nginx
```
Version Pinning
Always pin versions in production:
```shell
# .env
NGINX_VERSION=1.27.4-alpine
POSTGRES_VERSION=16.2-alpine
```

```yaml
# compose.yaml
services:
  nginx:
    image: nginx:${NGINX_VERSION}
  db:
    image: postgres:${POSTGRES_VERSION}
```
Watchtower for Automated Updates
:::warning Watchtower can automatically update containers, but this is risky for production homelabs. Use monitor-only mode and review changes manually. :::
```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_SCHEDULE=0 0 4 * * *  # 4 AM daily
      - WATCHTOWER_MONITOR_ONLY=true     # Just notify, don't update
      - WATCHTOWER_CLEANUP=true          # Remove old images
    restart: unless-stopped
```
Exclude critical services from Watchtower:
```yaml
services:
  critical-db:
    labels:
      - "com.centurylinklabs.watchtower.enable=false"
```
Common Pitfalls
Learn from these mistakes instead of making them yourself:
Using :latest Tag
```yaml
# ❌ BAD - unpredictable updates
image: postgres:latest

# ✅ GOOD - predictable, testable updates
image: postgres:16.2-alpine
```
Missing Healthchecks
```yaml
# ❌ BAD - container "running" but not ready
services:
  db:
    image: postgres:16

# ✅ GOOD - know when actually healthy
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
```
No Resource Limits
```yaml
# ❌ BAD - runaway container can consume all resources
services:
  app:
    image: myapp:latest

# ✅ GOOD - protect your host
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: '1'
```
Exposing Docker Socket Carelessly
```yaml
# ❌ DANGEROUS - full host access
volumes:
  - /var/run/docker.sock:/var/run/docker.sock

# ✅ BETTER - read-only at minimum
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
```
Forgetting Backups
```yaml
# ❌ BAD - data lost on container removal
services:
  db:
    image: postgres:16

# ✅ GOOD - persistent named volume
services:
  db:
    image: postgres:16
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```
Conclusion
A well-organized Docker Compose setup pays dividends every time you touch it. Here’s your quick reference checklist:
Structure & Organization
- Use modular compose files for larger setups
- Keep `.env.example` in version control, `.env` gitignored
- Use a Makefile or scripts for common operations
Security
- Run containers as non-root
- Use Docker secrets for sensitive data
- Create internal networks for databases
- Mount Docker socket read-only
Reliability
- Pin image versions, never use `:latest`
- Add healthchecks to all services
- Set resource limits on every container
- Configure log rotation
Maintenance
- Automated backups for volumes
- Document update procedures
- Test restore processes
- Monitor with `docker stats`
Start simple, iterate on these patterns, and your homelab will scale from a handful of containers to dozens without becoming unmanageable. The investment in structure now saves hours of debugging later.
