Phase 5: Deploy & Ship

Production Deployment

You have built TaskFlow from the ground up -- models, migrations, endpoints, auth, tests. Now it is time to package it into a production-ready container and ship it.

Multi-Stage Docker Builds

A multi-stage build separates the build environment (where you install dependencies and compile) from the runtime environment (what actually runs in production). This produces smaller, more secure images.

```dockerfile
# Stage 1: Builder -- install dependencies
FROM python:3.12-slim AS builder

WORKDIR /app

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: Production -- slim runtime
FROM python:3.12-slim AS production

RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq5 curl \
    && rm -rf /var/lib/apt/lists/*

# Non-root user for security
RUN groupadd -r taskflow && useradd -r -g taskflow taskflow

WORKDIR /app

# Copy only installed packages from builder
COPY --from=builder /install /usr/local
COPY . .

RUN chown -R taskflow:taskflow /app
USER taskflow

HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1

EXPOSE 8000
CMD ["gunicorn", "app.main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]
```

Key principles:

| Principle | Why It Matters |
| --- | --- |
| Multi-stage build | Builder stage is discarded; final image contains only runtime dependencies |
| Pinned base image | `python:3.12-slim` instead of `python:latest` prevents surprise breakages |
| Non-root user | Limits damage if the container is compromised |
| HEALTHCHECK | Orchestrators (Compose, ECS, K8s) know when the app is truly ready |
| Layer ordering | `COPY requirements.txt` before `COPY . .` so dependency layers are cached |
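
The HEALTHCHECK probes a /health route that the app must actually expose. Here is a minimal sketch of such a route, assuming the async SQLAlchemy engine and Redis client from earlier phases are importable as `engine` and `redis_client` (both module paths and names are illustrative):

```python
# app/routers/health.py -- the route probed by the Dockerfile HEALTHCHECK
from fastapi import APIRouter, Response, status
from sqlalchemy import text

from app.db import engine           # hypothetical: async SQLAlchemy engine
from app.cache import redis_client  # hypothetical: redis.asyncio client

router = APIRouter()


@router.get("/health")
async def health(response: Response) -> dict:
    checks = {"db": False, "redis": False}
    try:
        # Round-trip to PostgreSQL
        async with engine.connect() as conn:
            await conn.execute(text("SELECT 1"))
        checks["db"] = True
    except Exception:
        pass
    try:
        # Round-trip to Redis
        checks["redis"] = bool(await redis_client.ping())
    except Exception:
        pass
    healthy = all(checks.values())
    if not healthy:
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
    return {"status": "ok" if healthy else "degraded", **checks}
```

If either dependency is down, the route returns 503, `curl -f` exits non-zero, and the orchestrator marks the container unhealthy.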

Docker Compose for Production

Keep separate files for development and production. Development uses hot-reload and debug settings; production uses resource limits and restart policies.

```yaml
# docker-compose.prod.yml
services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
      target: production
    ports:
      - "8000:8000"
    env_file: .env.production
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    restart: unless-stopped
    networks:
      - external
      - internal

  db:
    image: postgres:18
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: taskflow
      POSTGRES_USER: taskflow
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U taskflow"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes --maxmemory 128mb --maxmemory-policy allkeys-lru
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

volumes:
  pgdata:
  redisdata:

networks:
  external:
  internal:
    internal: true
```

Notice `internal: true` on the internal network. PostgreSQL and Redis are reachable only by the API service over that network and are never exposed to the host.

Environment Management

Never hardcode secrets. Use .env files for local development and proper secret stores for production.

```bash
# .env.development (committed to the repo as .env.example with the values blanked out)
DATABASE_URL=postgresql+asyncpg://taskflow:localpass@localhost:5432/taskflow
REDIS_URL=redis://localhost:6379/0
SECRET_KEY=dev-only-not-for-production
ENVIRONMENT=development

# .env.production (NEVER committed -- use CI/CD secrets)
DATABASE_URL=postgresql+asyncpg://taskflow:${DB_PASSWORD}@db:5432/taskflow
REDIS_URL=redis://redis:6379/0
SECRET_KEY=${SECRET_KEY}
ENVIRONMENT=production
```

With Pydantic Settings, configuration is validated at startup:

```python
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    database_url: str
    redis_url: str
    secret_key: str
    environment: str = "development"
    api_v1_prefix: str = "/api/v1"

    @property
    def is_production(self) -> bool:
        return self.environment == "production"

    model_config = {"env_file": ".env"}

settings = Settings()
```
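
Validated settings pay off when wiring environment-dependent behavior at startup, such as the CORS restrictions from the production checklist below. A sketch, assuming the FastAPI app lives in app/main.py and Settings in app/config.py (paths and origins are illustrative):

```python
# app/main.py -- wire validated settings into the app (illustrative)
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

from app.config import settings  # hypothetical home of the Settings class

# Hide interactive docs outside development
app = FastAPI(docs_url=None if settings.is_production else "/docs")

# Explicit origins only -- never "*" in production
origins = (
    ["https://taskflow.example.com"]  # hypothetical production origin
    if settings.is_production
    else ["http://localhost:3000"]
)

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```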

CI/CD with GitHub Actions

A standard pipeline: test, then build -- triggered on every push to main and on pull requests. The platform-specific deploy step is added in the hands-on lab.

```yaml
# .github/workflows/ci.yml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:18
        env:
          POSTGRES_DB: taskflow_test
          POSTGRES_USER: taskflow
          POSTGRES_PASSWORD: testpassword
        ports: ["5432:5432"]
        options: >-
          --health-cmd="pg_isready -U taskflow"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=5
      redis:
        image: redis:7-alpine
        ports: ["6379:6379"]
        options: >-
          --health-cmd="redis-cli ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=5

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: ruff check .
      - run: pytest --cov=app tests/
        env:
          DATABASE_URL: postgresql+asyncpg://taskflow:testpassword@localhost:5432/taskflow_test
          REDIS_URL: redis://localhost:6379/0
          SECRET_KEY: test-secret-key

  build:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t taskflow-api:${{ github.sha }} .
```
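
The pytest step picks up anything under tests/, so even a small smoke test exercises the whole stack against the Postgres and Redis service containers. A sketch using FastAPI's TestClient (not TaskFlow's actual suite):

```python
# tests/test_health.py -- a smoke test the CI pytest step would run
from fastapi.testclient import TestClient

from app.main import app  # assumes the FastAPI instance at app.main:app


def test_health_reports_ok() -> None:
    # CI starts Postgres and Redis as service containers, so both
    # dependency checks should pass and the endpoint returns 200.
    with TestClient(app) as client:
        resp = client.get("/health")
        assert resp.status_code == 200
        assert resp.json()["status"] == "ok"
```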

Production Checklist

Before going live, verify every item:

| Category | Item | How |
| --- | --- | --- |
| Security | Non-root Docker user | `USER taskflow` in Dockerfile |
| Security | Security headers | Middleware: `X-Content-Type-Options`, `X-Frame-Options`, `Strict-Transport-Security` |
| Security | CORS restricted | Explicit allowed origins, never `*` in production |
| Performance | Gunicorn workers | `gunicorn -w 4 -k uvicorn.workers.UvicornWorker` |
| Reliability | Healthchecks | `/health` endpoint returns 200 when DB and Redis are reachable |
| Reliability | Restart policy | `restart: unless-stopped` in Compose |
| Versioning | API prefix | All routes under `/api/v1/` |
| Secrets | No hardcoded keys | All secrets from environment variables or a secret manager |
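
The security-headers row is only a few lines of middleware. A minimal sketch using Starlette's BaseHTTPMiddleware; the header values shown are common defaults, not the only valid choices:

```python
# app/middleware.py -- sets the security headers from the checklist
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import Response


class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next) -> Response:
        response = await call_next(request)
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        # HSTS is only meaningful when serving over HTTPS
        response.headers["Strict-Transport-Security"] = (
            "max-age=63072000; includeSubDomains"
        )
        return response


# In app/main.py: app.add_middleware(SecurityHeadersMiddleware)
```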

Deployment Options (as of 2026)

| Platform | Best For | Notes |
| --- | --- | --- |
| Railway | Quick deploys, small teams | Docker + PostgreSQL + Redis add-ons, simple pricing |
| Render | Auto-deploy from Git | Free tier for hobby projects, managed PostgreSQL |
| Fly.io | Edge deployment, low latency | Deploy containers globally, built-in Postgres |
| AWS ECS / Fargate | Enterprise scale | Full AWS ecosystem, more configuration needed |

All four support Docker-based deployments. For TaskFlow, Railway or Render gets you from code to production in under 10 minutes.

Next: a hands-on lab where you will Dockerize TaskFlow and set up the full CI/CD pipeline.
