Dockerize & Deploy TaskFlow
Instructions
Objective
Package the TaskFlow API for production deployment. You will create a multi-stage Dockerfile, a production Docker Compose configuration, a GitHub Actions CI/CD pipeline, and production-hardened application settings.
Part 1: Dockerfile (25 points)
Create a Dockerfile in the project root with a multi-stage build.
Requirements
Builder stage:
- Use python:3.12-slim as the base image (pinned, not latest)
- Install the system dependencies needed to compile Python packages (build-essential, libpq-dev)
- Copy requirements.txt first, then install dependencies with --no-cache-dir and --prefix=/install
- This ordering keeps Docker layer caching effective -- the dependency layer is rebuilt only when requirements.txt changes
Production stage:
- Use python:3.12-slim again as a clean base
- Install only the runtime system libraries (libpq5, curl)
- Create a non-root user and group named taskflow
- Copy the installed Python packages from the builder stage using COPY --from=builder
- Copy the application code
- Set ownership to the taskflow user and switch to it with USER taskflow
- Add a HEALTHCHECK that curls the /health endpoint
- Expose port 8000
- Use Gunicorn with Uvicorn workers as the entrypoint (a filled-in sketch follows the example structure below):
gunicorn app.main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
Example Structure
# Stage 1: Builder
FROM python:3.12-slim AS builder
WORKDIR /app
# Install build dependencies
# Copy and install Python dependencies
# Stage 2: Production
FROM python:3.12-slim AS production
# Install runtime dependencies only
# Create non-root user
# Copy from builder
# Copy application code
# Set user, healthcheck, expose, cmd
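One way that skeleton could be filled in is sketched below. The apt cleanup commands, the user-creation commands, and the healthcheck timings are assumptions rather than requirements -- adapt them to your project.
# Stage 1: Builder -- compiles and installs Python dependencies
FROM python:3.12-slim AS builder
WORKDIR /app
# Build-time system packages for compiling wheels (psycopg needs libpq-dev)
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential libpq-dev \
    && rm -rf /var/lib/apt/lists/*
# Copy requirements first so this layer stays cached until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: Production -- clean base with only runtime libraries
FROM python:3.12-slim AS production
WORKDIR /app
RUN apt-get update && apt-get install -y --no-install-recommends \
        libpq5 curl \
    && rm -rf /var/lib/apt/lists/*
# Non-root user and group
RUN groupadd --system taskflow && useradd --system --gid taskflow taskflow
# Installed packages from the builder stage, then the application code
COPY --from=builder /install /usr/local
COPY --chown=taskflow:taskflow . .
USER taskflow
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost:8000/health || exit 1
EXPOSE 8000
CMD ["gunicorn", "app.main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]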
Part 2: Docker Compose Production (25 points)
Create docker-compose.prod.yml with three services.
Requirements
API service:
- Build from the Dockerfile, targeting the production stage
- Map port 8000
- Load environment from .env.production
- Depend on db and redis with condition: service_healthy
- Set resource limits: 1 CPU, 512MB memory
- Set restart: unless-stopped
- Connect to both the external and internal networks
PostgreSQL service:
- Use the postgres:18 image
- Persist data with a named volume pgdata mounted at /var/lib/postgresql/data
- Configure the database name, user, and password via environment variables
- Add a healthcheck using pg_isready
- Connect only to the internal network
Redis service:
- Use the redis:7-alpine image
- Enable AOF persistence with --appendonly yes
- Set the max memory to 128MB with the allkeys-lru eviction policy
- Persist data with a named volume redisdata mounted at /data
- Add a healthcheck using redis-cli ping
- Connect only to the internal network
Networks:
- external: a default bridge network (the API is reachable from outside)
- internal: set internal: true so that db and redis are isolated from the host
Volumes:
- pgdata and redisdata as named volumes (a combined sketch of the full compose file follows)
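The sketch below shows one way these pieces could fit together in docker-compose.prod.yml. The healthcheck timings and the POSTGRES_* variable names are assumptions; the required elements are the three services, the two networks, the named volumes, and the health conditions.
services:
  api:
    build:
      context: .
      target: production
    ports:
      - "8000:8000"
    env_file:
      - .env.production
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: "1"
          memory: 512M
    restart: unless-stopped
    networks:
      - external
      - internal

  db:
    image: postgres:18
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

  redis:
    image: redis:7-alpine
    # AOF persistence, 128MB cap, least-recently-used eviction
    command: ["redis-server", "--appendonly", "yes", "--maxmemory", "128mb", "--maxmemory-policy", "allkeys-lru"]
    volumes:
      - redisdata:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal

networks:
  external:
    driver: bridge
  internal:
    internal: true

volumes:
  pgdata:
  redisdata: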
Part 3: GitHub Actions CI/CD (25 points)
Create .github/workflows/ci.yml with a CI/CD pipeline.
Requirements
Trigger: on push to the main branch and on pull_request targeting main
Test job:
- Runs on ubuntu-latest
- Spin up PostgreSQL 18 and Redis 7 as service containers with healthchecks
- Steps:
  - actions/checkout@v4
  - actions/setup-python@v5 with Python 3.12
  - Install dependencies from requirements.txt
  - Run the linter: ruff check .
  - Run the tests: pytest --cov=app tests/
- Pass DATABASE_URL, REDIS_URL, and SECRET_KEY as environment variables to the test step
Build job:
- Runs only on push to main (not on pull requests)
- Depends on the test job passing (needs: test)
- Steps:
  - actions/checkout@v4
  - Build the Docker image tagged with the commit SHA (a sketch of the complete workflow follows this list)
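A workflow along the following lines would cover both jobs. The database name, credentials, and the DATABASE_URL scheme in the test environment are placeholder assumptions -- match them to whatever your application and tests expect.
name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:18
        env:
          POSTGRES_DB: taskflow_test
          POSTGRES_USER: taskflow
          POSTGRES_PASSWORD: taskflow
        ports:
          - 5432:5432
        options: >-
          --health-cmd "pg_isready -U taskflow"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: ruff check .
      - run: pytest --cov=app tests/
        env:
          DATABASE_URL: postgresql://taskflow:taskflow@localhost:5432/taskflow_test
          REDIS_URL: redis://localhost:6379/0
          SECRET_KEY: test-secret-key

  build:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t taskflow:${{ github.sha }} .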
Part 4: Production Configuration (25 points)
Create or update the following application files.
4a. Gunicorn Configuration
Create gunicorn.conf.py in the project root:
import multiprocessing
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"
bind = "0.0.0.0:8000"
accesslog = "-"
errorlog = "-"
loglevel = "info"
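With this file baked into the image, the container entrypoint could load it via gunicorn's -c flag instead of repeating the flags from Part 1 -- a hedged alternative, assuming the file sits in the working directory:
gunicorn -c gunicorn.conf.py app.main:app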
4b. Security Headers Middleware
Create a middleware in app/middleware/security.py that adds these headers
to every response:
- X-Content-Type-Options: nosniff
- X-Frame-Options: DENY
- Strict-Transport-Security: max-age=31536000; includeSubDomains
- X-XSS-Protection: 1; mode=block
- Referrer-Policy: strict-origin-when-cross-origin
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request
from starlette.responses import Response
class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next) -> Response:
        response = await call_next(request)
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"
        response.headers["Strict-Transport-Security"] = (
            "max-age=31536000; includeSubDomains"
        )
        response.headers["X-XSS-Protection"] = "1; mode=block"
        response.headers["Referrer-Policy"] = "strict-origin-when-cross-origin"
        return response
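The middleware class still has to be registered on the application; a minimal sketch, assuming the FastAPI instance is created in app/main.py:
from app.middleware.security import SecurityHeadersMiddleware

app.add_middleware(SecurityHeadersMiddleware)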
4c. CORS Configuration
In app/main.py, configure CORS with explicit origins for production
(never use * as the wildcard origin in production):
from fastapi.middleware.cors import CORSMiddleware

from app.config import settings

if settings.is_production:
    allowed_origins = ["https://taskflow.example.com"]
else:
    allowed_origins = ["http://localhost:3000"]

app.add_middleware(
    CORSMiddleware,
    allow_origins=allowed_origins,
    allow_credentials=True,
    allow_methods=["GET", "POST", "PUT", "DELETE"],
    allow_headers=["Authorization", "Content-Type"],
)
4d. Environment-Based Settings
Create app/config.py using Pydantic Settings to load and validate
all configuration from environment variables:
from pydantic_settings import BaseSettings
class Settings(BaseSettings):
    database_url: str
    redis_url: str
    secret_key: str
    environment: str = "development"
    api_v1_prefix: str = "/api/v1"

    @property
    def is_production(self) -> bool:
        return self.environment == "production"

    model_config = {"env_file": ".env"}


settings = Settings()
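By default, Pydantic Settings matches these fields to environment variables case-insensitively, so a .env.production along the following lines would satisfy the model. The values, hostnames, and URL schemes are placeholders, not recommendations:
DATABASE_URL=postgresql://taskflow:change-me@db:5432/taskflow
REDIS_URL=redis://redis:6379/0
SECRET_KEY=change-me
ENVIRONMENT=production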
4e. API Versioning
All routers must be mounted under the /api/v1/ prefix:
from app.config import settings
app.include_router(tasks_router, prefix=settings.api_v1_prefix)
app.include_router(auth_router, prefix=settings.api_v1_prefix)
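Note that the Dockerfile HEALTHCHECK from Part 1 curls /health with no version prefix, so the health route should stay mounted at the application root rather than under /api/v1. A minimal sketch, assuming the handler lives in app/main.py:
@app.get("/health")
async def health() -> dict[str, str]:
    # Unversioned liveness endpoint used by the container HEALTHCHECK
    return {"status": "ok"}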
What to Submit
Your submission should contain 7 file sections in the editor below. Each section begins with a # FILE N: header.
Hints
- Test your Dockerfile locally with docker build -t taskflow . before submitting
- Run docker compose -f docker-compose.prod.yml up to verify that all services start and the healthchecks pass
- The order of COPY instructions in the Dockerfile matters for caching -- put rarely-changing files first
- For the CI workflow, use the services key under the job to spin up PostgreSQL and Redis