Phase 2: Database Models & Migrations
Production Database Design with SQLAlchemy 2.0
In Phase 1 you set up the project skeleton. Now it is time to build the data foundation. Every production API needs a solid database layer -- and for TaskFlow, that means async SQLAlchemy models, Pydantic validation schemas, and version-controlled migrations with Alembic.
Why Async SQLAlchemy?
SQLAlchemy 2.0 (current release: 2.0.46, January 2026) introduced native async support through create_async_engine and AsyncSession. Combined with the asyncpg driver for PostgreSQL 18, this gives you non-blocking database access that matches FastAPI's async architecture.
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

DATABASE_URL = "postgresql+asyncpg://user:pass@localhost:5432/taskflow"

engine = create_async_engine(DATABASE_URL, echo=False, pool_size=20)
async_session = async_sessionmaker(engine, expire_on_commit=False)

# FastAPI dependency: yields one session per request and closes it afterwards
async def get_db():
    async with async_session() as session:
        yield session
Key points:
- create_async_engine replaces the synchronous create_engine
- async_sessionmaker (not the older sessionmaker) creates async-aware sessions
- expire_on_commit=False prevents lazy-load errors after commit in async contexts
- pool_size=20 is a sensible production starting point for connection pooling
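To see how the dependency plugs into FastAPI, here is a minimal health-check sketch. The /health/db route is illustrative, and get_db is assumed to be importable from the snippet above.

from fastapi import Depends, FastAPI
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

app = FastAPI()

@app.get("/health/db")
async def db_health(db: AsyncSession = Depends(get_db)):
    # The await hands control back to the event loop while PostgreSQL responds
    await db.execute(text("SELECT 1"))
    return {"database": "ok"}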
TaskFlow Database Schema
TaskFlow needs four core tables with clear relationships:
| Table | Purpose | Key Relationships |
|---|---|---|
| User | Accounts with auth data | Owns Projects, assigned Tasks |
| Project | Task containers | Has many Tasks, has Members |
| Task | Individual work items | Belongs to Project, assigned to User |
| ProjectMember | RBAC join table | Links User to Project with a role |
The entity relationships look like this:
User 1──* Project (owner_id)
User 1──* Task (assignee_id, nullable)
User 1──* ProjectMember (user_id)
Project 1──* Task (project_id)
Project 1──* ProjectMember (project_id)
Defining Models with SQLAlchemy 2.0
SQLAlchemy 2.0 uses DeclarativeBase and Mapped type annotations. This is a major shift from the legacy declarative_base() approach:
from datetime import datetime
from sqlalchemy import String, ForeignKey, Enum as SAEnum
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship
import enum
class Base(DeclarativeBase):
    pass

class TaskStatus(str, enum.Enum):
    todo = "todo"
    in_progress = "in_progress"
    done = "done"

class User(Base):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(255), unique=True, index=True)
    hashed_password: Mapped[str] = mapped_column(String(255))
    full_name: Mapped[str] = mapped_column(String(100))
    is_active: Mapped[bool] = mapped_column(default=True)
    created_at: Mapped[datetime] = mapped_column(default=datetime.utcnow)
    updated_at: Mapped[datetime] = mapped_column(
        default=datetime.utcnow, onupdate=datetime.utcnow
    )

    # Relationships
    owned_projects: Mapped[list["Project"]] = relationship(back_populates="owner")
    assigned_tasks: Mapped[list["Task"]] = relationship(back_populates="assignee")
    memberships: Mapped[list["ProjectMember"]] = relationship(back_populates="user")
Notice how Mapped[int] and mapped_column replace the old Column(Integer) style. This gives you full type-checker support.
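To show how the ForeignKey and SAEnum imports come into play, here is a rough sketch of the Task model. Column names follow the schema table above, but the string lengths, the "projects" table name, and the exact relationship wiring are assumptions you will finalize in the lab.

from typing import Optional  # extra import for the nullable assignee relationship

class Task(Base):
    __tablename__ = "tasks"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String(200))
    status: Mapped[TaskStatus] = mapped_column(
        SAEnum(TaskStatus), default=TaskStatus.todo
    )
    project_id: Mapped[int] = mapped_column(ForeignKey("projects.id"))  # assumes Project.__tablename__ = "projects"
    assignee_id: Mapped[int | None] = mapped_column(ForeignKey("users.id"))  # nullable, per the ER diagram

    # Project is expected to declare a matching tasks relationship
    project: Mapped["Project"] = relationship(back_populates="tasks")
    assignee: Mapped[Optional["User"]] = relationship(back_populates="assigned_tasks")

Project and ProjectMember follow the same pattern, with ProjectMember carrying the role column that powers RBAC.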
Why Alembic for Migrations
Alembic (current release: 1.18.4, February 2026) is the migration tool built for SQLAlchemy. Think of it as "git for your database schema":
# Initialize Alembic with the async environment template (matches our asyncpg engine)
alembic init -t async alembic
# Generate a migration from model changes
alembic revision --autogenerate -m "create initial tables"
# Apply migrations to the database
alembic upgrade head
Every migration is a Python file with upgrade() and downgrade() functions. Your team can review schema changes in pull requests, roll back safely, and keep every environment in sync.
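A freshly generated revision looks roughly like this. The revision IDs and the single table below are illustrative stand-ins for what autogenerate will actually emit from your models.

from alembic import op
import sqlalchemy as sa

# Revision identifiers used by Alembic (placeholder values for illustration)
revision = "a1b2c3d4e5f6"
down_revision = None
branch_labels = None
depends_on = None

def upgrade() -> None:
    # Forward change: create the users table
    op.create_table(
        "users",
        sa.Column("id", sa.Integer(), primary_key=True),
        sa.Column("email", sa.String(length=255), nullable=False),
        sa.Column("hashed_password", sa.String(length=255), nullable=False),
        sa.Column("full_name", sa.String(length=100), nullable=False),
        sa.Column("is_active", sa.Boolean(), nullable=False),
    )
    op.create_index("ix_users_email", "users", ["email"], unique=True)

def downgrade() -> None:
    # Reverse change: undo exactly what upgrade() did
    op.drop_index("ix_users_email", table_name="users")
    op.drop_table("users")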
Pydantic v2 Schemas
Pydantic v2 (2.12.x) handles request/response validation. You create separate schemas for different operations:
# EmailStr requires the email-validator extra: pip install "pydantic[email]"
from pydantic import BaseModel, EmailStr, ConfigDict
from datetime import datetime

class UserCreate(BaseModel):
    email: EmailStr
    password: str
    full_name: str

class UserResponse(BaseModel):
    model_config = ConfigDict(from_attributes=True)

    id: int
    email: str
    full_name: str
    is_active: bool
    created_at: datetime
ConfigDict(from_attributes=True) replaces the old orm_mode = True from Pydantic v1. This lets you pass SQLAlchemy model instances directly to response schemas.
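Here is a minimal sketch of that round trip, assuming the app instance, User model, and get_db dependency from earlier in this phase; the route path is illustrative.

from fastapi import Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession

@app.get("/users/{user_id}", response_model=UserResponse)
async def read_user(user_id: int, db: AsyncSession = Depends(get_db)):
    user = await db.get(User, user_id)  # AsyncSession.get fetches by primary key
    if user is None:
        raise HTTPException(status_code=404, detail="User not found")
    # from_attributes=True lets Pydantic read the ORM object's attributes directly;
    # returning the ORM instance also works, since response_model does the conversion.
    return UserResponse.model_validate(user)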
What You Will Build
In the lab that follows, you will:
- Define all four SQLAlchemy 2.0 models with proper relationships and enums
- Create an async database session factory
- Write Pydantic v2 schemas for Create, Update, and Response variants of each model
- Initialize Alembic and generate your first migration
- Write a seed script to populate the database with test data (a rough sketch follows this list)
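As a preview, here is a rough sketch of what such a seed script could look like. The import paths and sample values are assumptions, and it presumes alembic upgrade head has already created the tables.

import asyncio

# Adjust these imports to wherever your Phase 1 layout puts them (assumed paths)
from app.database import async_session
from app.models import User

async def seed() -> None:
    async with async_session() as session:
        session.add(
            User(
                email="alice@example.com",
                hashed_password="not-a-real-hash",  # real hashing comes in the auth phase
                full_name="Alice Example",
            )
        )
        await session.commit()

if __name__ == "__main__":
    asyncio.run(seed())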
Next: Build the complete database layer in the hands-on lab.