Architecture Patterns & System Design

Event-Driven Architecture Patterns


Event-driven architecture (EDA) enables loose coupling, scalability, and real-time processing. These patterns are essential for modern cloud architectures.

Event Types & Semantics

Event Classification

| Type | Purpose | Example |
|---|---|---|
| Domain Events | Business facts | OrderPlaced, PaymentReceived |
| Integration Events | Cross-service communication | CustomerCreatedEvent |
| System Events | Infrastructure changes | InstanceTerminated, QueueFull |

Event Structure

{
  "eventId": "uuid-v4",
  "eventType": "OrderPlaced",
  "timestamp": "2026-01-05T10:30:00Z",
  "version": "1.0",
  "source": "order-service",
  "data": {
    "orderId": "12345",
    "customerId": "67890",
    "totalAmount": 99.99
  },
  "metadata": {
    "correlationId": "request-uuid",
    "causationId": "previous-event-uuid"
  }
}
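
As a sketch, an envelope with this shape can be assembled in Python. The helper name make_event and the defaulting of the metadata fields are illustrative assumptions, not part of any standard:

```python
import uuid
from datetime import datetime, timezone

def make_event(event_type, source, data, correlation_id=None, causation_id=None):
    """Build an event envelope matching the structure above (illustrative helper)."""
    return {
        "eventId": str(uuid.uuid4()),                     # unique per event
        "eventType": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": "1.0",
        "source": source,
        "data": data,
        "metadata": {
            # correlationId ties together all events from one request;
            # causationId points at the event that directly triggered this one
            "correlationId": correlation_id or str(uuid.uuid4()),
            "causationId": causation_id,
        },
    }

event = make_event("OrderPlaced", "order-service",
                   {"orderId": "12345", "customerId": "67890", "totalAmount": 99.99})
```

A downstream event that this one causes would be created with causation_id set to this event's eventId and the same correlation_id, preserving the request trace.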

Interview Question: Event Granularity

Q: "Should we emit fine-grained events (OrderItemAdded) or coarse-grained events (OrderUpdated)?"

A: Consider the trade-offs:

| Approach | Pros | Cons |
|---|---|---|
| Fine-grained | Specific reactions, audit trail | More events, complex consumers |
| Coarse-grained | Simpler, fewer events | May over-notify, less context |
| Hybrid | Best of both | More complexity to manage |

Recommendation: Start coarse-grained, add fine-grained when specific consumer needs emerge.

Event Sourcing

Concept

Store state as a sequence of events rather than current state.

Traditional:
  User Table: { id: 1, name: "John", email: "john@example.com" }

Event Sourced:
  Events:
    1. UserCreated { id: 1, name: "John" }
    2. EmailUpdated { id: 1, email: "john@example.com" }
    3. NameChanged { id: 1, name: "John Doe" }
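
Replaying the event sequence above reconstructs the current state. Here apply() is an illustrative reducer for this example, not a library function:

```python
# Rebuild current state by folding the event sequence into a dict.
def apply(state, event):
    etype, payload = event
    if etype == "UserCreated":
        return dict(payload)
    # EmailUpdated / NameChanged simply merge their fields into state
    return {**state, **payload}

events = [
    ("UserCreated", {"id": 1, "name": "John"}),
    ("EmailUpdated", {"id": 1, "email": "john@example.com"}),
    ("NameChanged", {"id": 1, "name": "John Doe"}),
]

state = {}
for e in events:
    state = apply(state, e)
# state is now {"id": 1, "name": "John Doe", "email": "john@example.com"}
```

Note that the "current state" table never has to exist: any point-in-time state can be rebuilt by replaying a prefix of the log, which is what enables temporal queries.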

Benefits & Trade-offs

| Benefit | Trade-off |
|---|---|
| Complete audit trail | Storage growth |
| Temporal queries ("state at time X") | Complex read models |
| Event replay for debugging | Eventual consistency |
| Natural fit for CQRS | Learning curve |

Interview Question: When to Use Event Sourcing

Q: "A financial services company wants to implement event sourcing for all services. Advise them."

A: Event sourcing isn't appropriate everywhere:

Good Fit:

  • Audit requirements (financial transactions, compliance)
  • Complex domain with rich business rules
  • Need for temporal queries
  • Event-driven integrations already in place

Poor Fit:

  • Simple CRUD applications
  • Real-time analytics requirements
  • Team unfamiliar with the pattern
  • No audit requirements

Recommendation: Use event sourcing for core financial transactions (ledger, trades). Use traditional persistence for supporting services (user management, notifications).

CQRS: Command Query Responsibility Segregation

Architecture

Separate read and write models:

Commands (Write):
  Client → Command Handler → Domain Model → Event Store
                                                 ↓
                                          Events Published

Queries (Read):
  Client → Query Handler → Read Model (Denormalized Views)
                                 ↑
                          Event Projector (consumes published events)
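
The write-to-read flow can be sketched with a minimal projector. OrderSummaryProjector and the in-memory dict are illustrative stand-ins for a real projection service and a query-optimized store:

```python
# Minimal event projector: consumes domain events and maintains a
# denormalized read model keyed by order id.
class OrderSummaryProjector:
    def __init__(self):
        self.read_model = {}  # order_id -> summary row

    def project(self, event):
        etype, data = event["type"], event["data"]
        if etype == "OrderPlaced":
            self.read_model[data["orderId"]] = {
                "status": "placed",
                "total": data["totalAmount"],
            }
        elif etype == "OrderShipped":
            self.read_model[data["orderId"]]["status"] = "shipped"

projector = OrderSummaryProjector()
projector.project({"type": "OrderPlaced",
                   "data": {"orderId": "12345", "totalAmount": 99.99}})
projector.project({"type": "OrderShipped",
                   "data": {"orderId": "12345"}})
```

Queries then read projector.read_model directly, never touching the write model; the projection can be rebuilt from scratch at any time by replaying the event stream.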

CQRS Benefits

| Benefit | Explanation |
|---|---|
| Scalable Reads | Read model optimized for queries |
| Optimized Writes | Write model focused on business logic |
| Flexibility | Different storage for read/write |
| Performance | Materialized views for complex queries |

Interview Question: CQRS Consistency

Q: "How do you handle the eventual consistency gap in CQRS?"

A: Strategies for managing consistency:

  1. UI Optimistic Updates: Show pending state immediately
  2. Polling/WebSockets: Update UI when projection catches up
  3. Correlation IDs: Track command through to read model
  4. Read-Your-Writes: Route user's reads to master during gap

// Optimistic update example
async function placeOrder(order) {
  // Immediately show pending in UI
  updateUI({ ...order, status: 'pending' });

  // Send command
  const commandId = await sendCommand('PlaceOrder', order);

  // Poll for confirmation
  await pollUntilProjected(commandId);

  // Update UI with confirmed state
  const confirmed = await fetchOrder(order.id);
  updateUI(confirmed);
}

Event Streaming Platforms

Apache Kafka Architecture

Producers → Topics (Partitioned) → Consumer Groups
             Partitions: [P0] [P1] [P2]
             Consumer Group: [C0] [C1] [C2]

Key Concepts:

  • Topics: Category of events
  • Partitions: Parallelism unit, ordered within partition
  • Consumer Groups: Load balancing across consumers
  • Offsets: Position tracking for replay
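
These concepts can be illustrated with a toy in-memory partition, not the real Kafka client API: an append-only log in which each consumer tracks an offset and can seek backwards to replay.

```python
# Toy partition log: an append-only list, ordered within the partition.
class PartitionLog:
    def __init__(self):
        self.records = []

    def append(self, record):
        self.records.append(record)
        return len(self.records) - 1   # offset of the new record

class Consumer:
    def __init__(self, log):
        self.log = log
        self.offset = 0                # next offset to read

    def poll(self):
        batch = self.log.records[self.offset:]
        self.offset = len(self.log.records)   # "commit" our position
        return batch

    def seek(self, offset):
        self.offset = offset           # rewind (or skip ahead) for replay

log = PartitionLog()
for r in ["e1", "e2", "e3"]:
    log.append(r)

c = Consumer(log)
first = c.poll()     # ["e1", "e2", "e3"]
c.seek(1)
replayed = c.poll()  # ["e2", "e3"] — replayed from offset 1
```

Because the offset is just a position the consumer owns, replay costs nothing on the broker side; this is what distinguishes a log from a queue that deletes messages on acknowledgment.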

AWS Kinesis vs Kafka

| Feature | Kinesis | Kafka (MSK) |
|---|---|---|
| Management | Serverless | Managed clusters |
| Scaling | Shard-based | Partition-based |
| Retention | 24h-365d | Unlimited |
| Integration | Native AWS | Broader ecosystem |
| Cost Model | Per shard hour + data | Instance-based |
| Best For | AWS-native, variable load | High throughput, long retention |

Interview Question: Kafka Partition Strategy

Q: "You're designing a notification system with 100M users. How do you partition the Kafka topic?"

A: Design for parallelism and ordering:

Topic: user-notifications
Partition Key: user_id

Reasoning:
- All notifications for a user go to same partition
- Guarantees per-user ordering
- 100 partitions for parallelism
- Each partition ~1M users on average

Scaling Considerations:
- Can add partitions (only increases parallelism)
- Cannot decrease partitions
- Rebalancing happens automatically
- Consider hot users (celebrities) - may need special handling
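
The key-to-partition mapping above can be sketched as follows. Real Kafka clients use a murmur2-based hash; CRC32 is used here only so the sketch is deterministic and dependency-free:

```python
import zlib

NUM_PARTITIONS = 100

def partition_for(user_id: str) -> int:
    # Simplified stand-in for Kafka's default partitioner:
    # hash the key bytes, then mod by the partition count.
    return zlib.crc32(user_id.encode("utf-8")) % NUM_PARTITIONS

# The same user always maps to the same partition,
# which is what guarantees per-user ordering.
assert partition_for("user-42") == partition_for("user-42")
```

This also shows why adding partitions is disruptive for keyed topics: changing NUM_PARTITIONS remaps existing keys, so per-user ordering only holds for events produced after the expansion.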

Event Delivery Guarantees

Delivery Semantics

| Guarantee | Definition | Use Case |
|---|---|---|
| At-most-once | May lose events | Non-critical notifications |
| At-least-once | May duplicate | Most cases (with idempotency) |
| Exactly-once | No loss, no duplicates | Financial transactions |

Implementing Idempotency

def process_event(event):
    # Skip if this event was already handled (dedupe check)
    if event_store.exists(event.id):
        return

    # Apply the event's side effects
    result = handle_event(event)

    # Record the event as processed. In production this write should
    # happen in the same transaction as the side effects; otherwise a
    # crash between the two steps causes the event to be reprocessed.
    event_store.mark_processed(event.id, result)

Interview Question: Duplicate Events

Q: "Your payment service receives duplicate OrderPlaced events. How do you prevent double charges?"

A: Implement idempotency at multiple levels:

  1. Idempotency Key: Use orderId as natural key
  2. Deduplication Table: Track processed events
  3. Payment Gateway: Most support idempotency keys
  4. Database Constraints: Unique constraint on order-payment relationship

-- Prevent duplicate payments
CREATE TABLE payments (
    id UUID PRIMARY KEY,
    order_id UUID NOT NULL UNIQUE,  -- only one payment per order
    amount DECIMAL(12,2) NOT NULL,
    created_at TIMESTAMP DEFAULT now()
);

Key Principle: Assume events will be duplicated. Design all consumers to be idempotent.

Next, we'll explore high availability and disaster recovery patterns.
