API Design & Microservices Patterns

gRPC and GraphQL

REST is not the only way to build APIs. gRPC dominates internal microservice communication, while GraphQL excels at flexible client-driven queries. Knowing when to use each is a key interview differentiator.

Protocol Buffers (Protobuf)

gRPC uses Protocol Buffers as its interface definition language and serialization format. Protobuf is a binary format — smaller and faster to parse than JSON.

// user.proto — Proto3 syntax
syntax = "proto3";

package userservice;

// Message definitions with numbered fields
message User {
  string id = 1;            // Field number 1 — never reuse after deletion
  string name = 2;
  string email = 3;
  UserRole role = 4;
  repeated Address addresses = 5;  // repeated = list/array
  optional string phone = 6;       // optional field (proto3)
}

enum UserRole {
  USER_ROLE_UNSPECIFIED = 0;  // Proto3 requires the first enum value to be 0
  USER_ROLE_ADMIN = 1;
  USER_ROLE_MEMBER = 2;
}

message Address {
  string street = 1;
  string city = 2;
  string country = 3;
}
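To make the size claim concrete, here is a minimal sketch of Protobuf's varint wire encoding in TypeScript — an illustration only, not the official protobuf library. On the wire, each field is a one-byte tag (field number plus wire type) followed by a variable-length integer, which is why small messages stay small.

```typescript
// Encode an unsigned integer as a Protobuf varint: 7 bits per byte,
// high bit set on every byte except the last.
function encodeVarint(value: number): number[] {
  const bytes: number[] = [];
  while (value > 0x7f) {
    bytes.push((value & 0x7f) | 0x80);
    value >>>= 7;
  }
  bytes.push(value);
  return bytes;
}

// A field on the wire is a tag byte (fieldNumber << 3 | wireType) + payload.
// Wire type 0 = varint.
function encodeField(fieldNumber: number, value: number): number[] {
  return [(fieldNumber << 3) | 0, ...encodeVarint(value)];
}

const wire = encodeField(2, 300); // field number 2, value 300
const json = JSON.stringify({ quantity: 300 });

console.log(wire.length, json.length); // 3 bytes vs 16 bytes
```

Three bytes on the wire versus sixteen for the JSON equivalent — and real Protobuf adds length-delimited and fixed-width wire types on top of this.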

Backward Compatibility Rules

  • Never change the field number of an existing field
  • Never reuse a deleted field number — mark it reserved instead
  • Adding new fields is safe — old clients ignore unknown fields
  • Removing fields is safe — clients that still expect the field see its default value
  • Renaming fields is safe — only the field number matters on the wire

// Reserving deleted field numbers to prevent accidental reuse
message User {
  reserved 7, 8;                    // These field numbers are retired
  reserved "legacy_username";       // This field name is retired
  string id = 1;
  string name = 2;
}

gRPC Service Definitions

gRPC supports four communication patterns, each suited for different use cases.

// service.proto
syntax = "proto3";

package orderservice;

service OrderService {
  // 1. Unary: simple request-response (like REST)
  rpc GetOrder(GetOrderRequest) returns (Order);

  // 2. Server streaming: server sends multiple responses
  rpc WatchOrderStatus(WatchRequest) returns (stream OrderStatus);

  // 3. Client streaming: client sends multiple requests
  rpc UploadOrderDocuments(stream Document) returns (UploadSummary);

  // 4. Bidirectional streaming: both sides send streams
  rpc OrderChat(stream ChatMessage) returns (stream ChatMessage);
}

message GetOrderRequest {
  string order_id = 1;
}

message Order {
  string id = 1;
  string customer_id = 2;
  repeated OrderItem items = 3;
  OrderStatus status = 4;
  double total = 5;
}

message OrderItem {
  string product_id = 1;
  int32 quantity = 2;
  double price = 3;
}

message OrderStatus {
  string order_id = 1;
  string status = 2;
  string timestamp = 3;
}

message WatchRequest {
  string order_id = 1;
}

message Document {
  string filename = 1;
  bytes content = 2;
}

message UploadSummary {
  int32 files_received = 1;
  int64 total_bytes = 2;
}

message ChatMessage {
  string sender = 1;
  string content = 2;
  string timestamp = 3;
}

The Four Patterns Explained

| Pattern | Use Case | Example |
|---|---|---|
| Unary | Standard request-response | Fetch an order by ID |
| Server Streaming | Server pushes multiple results | Stock price ticker, live order status updates |
| Client Streaming | Client sends data in chunks | File upload, batch data ingestion |
| Bidirectional Streaming | Real-time two-way communication | Chat, collaborative editing, gaming |

Implementing a gRPC Server (Go Example)

// server.go — Unary and server streaming implementation.
// Assumes `pb` is the protoc-generated package and `db` is your data-access layer.
package main

import (
    "context"
    "time"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"

    pb "myapp/proto/orderservice"
)

type orderServer struct {
    pb.UnimplementedOrderServiceServer
}

// Unary RPC
func (s *orderServer) GetOrder(ctx context.Context, req *pb.GetOrderRequest) (*pb.Order, error) {
    // Fetch the order from the database
    order, err := db.FindOrder(req.OrderId)
    if err != nil {
        return nil, status.Errorf(codes.NotFound, "order %s not found", req.OrderId)
    }
    return order, nil
}

// Server streaming RPC
func (s *orderServer) WatchOrderStatus(req *pb.WatchRequest, stream pb.OrderService_WatchOrderStatusServer) error {
    orderID := req.OrderId
    for {
        st, err := db.GetOrderStatus(orderID)
        if err != nil {
            return err
        }
        if err := stream.Send(st); err != nil {
            return err
        }
        if st.Status == "delivered" {
            return nil // Stream complete
        }
        time.Sleep(2 * time.Second) // Poll interval
    }
}

HTTP/2 Benefits

gRPC runs on HTTP/2, which provides significant performance advantages over HTTP/1.1.

| Feature | HTTP/1.1 | HTTP/2 |
|---|---|---|
| Multiplexing | One request per connection (or pipelining with head-of-line blocking) | Multiple concurrent streams on one connection |
| Headers | Sent as plain text every request | Compressed with HPACK, only deltas sent |
| Format | Text-based | Binary framing layer |
| Server Push | Not supported | Server can push resources proactively |
| Connection | Multiple TCP connections needed | Single TCP connection for all streams |
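Multiplexing is easy to see in practice. A minimal Node sketch, assuming Node 18+ and its built-in `http2` module (cleartext h2c for simplicity, no TLS): both requests travel as separate streams over a single client session, i.e. one TCP connection.

```typescript
import http2 from "node:http2";
import { once } from "node:events";

// A tiny h2c server that echoes the request path.
const server = http2.createServer((req, res) => {
  res.end(`hello from ${req.url}`);
});
server.listen(0);
await once(server, "listening");
const { port } = server.address() as { port: number };

// One client session = one TCP connection; each request is a stream on it.
const session = http2.connect(`http://localhost:${port}`);

async function fetchPath(path: string): Promise<string> {
  const stream = session.request({ ":path": path });
  let body = "";
  stream.setEncoding("utf8");
  stream.on("data", (chunk) => (body += chunk));
  await once(stream, "end");
  return body;
}

// Both requests run concurrently on the shared session.
const bodies = await Promise.all([fetchPath("/a"), fetchPath("/b")]);
console.log(bodies);
session.close();
server.close();
```

With HTTP/1.1 the second request would either wait behind the first or force a second TCP connection; here both streams interleave on one socket.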

gRPC vs REST Performance

| Aspect | REST (JSON over HTTP/1.1) | gRPC (Protobuf over HTTP/2) |
|---|---|---|
| Payload size | ~2-10x larger (text JSON) | Compact binary encoding |
| Serialization speed | Slower (text parsing) | ~5-10x faster (binary) |
| Streaming | Requires WebSocket or SSE | Native bidirectional streaming |
| Browser support | Universal | Requires gRPC-Web proxy |
| Tooling | curl, Postman, any HTTP client | Needs protoc compiler, gRPC tools |
| Contract | OpenAPI/Swagger (optional) | .proto files (required, strongly typed) |
| Code generation | Optional | Built-in (Go, Java, Python, etc.) |

GraphQL

GraphQL lets clients request exactly the data they need — no more, no less. It uses a single endpoint and a type system to define what queries are possible.

Schema Definition

// schema.graphql
type User {
  id: ID!
  name: String!
  email: String!
  orders(status: OrderStatus, first: Int): [Order!]!
}

type Order {
  id: ID!
  total: Float!
  status: OrderStatus!
  items: [OrderItem!]!
  createdAt: String!
}

type OrderItem {
  product: Product!
  quantity: Int!
  price: Float!
}

type Product {
  id: ID!
  name: String!
  price: Float!
}

enum OrderStatus {
  PENDING
  PROCESSING
  SHIPPED
  DELIVERED
  CANCELLED
}

type Query {
  user(id: ID!): User
  users(first: Int, after: String): UserConnection!
  order(id: ID!): Order
}

type Mutation {
  createOrder(input: CreateOrderInput!): Order!
  cancelOrder(id: ID!): Order!
}

type Subscription {
  orderStatusChanged(orderId: ID!): Order!
}

input CreateOrderInput {
  userId: ID!
  items: [OrderItemInput!]!
}

input OrderItemInput {
  productId: ID!
  quantity: Int!
}

# Relay-style pagination
type UserConnection {
  edges: [UserEdge!]!
  pageInfo: PageInfo!
}

type UserEdge {
  node: User!
  cursor: String!
}

type PageInfo {
  hasNextPage: Boolean!
  endCursor: String
}
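Against the schema above, a client asks for exactly the fields its screen needs in a single request (the `u-123` ID is a made-up example); the response JSON mirrors the shape of the query, nothing more.

```graphql
query UserOrders {
  user(id: "u-123") {
    name
    orders(status: SHIPPED, first: 10) {
      id
      total
      items {
        product { name }
        quantity
      }
    }
  }
}
```

A REST endpoint serving the same screen would likely return the full User and Order representations, or require several round trips.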

The N+1 Problem and DataLoader

The N+1 problem is the most critical GraphQL performance issue. Without DataLoader, fetching a list of orders together with their products makes 1 query for the orders plus N additional queries, one per product.

// WITHOUT DataLoader: N+1 queries
// Query: { users { orders { items { product { name } } } } }
// 1 query for users
// N queries for each user's orders
// M queries for each order's items
// K queries for each item's product

// WITH DataLoader: batched queries
import DataLoader from 'dataloader';

// DataLoader batches individual loads into a single query
const productLoader = new DataLoader<string, Product>(async (productIds) => {
  // One query: SELECT * FROM products WHERE id IN (id1, id2, id3, ...)
  const products = await db.products.findMany({
    where: { id: { in: productIds as string[] } }
  });

  // Return in the same order as the input IDs
  const productMap = new Map(products.map(p => [p.id, p]));
  return productIds.map(id => productMap.get(id) || new Error(`Product ${id} not found`));
});

// Resolver uses the loader
const resolvers = {
  OrderItem: {
    product: (item: OrderItem) => productLoader.load(item.productId)
    // DataLoader automatically batches all loads within the same tick
  }
};
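To demystify the "same tick" batching, here is a stripped-down sketch of the mechanism — an illustration only, not the real dataloader package (which also adds per-key caching and error handling): calls to `load` queue their keys, and a single scheduled microtask flushes the whole batch at once.

```typescript
type BatchFn<K, V> = (keys: K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (value: V) => void }[] = [];

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise((resolve) => {
      if (this.queue.length === 0) {
        // First load in this tick: schedule one flush for the whole batch.
        queueMicrotask(() => this.flush());
      }
      this.queue.push({ key, resolve });
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    const values = await this.batchFn(batch.map((item) => item.key));
    batch.forEach((item, i) => item.resolve(values[i]));
  }
}

// Usage: three loads in the same tick, but the batch function runs once.
let calls = 0;
const loader = new TinyLoader<string, string>(async (ids) => {
  calls++; // counts batch executions, not individual loads
  return ids.map((id) => `product:${id}`);
});

const results = await Promise.all([
  loader.load("a"),
  loader.load("b"),
  loader.load("c"),
]);
console.log(calls, results); // calls === 1: all three loads were batched
```

The same idea is why DataLoader instances should be created per request: the queue (and in the real library, the cache) must not leak between users.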

When to Use Each: Decision Matrix

| Criterion | REST | gRPC | GraphQL |
|---|---|---|---|
| Best for | Public APIs, simple CRUD | Internal microservices, high performance | Flexible client queries, mobile apps |
| Client type | Any HTTP client | Generated clients from .proto | Web/mobile with varying data needs |
| Data shape | Fixed by server | Fixed by .proto contract | Chosen by client per query |
| Performance | Good | Excellent (binary, HTTP/2) | Good (but resolver overhead) |
| Streaming | SSE or WebSocket (extra) | Native, bidirectional | Subscriptions (WebSocket-based) |
| Browser support | Universal | Requires proxy (gRPC-Web) | Universal |
| Learning curve | Low | Medium (proto, code gen) | Medium-high (schema, resolvers, caching) |
| API evolution | Versioning needed | Built-in backward compatibility | Schema evolution (deprecation) |
| Caching | HTTP caching (GET) | Custom (no HTTP caching) | Complex (query-dependent) |
| Error handling | HTTP status codes | gRPC status codes | Always 200, errors in response body |
| Contract | OpenAPI (optional) | .proto (required) | Schema (required) |

Quick Interview Answer

"I'd use REST for a public-facing API where simplicity and broad client support matter. For internal microservice communication, I'd choose gRPC because of its binary encoding, HTTP/2 multiplexing, and native streaming — latency between services is critical. For a mobile or frontend-heavy application with diverse data requirements, I'd put GraphQL as an aggregation layer on top of the microservices, so clients can request exactly what they need in one round trip."

Next: We'll dive into microservices architecture patterns — sagas, circuit breakers, CQRS, and event sourcing.
