MCP Server in TypeScript: OAuth 2.1 + Streamable HTTP (2026)

May 14, 2026

To add OAuth 2.1 to a Model Context Protocol server in TypeScript, run the @modelcontextprotocol/sdk 1.29.0 StreamableHTTPServerTransport behind Express, publish /.well-known/oauth-protected-resource per RFC 9728, validate bearer tokens with jose while enforcing RFC 8707 resource indicators, and check authInfo.scopes inside every tool handler. This guide builds the whole thing end-to-end.

TL;DR

You will build a remote MCP server in TypeScript that an MCP client (Claude Desktop, Cursor, or any spec-compliant agent) can call over HTTP with a bearer token. The server exposes a tasks API with two scopes — tasks:read and tasks:write — and demonstrates every piece of the November 2025 MCP authorization spec[1]: protected resource metadata, resource indicators, scope-based authorization inside tool handlers, and the DNS-rebinding mitigation that landed in SDK 1.24.0. By the end you will have a runnable, observable, scope-aware server in about 300 lines of TypeScript across five small files.

What you'll learn

  • How to scaffold an MCP server with @modelcontextprotocol/sdk 1.29.0 and zod 4.4.3
  • How to register typed tools whose handlers receive an authInfo argument
  • How to wire StreamableHTTPServerTransport behind Express 5 with session management
  • How to publish /.well-known/oauth-protected-resource per RFC 9728
  • How to verify access tokens with jose and enforce RFC 8707 resource indicators
  • How to enforce per-tool OAuth scopes inside the handler
  • How to turn on DNS-rebinding protection and Host-header allowlisting
  • How to add structured logging with pino so production runs are debuggable

Prerequisites

  • Node.js 22 LTS or Node.js 24 LTS. The SDK requires Node 18+; Node 22 is in Maintenance LTS through April 2027 and Node 24 is the current Active LTS through October 2026[2]. The examples below pin Node 22 to match the @types/node line installed in Step 1; everything in this guide runs unchanged on Node 24.
  • npm 10 (ships with Node 22) or pnpm 10.
  • An OAuth 2.1 authorization server that issues JWT access tokens with aud, iss, sub, scope, and exp claims. WorkOS AuthKit and Stytch ship native RFC 8707 resource-indicator support; Auth0 supports it behind an opt-in Resource Parameter Compatibility Profile; Microsoft Entra ID does not yet implement the resource parameter and requires a scope={resource}/.default workaround. The verifier in this guide is vendor-agnostic.
  • Familiarity with TypeScript, async/await, and basic Express.
  • An MCP-compatible client for testing — the @modelcontextprotocol/inspector CLI is the fastest path.

You do not need any of: a database, a real OAuth server (the verifier accepts the JWKS URL of whichever provider you have), Docker, or Claude Desktop. The whole tutorial runs on tsx.

Step 1: Bootstrap the TypeScript project

Create a new directory and pin every dependency to the versions confirmed on the npm registry on the day of writing[3]:

mkdir mcp-tasks-server && cd mcp-tasks-server
npm init -y
npm install \
  @modelcontextprotocol/sdk@1.29.0 \
  zod@4.4.3 \
  express@5.2.1 \
  jose@6.2.3 \
  pino@10.3.1 \
  pino-pretty@13.1.3
npm install --save-dev \
  typescript@6.0.3 \
  tsx@4.21.0 \
  '@types/node@^22.19.19' \
  '@types/express@5.0.6'

Patch package.json to enable ESM and add scripts:

{
  "name": "mcp-tasks-server",
  "version": "0.1.0",
  "type": "module",
  "scripts": {
    "dev": "tsx watch src/server.ts",
    "build": "tsc -p tsconfig.json",
    "start": "node dist/server.js"
  }
}

Create tsconfig.json with strict NodeNext settings — the SDK ships ESM whose relative imports carry .js extensions, and your own TypeScript files must do the same, so module: NodeNext is mandatory:

{
  "compilerOptions": {
    "target": "ES2023",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "resolveJsonModule": true,
    "declaration": false
  },
  "include": ["src/**/*.ts"]
}

Make the source tree:

mkdir -p src/auth src/tools src/observability

Step 2: Register typed tools with McpServer and zod

The SDK exposes a high-level McpServer class that handles JSON-RPC, capability negotiation, and tool listing for you. Tool handlers receive a second argument carrying authInfo — that is where scope enforcement lives.

Create src/tools/tasks.ts:

import { z } from "zod";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

// In-memory store keyed by OAuth subject ("sub" claim). A real server would use
// a database; the API surface is identical.
type Task = { id: string; title: string; done: boolean; ownerSub: string };
const tasks: Task[] = [];

function requireScope(authInfo: { scopes?: string[] } | undefined, scope: string) {
  if (!authInfo?.scopes?.includes(scope)) {
    throw new Error(`Forbidden: missing required scope "${scope}"`);
  }
}

export function registerTaskTools(server: McpServer) {
  server.registerTool(
    "list_tasks",
    {
      title: "List tasks",
      description: "List tasks owned by the authenticated user.",
      inputSchema: { includeDone: z.boolean().optional().default(false) },
    },
    async ({ includeDone }, { authInfo }) => {
      requireScope(authInfo, "tasks:read");
      const sub = (authInfo?.extra?.sub as string) ?? "";
      const mine = tasks.filter(
        (t) => t.ownerSub === sub && (includeDone || !t.done),
      );
      return {
        content: [{ type: "text", text: JSON.stringify(mine, null, 2) }],
      };
    },
  );

  server.registerTool(
    "create_task",
    {
      title: "Create task",
      description: "Create a new task owned by the authenticated user.",
      inputSchema: { title: z.string().min(1).max(140) },
    },
    async ({ title }, { authInfo }) => {
      requireScope(authInfo, "tasks:write");
      const sub = (authInfo?.extra?.sub as string) ?? "";
      const task: Task = {
        id: crypto.randomUUID(),
        title,
        done: false,
        ownerSub: sub,
      };
      tasks.push(task);
      return { content: [{ type: "text", text: JSON.stringify(task) }] };
    },
  );

  server.registerTool(
    "complete_task",
    {
      title: "Mark task complete",
      description: "Mark a task complete by id.",
      inputSchema: { id: z.string().uuid() },
    },
    async ({ id }, { authInfo }) => {
      requireScope(authInfo, "tasks:write");
      const sub = (authInfo?.extra?.sub as string) ?? "";
      const task = tasks.find((t) => t.id === id && t.ownerSub === sub);
      if (!task) throw new Error("Task not found");
      task.done = true;
      return { content: [{ type: "text", text: JSON.stringify(task) }] };
    },
  );
}

Three details matter here. First, inputSchema is a plain object of zod schemas — the SDK adapts each to the Standard Schema interface and generates a JSON Schema for the wire protocol[4]. Second, the second argument to every handler is { authInfo, ... }, populated by the bearer-auth middleware in Step 5; we never trust a sub argument from the client. Third, the in-handler requireScope check is intentional: even when the AS minted a token with the right scopes, the AS does not know which tool the client is about to call, so scope enforcement is a server responsibility[5].

Step 3: Wire the Streamable HTTP transport behind Express

Streamable HTTP replaced the deprecated SSE transport in the March 2025 spec revision[6]. It runs over a single endpoint (we will use /mcp) and uses three verbs: POST for client→server JSON-RPC, GET to open an SSE stream for server→client notifications, and DELETE to terminate a session.

Create src/server.ts:

import express from "express";
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { isInitializeRequest } from "@modelcontextprotocol/sdk/types.js";
import { registerTaskTools } from "./tools/tasks.js";
import { logger, requestIdMiddleware } from "./observability/logger.js";
import { protectedResourceMetadata } from "./auth/prm.js";
import { requireBearerToken } from "./auth/verifier.js";

const PORT = Number(process.env.PORT ?? 3333);
const PUBLIC_BASE_URL = process.env.PUBLIC_BASE_URL ?? `http://localhost:${PORT}`;

const app = express();
app.use(express.json({ limit: "1mb" }));
app.use(requestIdMiddleware);

// RFC 9728 — Protected Resource Metadata. Anonymous, must be reachable.
app.get("/.well-known/oauth-protected-resource", protectedResourceMetadata);

// Map of sessionId → transport. Streamable HTTP is stateful by default.
const transports = new Map<string, StreamableHTTPServerTransport>();

app.all("/mcp", requireBearerToken, async (req, res) => {
  const sessionHeader = req.header("mcp-session-id");
  let transport = sessionHeader ? transports.get(sessionHeader) : undefined;

  if (!transport && req.method === "POST" && isInitializeRequest(req.body)) {
    transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: () => randomUUID(),
      onsessioninitialized: (sid) => transports.set(sid, transport!),
      enableDnsRebindingProtection: true,
      allowedHosts: [`localhost:${PORT}`, `127.0.0.1:${PORT}`],
      allowedOrigins: ["https://claude.ai", "https://app.cursor.com"],
    });
    transport.onclose = () => {
      if (transport!.sessionId) transports.delete(transport!.sessionId);
    };
    const server = new McpServer({ name: "tasks", version: "0.1.0" });
    registerTaskTools(server);
    await server.connect(transport);
  }

  if (!transport) {
    res.status(400).json({
      jsonrpc: "2.0",
      error: { code: -32000, message: "No active session for this request" },
      id: null,
    });
    return;
  }

  await transport.handleRequest(req, res, req.body);
});

app.listen(PORT, () => {
  logger.info({ port: PORT, baseUrl: PUBLIC_BASE_URL }, "mcp server listening");
});

A few points are worth reading carefully. The session map is in-memory; if you horizontally scale, switch to sessionIdGenerator: undefined for stateless mode and accept the loss of server→client notifications. enableDnsRebindingProtection: true is the security feature Anthropic added in SDK 1.24.0[7] — it rejects any request whose Host header is not in allowedHosts, foiling the rebinding attack documented by Straiker in late 2025. Put your real production hostname in allowedHosts and your trusted MCP client origins in allowedOrigins.
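The branching inside the /mcp handler is easy to get subtly wrong, so it helps to see it as a pure function. This is an illustrative distillation, not an SDK API; the names are ours:

```typescript
// Sketch of the session-routing rules the /mcp handler implements.
// A new transport is created only for a POST that carries an initialize
// request; every other request must reference a live session id.
type SessionDecision = "reuse" | "create" | "reject";

function routeSession(
  method: string,
  sessionId: string | undefined,
  isInitialize: boolean,
  knownSessions: Set<string>,
): SessionDecision {
  if (sessionId && knownSessions.has(sessionId)) return "reuse";
  if (method === "POST" && isInitialize) return "create";
  return "reject"; // maps to the 400 "No active session" JSON-RPC error
}
```

Note that a GET (to open the SSE stream) or DELETE (to end the session) without a known mcp-session-id is rejected outright; only an initialize POST may mint a session.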

Step 4: Publish protected resource metadata per RFC 9728

MCP clients discover where to get a token by fetching the server's protected resource metadata document. The location is fixed: /.well-known/oauth-protected-resource[8]. The November 2025 spec made publishing this document mandatory for HTTP-transport MCP servers[9].

Create src/auth/prm.ts:

import type { Request, Response } from "express";

const PUBLIC_BASE_URL =
  process.env.PUBLIC_BASE_URL ?? "http://localhost:3333";
const AUTH_SERVER = process.env.OAUTH_ISSUER!; // e.g. https://example.auth0.com/

export function protectedResourceMetadata(_req: Request, res: Response) {
  res.json({
    resource: `${PUBLIC_BASE_URL}/mcp`,
    authorization_servers: [AUTH_SERVER],
    bearer_methods_supported: ["header"],
    scopes_supported: ["tasks:read", "tasks:write"],
    resource_documentation: `${PUBLIC_BASE_URL}/docs`,
  });
}

Two fields earn their keep. The resource value is the exact URL clients must echo as the resource parameter when requesting a token; if those two strings disagree, RFC 8707 audience validation will (correctly) fail at verification time. The authorization_servers array lists every AS the resource trusts — clients pick one, fetch its /.well-known/oauth-authorization-server document (RFC 8414[10]), and run the standard authorization code + PKCE flow. The MCP server itself never issues tokens.
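One detail of that metadata fetch trips people up: RFC 8414 forms the well-known URL by inserting the segment between the host and any path component of the issuer identifier, not by appending it. A sketch of the derivation (the helper name is ours):

```typescript
// Derive the RFC 8414 authorization-server metadata URL from an issuer.
// Per RFC 8414 the well-known segment goes after the host and before any
// path component of the issuer, so tenanted issuers resolve correctly.
function authServerMetadataUrl(issuer: string): string {
  const u = new URL(issuer);
  const path = u.pathname.replace(/\/$/, ""); // drop a trailing slash
  return `${u.origin}/.well-known/oauth-authorization-server${path}`;
}
```

So an issuer of https://auth.example.com/tenant1 yields https://auth.example.com/.well-known/oauth-authorization-server/tenant1, which is easy to get wrong with naive string concatenation.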

If you need to support clients that cannot use Dynamic Client Registration (RFC 7591[11]), the November 2025 spec added a second registration path called Client ID Metadata Documents — clients identify themselves with a URL they control[12]. Both DCR and CIMD are optional from the resource server's point of view; you only have to support whichever your AS supports.

Step 5: Verify access tokens with jose and enforce resource indicators

The verifier is small but every line earns its place. It checks the JWT signature against the AS's JWKS, validates iss, validates aud against the resource URL (this is the RFC 8707 enforcement[13]), checks exp, and finally derives the scopes array from the scope claim.

Create src/auth/verifier.ts:

import { createRemoteJWKSet, jwtVerify } from "jose";
import type { NextFunction, Request, Response } from "express";

const OAUTH_ISSUER = process.env.OAUTH_ISSUER!;
const PUBLIC_BASE_URL =
  process.env.PUBLIC_BASE_URL ?? "http://localhost:3333";
const RESOURCE_URL = `${PUBLIC_BASE_URL}/mcp`;
// Resolve relative to the issuer so a missing trailing slash on
// OAUTH_ISSUER cannot mangle the hostname.
const JWKS = createRemoteJWKSet(
  new URL(".well-known/jwks.json", OAUTH_ISSUER),
);

export interface AuthInfo {
  token: string;
  clientId?: string;
  scopes: string[];
  expiresAt?: number;
  extra: { sub?: string; iss?: string };
}

declare module "express-serve-static-core" {
  interface Request {
    auth?: AuthInfo;
  }
}

export async function requireBearerToken(
  req: Request,
  res: Response,
  next: NextFunction,
) {
  const header = req.header("authorization") ?? "";
  if (!header.toLowerCase().startsWith("bearer ")) {
    return unauthorized(res, "missing_bearer_token");
  }
  const token = header.slice("bearer ".length);

  try {
    const { payload } = await jwtVerify(token, JWKS, {
      issuer: OAUTH_ISSUER,
      audience: RESOURCE_URL,
    });
    const scopes =
      typeof payload.scope === "string" ? payload.scope.split(" ") : [];
    req.auth = {
      token,
      clientId:
        typeof payload.azp === "string"
          ? payload.azp
          : typeof payload.client_id === "string"
            ? payload.client_id
            : undefined,
      scopes,
      expiresAt: typeof payload.exp === "number" ? payload.exp : undefined,
      extra: {
        sub: typeof payload.sub === "string" ? payload.sub : undefined,
        iss: typeof payload.iss === "string" ? payload.iss : undefined,
      },
    };
    next();
  } catch (err) {
    return unauthorized(res, "invalid_token", (err as Error).message);
  }
}

function unauthorized(res: Response, code: string, description?: string) {
  // RFC 9728 requires the WWW-Authenticate challenge to point at the PRM.
  res
    .status(401)
    .set(
      "WWW-Authenticate",
      `Bearer realm="mcp", error="${code}", resource_metadata="${process.env.PUBLIC_BASE_URL ?? "http://localhost:3333"}/.well-known/oauth-protected-resource"${description ? `, error_description="${description}"` : ""}`,
    )
    .json({ error: code, error_description: description });
}

The middleware writes the discovered AuthInfo onto req.auth, which StreamableHTTPServerTransport.handleRequest reads and threads through to each tool handler as the authInfo argument seen in Step 2. The WWW-Authenticate header includes a resource_metadata parameter so a smart client can self-discover where to get a token after a 401 — this is the affordance RFC 9728 was written to enable[14].
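On the client side, recovering from that 401 starts with pulling the resource_metadata value out of the challenge. A minimal sketch (the helper name is ours, and real clients should parse the full auth-param grammar rather than a single regex):

```typescript
// Extract the resource_metadata URL from a WWW-Authenticate challenge so a
// client knows where to fetch the protected resource metadata document.
function extractResourceMetadata(wwwAuthenticate: string): string | undefined {
  const match = wwwAuthenticate.match(/resource_metadata="([^"]+)"/);
  return match?.[1];
}
```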

Step 6: Enforce scope-based authorization inside tool handlers

The requireScope helper from Step 2 is the second half of the story. Scopes flow from the AS into the JWT, through the verifier, into req.auth.scopes, and finally into the tool handler's authInfo.scopes. The check inside the handler is intentional: the network layer can authenticate, but only the tool itself knows what scope it needs.

When you wire this to a real AS, mint two scopes — tasks:read and tasks:write — and ensure your token request includes scope=tasks:read tasks:write and (on providers that support it) resource=https://your-host/mcp. WorkOS AuthKit honors RFC 8707 natively[15]; Auth0 honors it once the Resource Parameter Compatibility Profile is enabled[16]; check your provider docs before assuming.
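For concreteness, here is what that token-exchange body looks like for a standard authorization_code + PKCE flow. The field names come from RFC 6749 and RFC 8707; the client id and redirect URI are placeholders for whatever your AS issued:

```typescript
// Build the form body for the token request. The resource parameter must
// match the PRM document's resource field character-for-character.
function buildTokenRequestBody(code: string, codeVerifier: string): string {
  return new URLSearchParams({
    grant_type: "authorization_code",
    code,
    code_verifier: codeVerifier,
    client_id: "your-client-id",                 // placeholder
    redirect_uri: "http://localhost:8090/callback", // placeholder
    scope: "tasks:read tasks:write",
    resource: "http://localhost:3333/mcp",       // RFC 8707 resource indicator
  }).toString();
}
```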

Two production patterns are worth adopting now while the code is fresh. First, never read sub from the JSON-RPC arguments — always pull it from authInfo.extra.sub. The example tools follow that rule, which is why they store tasks keyed by the verified subject claim and not by anything the client supplied. Second, scope checks should always be the first line of a handler so error responses are uniform; if you scatter them later, you will eventually forget one.

Step 7: Turn on DNS-rebinding protection and Host allowlisting

The DNS-rebinding attack reported by Straiker in late 2025 weaponized any MCP server listening on localhost: a malicious website triggered the browser to resolve a controlled hostname to 127.0.0.1 and ran tools against the local server with the visiting user's network identity[17]. Anthropic shipped the mitigation in @modelcontextprotocol/sdk 1.24.0 — Host-header validation that rejects any request whose Host header is not in the configured allowedHosts list. The protection is opt-in (off by default for backward compatibility, per advisory CVE-2025-66414) unless you use the createMcpExpressApp() helper, so we set it explicitly on the transport.

The StreamableHTTPServerTransport options used in Step 3 already enable the protection:

new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
  onsessioninitialized: (sid) => transports.set(sid, transport!),
  enableDnsRebindingProtection: true,
  allowedHosts: [`localhost:${PORT}`, `127.0.0.1:${PORT}`],
  allowedOrigins: ["https://claude.ai", "https://app.cursor.com"],
});

For a production deployment, replace the localhost entries with your real hostname and add every MCP-client origin you intend to support to allowedOrigins. Both lists do strict string matching — wildcards are intentionally not supported. The MCP security best-practices doc explicitly recommends configuring both lists in production[18].
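"Strict string matching" means exactly that. As an illustration (not the SDK's internal code), the check amounts to:

```typescript
// Exact-match Host allowlisting: no wildcards, no subdomain matching.
// A rebinding attacker controls DNS but not the Host header value the
// browser sends, which is why an exact comparison is enough.
function isAllowedHost(
  hostHeader: string | undefined,
  allowed: string[],
): boolean {
  return hostHeader !== undefined && allowed.includes(hostHeader);
}
```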

There is a second hardening lever worth adding to the same Express app: rate limiting. The SDK already depends on express-rate-limit; the simplest production setup mounts it on /mcp with a 60 req/min cap per IP for unauthenticated requests and a higher cap for authenticated ones.
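To make the policy concrete, here is a minimal fixed-window limiter in pure TypeScript. It is a sketch of the idea, not a replacement for express-rate-limit, which handles headers, stores, and proxy-aware IP extraction for you:

```typescript
// Minimal fixed-window rate limiter keyed by client identifier (e.g. IP).
// Returns true while the caller is within `limit` requests per `windowMs`.
function makeRateLimiter(limit: number, windowMs: number) {
  const hits = new Map<string, { count: number; resetAt: number }>();
  return (key: string, now: number): boolean => {
    const entry = hits.get(key);
    if (!entry || now >= entry.resetAt) {
      hits.set(key, { count: 1, resetAt: now + windowMs });
      return true; // first request in a fresh window
    }
    entry.count += 1;
    return entry.count <= limit;
  };
}
```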

Step 8: Structured observability with pino and request IDs

Tool calls fail in mysterious ways when there are no logs. The minimum useful pattern is a request-id middleware that attaches an id to every request and a pino instance that emits one structured line per tool invocation.

Create src/observability/logger.ts:

import pino from "pino";
import { randomUUID } from "node:crypto";
import type { NextFunction, Request, Response } from "express";

export const logger = pino({
  level: process.env.LOG_LEVEL ?? "info",
  transport:
    process.env.NODE_ENV === "production"
      ? undefined
      : { target: "pino-pretty", options: { translateTime: "SYS:HH:MM:ss.l" } },
  base: { service: "mcp-tasks-server" },
});

declare module "express-serve-static-core" {
  interface Request {
    log?: pino.Logger;
    requestId?: string;
  }
}

export function requestIdMiddleware(
  req: Request,
  res: Response,
  next: NextFunction,
) {
  const requestId =
    req.header("x-request-id") ?? randomUUID();
  req.requestId = requestId;
  req.log = logger.child({ requestId });
  res.setHeader("x-request-id", requestId);

  const started = process.hrtime.bigint();
  res.on("finish", () => {
    const latencyMs = Number(process.hrtime.bigint() - started) / 1_000_000;
    req.log!.info(
      {
        method: req.method,
        path: req.path,
        status: res.statusCode,
        latencyMs: Math.round(latencyMs * 100) / 100,
        clientId: req.auth?.clientId,
        userSub: req.auth?.extra?.sub,
      },
      "request",
    );
  });
  next();
}

When you have this in place, every request emits one JSON log line in production and a colored line in development. The MCP Inspector and Claude Desktop both pass through the x-request-id header you set on the response, so end-to-end correlation works without further plumbing.

Verification

Boot the server with a real issuer URL. The tutorial uses auth.example.com as a stand-in — substitute your own AS:

OAUTH_ISSUER=https://auth.example.com/ \
PUBLIC_BASE_URL=http://localhost:3333 \
npm run dev

In a second terminal, confirm the protected resource metadata document is reachable:

curl -s http://localhost:3333/.well-known/oauth-protected-resource | jq .

Expected output:

{
  "resource": "http://localhost:3333/mcp",
  "authorization_servers": ["https://auth.example.com/"],
  "bearer_methods_supported": ["header"],
  "scopes_supported": ["tasks:read", "tasks:write"],
  "resource_documentation": "http://localhost:3333/docs"
}

Confirm that anonymous requests return a 401 with a WWW-Authenticate challenge pointing back at the PRM:

curl -i -X POST http://localhost:3333/mcp \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{},"id":1}'

You should see:

HTTP/1.1 401 Unauthorized
WWW-Authenticate: Bearer realm="mcp", error="missing_bearer_token", resource_metadata="http://localhost:3333/.well-known/oauth-protected-resource"

To exercise the full happy path, run the official Inspector against the server. Write a config file describing the remote server:

// mcp.json
{
  "mcpServers": {
    "tasks": {
      "type": "streamable-http",
      "url": "http://localhost:3333/mcp"
    }
  }
}

Then launch the Inspector pointing at that config:

npx @modelcontextprotocol/inspector --config mcp.json --server tasks

The Inspector starts a local web UI at http://localhost:6274. Open it, paste your bearer token into the "Authorization" header field on the connection panel, and click "Connect". Once connected, the UI lists the three tools and lets you invoke create_task, list_tasks, and complete_task against the live server. Watch your pino logs — every call emits a single structured line carrying clientId, userSub, and latencyMs.

If you prefer scripting, the same Inspector ships a --cli mode for stdio servers; remote HTTP testing is easiest through the UI, or you can drive the JSON-RPC endpoint directly with curl once you have a session id from the initialize response.

Troubleshooting

A few things bite first-time builders. Each of these has caught me or someone I work with on a real MCP build in 2026.

Error: Invalid Host header — your client is sending a Host value that is not in allowedHosts. Either add the host to the list or, if you are running behind a load balancer that rewrites Host, set allowedHosts to the externally visible value. Do not disable enableDnsRebindingProtection; the November 2025 spec lists the rebinding mitigation as a SHOULD[19].

Error: audience claim check failed — jose's audience option compares against the JWT's aud claim. The MCP spec requires aud to be the resource URL the AS issued the token for. Confirm three things: (1) your client included resource=http://localhost:3333/mcp in the token request, (2) your AS supports RFC 8707, and (3) the value in your verifier matches the resource field in the PRM document character-for-character.
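To see why character-for-character matters, the comparison in miniature (an illustrative sketch mirroring what jose's audience option does; per RFC 7519 the aud claim may be a string or an array of strings):

```typescript
// A token passes the audience check only if aud contains the exact
// resource URL. Even a trailing slash is a mismatch.
function audienceMatches(
  aud: string | string[] | undefined,
  resource: string,
): boolean {
  if (aud === undefined) return false;
  return Array.isArray(aud) ? aud.includes(resource) : aud === resource;
}
```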

Cannot find module ... .js — TypeScript with module: NodeNext requires .js extensions on relative imports even from .ts files. This is a one-time learning tax; once your editor's autocompletion is configured, it disappears.

Tools list comes back empty — almost always a session issue. Check that the response of the initial POST /mcp includes an mcp-session-id header and that your client is echoing it on subsequent requests. The Inspector does this automatically; curl does not.

The handler throws "Forbidden: missing required scope" — the AS issued a token without the scopes you requested. Auth0 and Entra both gate scopes behind explicit consent and API registration; double-check that the scope is registered on the API resource and that the client app is allowed to request it.

MODULE_NOT_FOUND for @modelcontextprotocol/sdk/server/auth/... — common when copy-pasting older snippets. Imports must end with .js, not .ts or no extension. The SDK ships an ESM entry point and Node strictly enforces extensions for NodeNext resolution.

Next steps

You now have a runnable MCP server that an MCP client can authenticate to, that enforces RFC 8707 resource indicators, that publishes RFC 9728 protected resource metadata, that rejects DNS-rebinding attempts via Host-header allowlisting, and that emits structured logs you can pipe into any observability stack.

A few natural follow-ons:

  • Swap the in-memory tasks array for a real database. If you want a worked example of session-stateful Postgres patterns that pair well with Streamable HTTP, the Postgres LISTEN/NOTIFY realtime presence tutorial covers the long-lived connection lifecycle you want here.
  • Push the server out to the edge. The Cloudflare Workers R2 image CDN tutorial walks through Workers fundamentals that translate almost verbatim to running an MCP server on a Workers route — the SDK's StreamableHTTPServerTransport already accepts a stateless mode for serverless.
  • Run the server behind a pooled Postgres backend when you graduate from Map<string, transport> to multi-instance deployment. See Production Postgres pooling with PgBouncer and Supavisor for the connection-pool patterns that survive horizontal scale.
  • For conceptual background on what MCP is and where it sits in the agent stack, read the MCP servers explained overview.

Once the server is live, the next thing every team wants is a custom MCP client that can call it programmatically; that is a guide for another day, but the auth flow you just built is exactly what it will consume on the other side.

Footnotes

  1. Model Context Protocol Authorization specification (current draft tracking the November 25, 2025 revision). https://modelcontextprotocol.io/specification/draft/basic/authorization

  2. Node.js release schedule — Node 24 is the current Active LTS through October 2026; Node 22 is in Maintenance LTS through April 2027. https://nodejs.org/en/about/previous-releases

  3. @modelcontextprotocol/sdk 1.29.0 on the npm registry, verified May 14, 2026. https://www.npmjs.com/package/@modelcontextprotocol/sdk

  4. MCP TypeScript SDK server documentation — tool registration with Standard Schema input. https://github.com/modelcontextprotocol/typescript-sdk/blob/main/docs/server.md

  5. MCP Authorization spec — clients SHOULD request the minimum scope; servers MUST enforce. https://modelcontextprotocol.io/specification/draft/basic/authorization

  6. MCP transports specification — Streamable HTTP replaces SSE (2025-03-26). https://modelcontextprotocol.io/specification/2025-03-26/basic/transports

  7. @modelcontextprotocol/sdk 1.24.0 release notes — DNS rebinding protection added. https://github.com/modelcontextprotocol/typescript-sdk/releases

  8. RFC 9728 — OAuth 2.0 Protected Resource Metadata. https://datatracker.ietf.org/doc/html/rfc9728

  9. Auth0 — MCP specs update covering the June 18, 2025 and November 25, 2025 authorization revisions. https://auth0.com/blog/mcp-specs-update-all-about-auth/

  10. RFC 8414 — OAuth 2.0 Authorization Server Metadata. https://datatracker.ietf.org/doc/html/rfc8414

  11. RFC 7591 — OAuth 2.0 Dynamic Client Registration Protocol. https://datatracker.ietf.org/doc/html/rfc7591

  12. Aaron Parecki — Client Registration and Enterprise Management in the November 2025 MCP Authorization Spec. https://aaronparecki.com/2025/11/25/1/mcp-authorization-spec-update

  13. RFC 8707 — Resource Indicators for OAuth 2.0. https://datatracker.ietf.org/doc/html/rfc8707

  14. Den Delimarsky — What's new in the 2025-11-25 MCP authorization spec. https://den.dev/blog/mcp-november-authorization-spec/

  15. WorkOS — Resource indicators for MCP Auth (AuthKit). https://workos.com/changelog/resource-indicators-for-mcp-auth

  16. Auth0 — Resource Parameter Compatibility Profile for MCP. https://auth0.com/ai/docs/mcp/guides/resource-param-compatibility-profile

  17. Straiker — Agentic danger: DNS rebinding exposes internal MCP servers. https://www.straiker.ai/blog/agentic-danger-dns-rebinding-exposing-your-internal-mcp-servers

  18. MCP — Security best practices. https://modelcontextprotocol.io/docs/tutorials/security/security_best_practices

  19. MCP Authorization spec — DNS rebinding mitigations and Host-header validation. https://modelcontextprotocol.io/specification/draft/basic/authorization

