Postgres LISTEN/NOTIFY: Real-Time Presence Tutorial 2026

May 5, 2026

TL;DR

This tutorial walks through building a real-time "who's online" presence system on PostgreSQL 18.3[1] using LISTEN / NOTIFY and the pg 8.20.0 driver[2], with a thin Express 5.2.1[3] + ws 8.20.0[4] fanout layer that pushes presence changes to browsers over WebSockets. No Redis, no message broker, no third-party realtime service — just SQL plus about 120 lines of Node.js you can copy-paste-run in roughly twenty minutes on Docker Desktop or any Linux host.

What You'll Learn

A LISTEN / NOTIFY channel turns Postgres itself into a pub/sub bus: any session that issues NOTIFY my_channel, 'payload' (or calls pg_notify('my_channel','payload')) wakes every other session that ran LISTEN my_channel, with payloads delivered to listeners only after the publishing transaction commits[5]. (Duplicate notifies — same channel, same payload, same transaction — may be coalesced into a single delivery, so design payloads so duplicates within one transaction are intentional.) In this tutorial you will:

  • spin up Postgres 18 in Docker;
  • design a minimal presence table with a row-level trigger that publishes a JSON payload through pg_notify whenever a user's status flips;
  • write a reconnect-safe Node.js listener built on a dedicated pg client (not a pool — pools rotate connections, which silently breaks LISTEN);
  • bridge those notifications to browser clients via ws;
  • build a tiny HTML test page that shows live status changes; and
  • verify the whole thing end-to-end with two psql shells.

You'll also learn the hard limits of LISTEN / NOTIFY — the 8000-byte payload cap, the 8 GB notification-queue ceiling, and the single-global-queue serialization that makes very high notify rates a contention point — so you know when this pattern is the right tool and when you should reach for Redis or NATS instead.
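
That duplicate-coalescing rule is easy to misread, so here is a small model of the documented semantics (an illustration only, not the server's actual implementation):

```javascript
// Illustrative model of NOTIFY coalescing: within a single transaction,
// duplicate (channel, payload) pairs may be delivered only once.
function coalesce(notifies) {
  const seen = new Set();
  const delivered = [];
  for (const { channel, payload } of notifies) {
    const key = `${channel}\u0000${payload}`; // NUL cannot appear in either part
    if (seen.has(key)) continue;
    seen.add(key);
    delivered.push({ channel, payload });
  }
  return delivered;
}

// Two identical notifies collapse; a distinct payload survives separately.
const out = coalesce([
  { channel: 'presence_change', payload: '{"user_id":"alice"}' },
  { channel: 'presence_change', payload: '{"user_id":"alice"}' },
  { channel: 'presence_change', payload: '{"user_id":"bob"}' },
]);
// out.length === 2
```

This is why payloads carrying "the current state" (a status) are safer than payloads carrying "an increment" (a counter delta): losing a duplicate of the former is harmless.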

Prerequisites

Before you start, make sure your local environment matches these versions. Older Node and Postgres releases will run the same code, but a few APIs and Docker tags below assume up-to-date software.

  • Docker Engine 23.0+ (Docker Desktop 4.19+ on macOS or Windows, which bundles Moby 23.0[6]) so the official postgres:18-alpine image runs cleanly. Older Docker engines also work, but the 18-alpine and 18.3-alpine3.23 tags assume a recent client.
  • Node.js 24.15.0 (active LTS as of April 2026) or Node.js 22.x (maintenance LTS until April 2027)[7]. Express 5 requires Node 18 or newer[3], and pg 8.20.0 supports anything still in LTS.
  • A POSIX shell. Examples below assume bash or zsh on macOS or Linux. On Windows, run them inside WSL2.
  • curl and psql (the latter ships with the postgresql-client package on Debian/Ubuntu, the postgresql package on RHEL/Fedora, or via brew install libpq on macOS).
  • Roughly 200 MB of free disk space for the Postgres image and Node modules.

We pin to PostgreSQL 18.3, the most recent patch in the 18 series at the time of writing (released February 26, 2026 as part of the joint 18.3 / 17.9 / 16.13 / 15.17 / 14.22 update[1]). Postgres 18.0 added asynchronous I/O for sequential scans and vacuums[8]; LISTEN / NOTIFY itself works the same way it has since Postgres 9.0, so the code in this tutorial runs unchanged on Postgres 14 or later if you can't upgrade yet.

Step 1: Spin up Postgres 18

Create a fresh project directory and a docker-compose.yml that pins the exact patch version. Pinning to postgres:18.3-alpine3.23 (rather than postgres:18-alpine or postgres:latest) gives you reproducible builds across machines.

mkdir presence-demo && cd presence-demo
cat > docker-compose.yml <<'YAML'
services:
  db:
    image: postgres:18.3-alpine3.23
    environment:
      POSTGRES_PASSWORD: presence_dev_pw
      POSTGRES_DB: presence
    ports:
      - "5432:5432"
    volumes:
      - presence-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d presence"]
      interval: 2s
      timeout: 3s
      retries: 10
volumes:
  presence-data:
YAML
docker compose up -d

Wait for the container to become healthy (docker compose ps should show (healthy) next to db), then confirm the server is on 18.3:

docker compose exec db psql -U postgres -d presence -c "SELECT version();"

Expected output (truncated):

PostgreSQL 18.3 on x86_64-pc-linux-musl, compiled by gcc ...

If you see connection refused, the container hasn't finished initializing — the pg_isready healthcheck above is what most CI systems use to gate dependent services, and you can simulate it with until docker compose exec db pg_isready -U postgres; do sleep 1; done.

Step 2: Init the Node project and install pinned dependencies

npm init -y
npm install pg@8.20.0 express@5.2.1 ws@8.20.0
npm install --save-dev nodemon@3.1.14

Open package.json and add a start script plus a type: module flag so we can use plain import syntax without TypeScript:

{
  "name": "presence-demo",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "node server.js",
    "dev": "nodemon --watch . --ext js,html server.js"
  },
  "dependencies": {
    "express": "5.2.1",
    "pg": "8.20.0",
    "ws": "8.20.0"
  },
  "devDependencies": {
    "nodemon": "3.1.14"
  }
}

Save a .env (or just export it inline) so the connection string lives outside the code:

export DATABASE_URL='postgres://postgres:presence_dev_pw@localhost:5432/presence'

Step 3: Create the presence schema

Create schema.sql with a tiny presence table, a JSON-emitting trigger, and a single notification channel called presence_change:

-- schema.sql
CREATE TABLE IF NOT EXISTS presence (
  user_id    text PRIMARY KEY,
  status     text NOT NULL CHECK (status IN ('online', 'away', 'offline')),
  last_seen  timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION notify_presence_change()
RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
  payload jsonb;
BEGIN
  payload := jsonb_build_object(
    'user_id',   NEW.user_id,
    'status',    NEW.status,
    'last_seen', NEW.last_seen
  );
  -- pg_notify is the function form; payload must be < 8000 bytes total
  -- including the channel name. Keep payloads small or store a row id and
  -- have listeners SELECT the full record.
  PERFORM pg_notify('presence_change', payload::text);
  RETURN NEW;
END;
$$;

DROP TRIGGER IF EXISTS presence_change_trigger ON presence;
CREATE TRIGGER presence_change_trigger
  AFTER INSERT OR UPDATE OF status, last_seen ON presence
  FOR EACH ROW
  EXECUTE FUNCTION notify_presence_change();

Apply the schema:

docker compose exec -T db psql -U postgres -d presence < schema.sql

Expected output (the NOTICE only appears on the first run, when the trigger does not yet exist):

CREATE TABLE
CREATE FUNCTION
NOTICE:  trigger "presence_change_trigger" for relation "presence" does not exist, skipping
DROP TRIGGER
CREATE TRIGGER

The 8000-byte payload cap is documented behavior of LISTEN / NOTIFY: in the default configuration, pg_notify's second argument plus the channel name must be smaller than 8000 bytes total[5]. For presence data that is fine, but if you ever want to push, say, a full chat message body, the recommended pattern is to write the row to a table and NOTIFY only the row id — listeners then SELECT the body separately.
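
On the listener side, that row-id pattern might look like the sketch below. The chat_messages table and the pool argument are hypothetical here (the trigger side would run something like PERFORM pg_notify('chat_message', NEW.id::text)):

```javascript
// Hypothetical row-id pattern: the NOTIFY payload carries only a primary key,
// and the listener hydrates the full row with a parameterized SELECT.
function parseRowId(raw) {
  const id = Number.parseInt(raw, 10);
  // Reject anything that is not exactly a positive integer string.
  if (!Number.isInteger(id) || id < 1 || String(id) !== raw.trim()) {
    throw new Error(`bad row-id payload: ${raw}`);
  }
  return id;
}

// `pool` is assumed to be a pg Pool created elsewhere. The payload only ever
// reaches SQL through the $1 parameter, never string interpolation.
async function hydrate(pool, rawPayload) {
  const id = parseRowId(rawPayload);
  const { rows } = await pool.query(
    'SELECT * FROM chat_messages WHERE id = $1',
    [id]
  );
  return rows[0] ?? null; // the row may have been deleted since the NOTIFY
}
```

Because the listener re-reads the row at delivery time, it always sees the latest committed version, which also makes the coalescing of duplicate notifies harmless.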

Step 4: Build a reconnect-safe listener

Create listener.js. The core API surface from pg is small: open a dedicated Client (not a Pool, because pools recycle connections and the moment a LISTEN connection is recycled you silently stop receiving notifications), issue LISTEN, then attach a handler to the client's 'notification' event[9].

// listener.js
import pg from 'pg';

const { Client } = pg;

// Postgres unquoted identifiers must start with a letter or underscore and
// contain only letters, digits, or underscores. We validate the channel name
// up front so we can safely interpolate it into the LISTEN command — channel
// names are SQL identifiers, not values, so they cannot be parameterized
// through $1.
const SAFE_CHANNEL = /^[a-z_][a-z0-9_]{0,62}$/;

export function createListener({ connectionString, channel, onNotify }) {
  if (!SAFE_CHANNEL.test(channel)) {
    throw new Error(`unsafe channel name: ${channel}`);
  }
  let client;
  let stopped = false;
  let backoffMs = 500;

  async function connect() {
    if (stopped) return;
    client = new Client({ connectionString });
    client.on('error', handleError);
    client.on('notification', (msg) => {
      // msg = { processId, channel, payload }
      if (!msg.payload) return;
      try {
        onNotify(JSON.parse(msg.payload));
      } catch (err) {
        console.error('presence: bad payload', msg.payload, err);
      }
    });
    try {
      await client.connect();
      await client.query(`LISTEN ${channel}`);
      console.log(`presence: listening on "${channel}"`);
      backoffMs = 500; // reset after a healthy connect
    } catch (err) {
      handleError(err);
    }
  }

  function handleError(err) {
    console.error('presence: client error, reconnecting', err.code || err.message);
    if (client) {
      client.removeAllListeners();
      client.end().catch(() => {});
      client = null;
    }
    if (stopped) return;
    setTimeout(connect, backoffMs);
    backoffMs = Math.min(backoffMs * 2, 15_000);
  }

  function stop() {
    stopped = true;
    if (client) client.end().catch(() => {});
  }

  connect();
  return { stop };
}

Three things worth noting in this listener:

  1. The SAFE_CHANNEL regex — Postgres channel names are SQL identifiers, not values, so they cannot be parameterized through $1. The cleanest defense is to constrain channel names to the unquoted-identifier alphabet up front and refuse anything else; the regex caps total length at 63 characters (one initial letter or underscore plus up to 62 follow-on characters), matching Postgres's 63-byte NAMEDATALEN-derived identifier limit[10] — anything longer would be silently truncated by the server.
  2. Exponential backoff with a 15-second ceiling — when the database restarts during a deploy, every listener will attempt to reconnect at once. The cap prevents a thundering herd, and resetting backoffMs after a healthy connection prevents permanent throttling after a single bad night.
  3. Re-issuing LISTEN on every reconnect — LISTEN registers against the current backend session, so a dropped connection forgets all subscriptions; you must re-LISTEN after every reconnect, otherwise notifications go silently into the bit bucket. This is the single most common bug in production LISTEN / NOTIFY setups[11].
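
If you want to convince yourself the channel-name guard behaves as point 1 describes, a few quick checks (this is the same regex as listener.js):

```javascript
// One leading lowercase letter or underscore, then up to 62 more letters,
// digits, or underscores: 63 characters total, matching the server's
// identifier limit.
const SAFE_CHANNEL = /^[a-z_][a-z0-9_]{0,62}$/;

console.log(SAFE_CHANNEL.test('presence_change'));  // true
console.log(SAFE_CHANNEL.test('1bad'));             // false: leading digit
console.log(SAFE_CHANNEL.test('Presence'));         // false: uppercase
console.log(SAFE_CHANNEL.test('x'.repeat(63)));     // true: exactly at the limit
console.log(SAFE_CHANNEL.test('x'.repeat(64)));     // false: would be truncated
console.log(SAFE_CHANNEL.test('c; DROP TABLE x')); // false: not an identifier
```

Anything the regex rejects never reaches the interpolated LISTEN statement, which closes the one injection path this module has.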

Step 5: Bridge to browsers with Express + ws

Create server.js that wires the listener to a WebSocket server:

// server.js
import express from 'express';
import { WebSocketServer } from 'ws';
import { createServer } from 'node:http';
import { createListener } from './listener.js';

const app = express();
app.use(express.static('public'));
app.use(express.json());

const httpServer = createServer(app);
const wss = new WebSocketServer({ server: httpServer, path: '/ws' });

wss.on('connection', (ws) => {
  ws.on('error', (err) => console.error('ws error', err));
});

function broadcast(event) {
  const data = JSON.stringify(event);
  for (const client of wss.clients) {
    if (client.readyState === 1 /* OPEN */) client.send(data);
  }
}

createListener({
  connectionString: process.env.DATABASE_URL,
  channel: 'presence_change',
  onNotify: (event) => broadcast(event),
});

const PORT = Number(process.env.PORT || 3000);
httpServer.listen(PORT, () => {
  console.log(`presence: http on http://localhost:${PORT}`);
});

Express 5 (the default latest tag on npm since March 2025[12]) is a thin wrapper here — the only thing it does is serve a static public/ directory. The work happens in ws: every Postgres notification is fanned out to every connected browser. For a serious production deployment you would scope notifications to authenticated users; for this demo, every browser sees every status change.
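
As a hedged sketch of what per-user scoping could look like, assume each socket gets an allowedUserIds set attached during authentication (a hypothetical field this demo does not implement, not part of ws itself):

```javascript
const OPEN = 1; // WebSocket readyState for an open connection

// Only push a presence event to sockets whose session is allowed to see
// that user. `allowedUserIds` is hypothetical app state set at auth time.
function broadcastScoped(clients, event) {
  const data = JSON.stringify(event);
  let delivered = 0;
  for (const client of clients) {
    if (client.readyState !== OPEN) continue;
    if (!client.allowedUserIds?.has(event.user_id)) continue;
    client.send(data);
    delivered++;
  }
  return delivered;
}
```

Wiring it in would mean replacing the broadcast(event) call with broadcastScoped(wss.clients, event); the function accepts any iterable of socket-like objects, which also makes it easy to unit-test with fakes.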

Step 6: Browser client

Create public/index.html:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Presence demo</title>
  <style>
    body { font-family: -apple-system, sans-serif; margin: 2rem; }
    li { padding: 0.25rem 0; }
    .online  { color: #15803d; }
    .away    { color: #ca8a04; }
    .offline { color: #6b7280; text-decoration: line-through; }
  </style>
</head>
<body>
  <h1>Live presence</h1>
  <p>Open two terminals and run the SQL in step 8 — the list below updates in real time.</p>
  <ul id="users"></ul>
  <script>
    const users = new Map();
    const list = document.getElementById('users');

    function render() {
      list.innerHTML = '';
      for (const [id, { status, last_seen }] of users) {
        const li = document.createElement('li');
        li.className = status;
        li.textContent = `${id}: ${status} (last seen ${last_seen})`;
        list.appendChild(li);
      }
    }

    // Use wss:// when the page itself is served over HTTPS.
    const proto = location.protocol === 'https:' ? 'wss' : 'ws';
    const ws = new WebSocket(`${proto}://${location.host}/ws`);
    ws.addEventListener('message', (event) => {
      const e = JSON.parse(event.data);
      users.set(e.user_id, { status: e.status, last_seen: e.last_seen });
      render();
    });
    ws.addEventListener('close', () => {
      // Trivial reconnect: reload after 1s. In production, do exponential backoff
      // on the client too, and resync state from a REST endpoint after reconnect.
      setTimeout(() => location.reload(), 1000);
    });
  </script>
</body>
</html>

This is intentionally bare-bones — no framework, no bundler. The point is to show the end-to-end pipe in as few moving parts as possible.
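
The "exponential backoff on the client too" that the reconnect comment alludes to can be captured in one pure function. The numbers below are this tutorial's choice, mirroring the server-side listener, not anything ws or the browser prescribes:

```javascript
// Delay before reconnect attempt N: 500 ms, 1 s, 2 s, ... capped at 15 s.
function backoffDelay(attempt, baseMs = 500, capMs = 15_000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// A reconnecting client would schedule the next attempt like:
//   setTimeout(openSocket, backoffDelay(attempt++));
// and reset `attempt` to 0 once a connection succeeds.
```

Keeping the schedule pure makes it trivial to test, and the same function works unchanged on the server side.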

Step 7: Run it

In one terminal:

npm start

Expected output:

presence: http on http://localhost:3000
presence: listening on "presence_change"

Open http://localhost:3000 in your browser. The list will be empty until you write a row in the next step.

Step 8: Verify end-to-end

Open a second terminal and run a few presence updates against the database:

docker compose exec db psql -U postgres -d presence

Then in the psql shell:

INSERT INTO presence (user_id, status) VALUES ('alice', 'online');
INSERT INTO presence (user_id, status) VALUES ('bob',   'online');
UPDATE presence SET status = 'away'    WHERE user_id = 'alice';
UPDATE presence SET status = 'offline' WHERE user_id = 'bob';

Each statement should appear in the browser instantly. The server.js console will not print anything new (we only log on connect / error), but you can verify the channel is alive directly in psql by listening yourself:

LISTEN presence_change;
NOTIFY presence_change, '{"user_id":"carol","status":"online","last_seen":"now"}';

psql will print:

Asynchronous notification "presence_change" with payload "{...}" received from server process with PID NNNN.

For a load test, a small pgbench custom script will generate notifications at sustainable rates, but be aware of the 8 GB notification queue limit — the queue is stored on disk under the cluster's pg_notify/ directory with active pages mapped into shared memory via the SLRU subsystem, and once it fills, transactions issuing NOTIFY will fail at commit until listeners drain the backlog. You can monitor queue health with:

SELECT pg_notification_queue_usage();  -- 0.0 = empty, 1.0 = full

The function returns the fraction of the notification queue currently occupied[5]. Anything consistently above ~0.5 is a sign you have a slow or dead listener.
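
A small monitoring helper built on that function might look like this. The 0.5 and 0.9 thresholds are this tutorial's suggestion, not a Postgres recommendation:

```javascript
// Classify pg_notification_queue_usage() output into alert levels.
function classifyQueueUsage(fraction) {
  if (fraction >= 0.9) return 'critical'; // near the 8 GB ceiling
  if (fraction >= 0.5) return 'warn';     // likely a slow or dead listener
  return 'ok';
}

// Poll with any connected pg Client (not shown: scheduling this on a timer).
async function checkQueue(client) {
  const { rows } = await client.query(
    'SELECT pg_notification_queue_usage() AS usage'
  );
  const usage = Number(rows[0].usage);
  return { usage, level: classifyQueueUsage(usage) };
}
```

Alerting on 'warn' gives you time to restart a stuck consumer before NOTIFY-issuing transactions start failing at commit.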

Step 9: When LISTEN/NOTIFY is the right tool — and when it isn't

LISTEN / NOTIFY is excellent for low-to-medium-rate event fanout where you want exactly the same transactional guarantees as the rest of your data: presence indicators, cache invalidation, "new comment" pings, admin dashboards, slow-job-finished signals. It loses to a dedicated broker like Redis Pub/Sub or NATS in three concrete ways:

  1. NOTIFY takes a globally exclusive lock during commit. Recall.ai's investigation of LISTEN/NOTIFY-induced outages traces this to a single global mutex acquired by the notification subsystem during commit, which effectively serializes commits across the entire cluster when many concurrent writers are issuing NOTIFY[13]. For low-to-medium notify rates this is invisible; under heavy concurrent writes it becomes the bottleneck.
  2. There is no consumer group / replay. A listener that disconnects misses everything that happened while it was gone. If durability matters, write to a table and NOTIFY only a row id, and have listeners reconcile with a SELECT after reconnect. This is the same pattern PgQ and Skytools have used for two decades.
  3. The 8000-byte payload cap is hard. It's not configurable at runtime; it's defined by NOTIFY_PAYLOAD_MAX_LENGTH in the server source. If your messages can ever be larger, design around it from day one.
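
The "reconcile with a SELECT after reconnect" step from point 2 can be sketched as a newest-wins merge of a snapshot query with any events buffered while the resync ran. Field names follow this tutorial's presence table:

```javascript
// Merge a fresh SELECT snapshot with events that arrived during resync.
// Later last_seen wins, so a buffered update beats a stale snapshot row.
function reconcile(snapshotRows, bufferedEvents) {
  const state = new Map();
  for (const row of snapshotRows) state.set(row.user_id, row);
  for (const ev of bufferedEvents) {
    const current = state.get(ev.user_id);
    if (!current || new Date(ev.last_seen) >= new Date(current.last_seen)) {
      state.set(ev.user_id, ev);
    }
  }
  return state; // Map of user_id -> latest known presence record
}
```

The listener's reconnect path would run the snapshot SELECT first, buffer notifications that arrive meanwhile, then call reconcile once and broadcast the merged state.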

For chatrooms with hundreds of thousands of subscribers and per-channel scaling, reach for a dedicated broker. For presence, audit logs, slow-job done events, and most "I just want my admin panel to update" use cases, LISTEN / NOTIFY keeps your stack one box smaller and gives you transactional consistency for free.

Troubleshooting

  • "My listener stops getting events after a deploy." You are almost certainly using a Pool instead of a dedicated Client. Pools acquire a fresh connection per query and the connection that issued LISTEN is released as soon as the query returns, so notifications go to a connection nothing is reading. Use new Client(...) exclusively for the listener[9].
  • "Notifications are bunched up at the end of a slow transaction." That is the documented behavior — events from a transaction are delivered only at COMMIT, never mid-transaction[5]. If you need progress signals during a long-running transaction, run the work outside a transaction or in autonomous-transaction-style background jobs.
  • "Some payloads disappear silently." Check pg_notification_queue_usage() — when it approaches 1.0 the queue is full and Postgres will refuse to commit further transactions that issue NOTIFY until consumers catch up. The fix is faster consumers, not a bigger queue.
  • "I'm getting payload string too long errors." Your channel name plus payload exceed 8000 bytes. Move the body to a row in a table and notify only the primary key.
  • "My JSON payload is rejected when it contains emoji." Postgres counts the 8000-byte limit in bytes, not characters. UTF-8 emoji are 4 bytes each, so a 2000-character payload of pure emoji is already 8000 bytes and fails with payload string too long (Postgres rejects oversize payloads outright; it never truncates them). Either constrain payload length to ~1500 characters to leave headroom, or follow the row-id pattern above.
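
A pre-flight check on the sending side catches both of the last two failure modes before the NOTIFY is issued. NOTIFY_LIMIT mirrors the documented default cap (channel name plus payload must stay under 8000 bytes):

```javascript
// The cap counts UTF-8 bytes of channel name plus payload, not characters.
const NOTIFY_LIMIT = 8000;

function fitsNotifyLimit(channel, payload) {
  const bytes =
    Buffer.byteLength(channel, 'utf8') + Buffer.byteLength(payload, 'utf8');
  return bytes < NOTIFY_LIMIT;
}

console.log(fitsNotifyLimit('presence_change', 'x'.repeat(7000)));   // true
console.log(fitsNotifyLimit('presence_change', '😀'.repeat(2000))); // false
```

Run this check before calling pg_notify from application code, and fall back to the row-id pattern whenever it returns false.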

Next steps and further reading

If you enjoyed this pattern, the robust database architectures guide on NerdLevelTech covers the broader trade-off space between in-database pub/sub, message brokers, and CDC pipelines, and the Redis caching patterns guide walks through the closest like-for-like alternative when LISTEN / NOTIFY is no longer the right tool. The reconnect logic from Step 4 is the same shape you'd write for any long-lived database connection. For Postgres 18 itself, the official PG 18 release announcement is the best summary of the asynchronous I/O subsystem and its pg_aios system view[14].

The single most common production mistake with LISTEN / NOTIFY is forgetting to re-LISTEN after a reconnect; the second is using a Pool instead of a dedicated Client. If your code in Step 4 looks like the version above, you have already avoided both.

Footnotes

  1. PostgreSQL 18.3, 17.9, 16.13, 15.17, and 14.22 release announcement, February 26, 2026. https://www.postgresql.org/about/news/postgresql-183-179-1613-1517-and-1422-released-3246/

  2. pg 8.20.0 on npm. https://www.npmjs.com/package/pg

  3. Express 5.2.1 on npm. https://www.npmjs.com/package/express. Express 5 announcement and Node 18+ requirement: https://expressjs.com/2025/03/31/v5-1-latest-release.html

  4. ws 8.20.0 on npm. https://www.npmjs.com/package/ws

  5. PostgreSQL 18 documentation, NOTIFY: payload size limit (8000 bytes including channel name), 8 GB queue, pg_notification_queue_usage(), transactional delivery semantics. https://www.postgresql.org/docs/current/sql-notify.html

  6. Docker Desktop 4.19 release notes (Moby 23.0 engine update). https://www.docker.com/blog/docker-desktop-4-19/

  7. Node.js release schedule (Node 24 active LTS since October 2025; Node 22 maintenance LTS through April 2027). https://nodejs.org/en/about/previous-releases

  8. PostgreSQL 18 release announcement (September 25, 2025): asynchronous I/O subsystem, io_method, pg_aios view. https://www.postgresql.org/about/news/postgresql-18-released-3142/

  9. node-postgres Client API (events: notification, error, end). https://node-postgres.com/apis/client

  10. PostgreSQL 18 lexical structure: NAMEDATALEN defaults to 64, so identifiers are at most 63 bytes; longer names are silently truncated. https://www.postgresql.org/docs/current/sql-syntax-lexical.html

  11. andywer/pg-listen README, "Why?" section — documents the reconnect-and-re-LISTEN failure mode that motivated the library. https://github.com/andywer/pg-listen

  12. Express 5.1.0 release post (Express 5 made the default latest on npm in March 2025). https://expressjs.com/2025/03/31/v5-1-latest-release.html

  13. Recall.ai engineering blog, "Postgres LISTEN/NOTIFY does not scale" — describes the single global queue and contention behavior under heavy notify traffic. https://www.recall.ai/blog/postgres-listen-notify-does-not-scale

  14. PostgreSQL 18 asynchronous I/O feature matrix entry. https://www.postgresql.org/about/featurematrix/detail/asynchronous-io-aio/

