Postgres 18 Zero-Downtime Upgrade: pg_createsubscriber 2026

May 13, 2026

To upgrade Postgres 17 to 18 without downtime, run pg_createsubscriber on a physical standby to convert it into a logical subscriber, then pg_upgrade that subscriber to Postgres 18.3 while the old primary keeps taking writes. Logical replication catches the new cluster up, you sync sequences manually, and traffic cuts over in seconds instead of minutes.[1][2]

The classic pg_upgrade path is fast — usually a few minutes for a small cluster — but it still requires the old cluster to be shut down while it runs.[3] For an application that pages support at the first 30-second blip, "a few minutes" is not zero. PostgreSQL 18.3 (released February 26, 2026)[4] makes the zero-downtime path dramatically easier than it used to be by polishing pg_createsubscriber, the binary that converts a physical streaming standby into a logical subscriber in one shot. This tutorial walks through the full flow with Docker Compose, ending on a Postgres 18.3 cluster that started life as a Postgres 17.9 primary.

TL;DR

You will run a Postgres 17.9 primary in Docker, attach a streaming physical standby, convert that standby into a logical subscriber with pg_createsubscriber --all (a flag new in PG 18[5]), pg_upgrade it to Postgres 18.3, sync sequences, and cut traffic over. Total write downtime: a single connection switch, typically under one second. Every command and version pin in this tutorial was verified against postgresql.org docs and Docker Hub on May 13, 2026.

What you'll learn

  • Why pg_upgrade on its own can't deliver zero downtime, and what the pg_createsubscriber + pg_upgrade combination changes.
  • How to configure Postgres 17.9 for logical replication and add a streaming physical standby in Docker.
  • How to run pg_createsubscriber --all to convert that standby into a logical subscriber, including the GUC prerequisites.
  • How to pg_upgrade the new subscriber to Postgres 18.3 while the publisher keeps serving writes.
  • How to handle sequences — the one piece of state logical replication does not carry — without colliding with the publisher's IDs.
  • How to verify replication is caught up via pg_subscription_rel, then perform the cutover.
  • Five common failure modes and a clean rollback plan.

Prerequisites

  • Docker 27.0 or later with Compose v2 (bundled in Docker Desktop and modern Docker Engine packages).[6]
  • A POSIX shell (bash or zsh).
  • Comfort with SQL and Postgres replication terms (publisher, subscriber, replication slot).
  • Free TCP ports 5532, 5533, and 5534 on localhost. Ports are deliberately offset from 5432 to avoid collisions with a developer's existing Postgres.
  • Roughly 2 GB of free disk for the Docker volumes used in this walkthrough.

Why pg_upgrade alone leaves a downtime gap

pg_upgrade rewrites system catalogs and (with --link) hardlinks data files into the new cluster, which is why it is fast.[3] But the old cluster has to be stopped before pg_upgrade runs, and the new cluster is not started until pg_upgrade finishes. During that window — minutes for a small database, much longer for a large one — every write is rejected. Add the time to drain connections, run ANALYZE on the new cluster, and warm caches, and "a few minutes" frequently turns into a real maintenance window.

Logical replication sidesteps the problem because publisher and subscriber may run different major versions at the same time.[7] You build a Postgres 18.3 replica that follows your Postgres 17.9 primary, wait for it to catch up, then cut over. The only downtime is the connection switch.

The pg_createsubscriber and pg_upgrade workflow

Pre-PG 17, the zero-downtime upgrade dance involved pg_dump --schema-only, manual CREATE PUBLICATION and CREATE SUBSCRIPTION statements, and waiting for the initial COPY to finish — a process that took hours on a multi-terabyte database because every row had to be re-copied across the wire. PostgreSQL 17 introduced pg_createsubscriber to skip that copy step entirely: it takes an existing physical streaming standby (already a byte-for-byte clone of the primary) and converts it into a logical subscriber by stopping it, recording an LSN, creating publications and subscriptions, and setting the subscriptions' start position to that LSN.[8] The binary does not copy any data; it only flips the replication mode.

PostgreSQL 18 added three useful flags on top: --all (handle every user database in one invocation), --clean (drop publications carried over from a previous run), and --enable-two-phase (turn on 2PC support).[5] The --all flag is the one that matters most for upgrades, because production clusters rarely have a single database.

Combine pg_createsubscriber with pg_upgrade and the path looks like this:

  1. Stand up a streaming physical standby of the existing PG 17 primary.
  2. Stop the standby cleanly.
  3. Run pg_createsubscriber --all against the standby's data directory using PG 17 binaries.
  4. Run pg_upgrade on the standby's data directory to convert PG 17 → PG 18.3.
  5. Start the new PG 18.3 instance; logical replication resumes from the recorded LSN and catches up everything the publisher accepted while the upgrade ran.
  6. Sync sequences, cut traffic over, drop the publications on the old primary.

Step 1: Bring up the Postgres 17.9 publisher

Create a project directory and drop in a docker-compose.yml:

# docker-compose.yml
name: pg18-zdt
services:
  publisher:
    image: postgres:17.9
    container_name: pg17-publisher
    ports:
      - "5532:5432"
    environment:
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: shop
    volumes:
      - publisher_data:/var/lib/postgresql/data
    command:
      - "postgres"
      - "-c"
      - "wal_level=logical"
      - "-c"
      - "max_wal_senders=10"
      - "-c"
      - "max_replication_slots=10"
      - "-c"
      - "listen_addresses=*"

volumes:
  publisher_data:

Three settings matter:

  • wal_level=logical writes the extra information the logical decoder needs.[9] In Postgres 18 you can leave the configured wal_level at replica and the effective level auto-promotes the moment the first logical slot is created;[10] for PG 17 you still need to set it explicitly and restart, which is why we hard-code it here.
  • max_wal_senders=10 and max_replication_slots=10 are the defaults but worth being explicit about. pg_createsubscriber needs at least one slot per database, plus headroom for any existing physical replicas.[11]

Bring it up and seed a tiny data set:

docker compose up -d publisher
docker exec -i pg17-publisher psql -U postgres -d shop <<'SQL'
CREATE TABLE products (
  id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  sku       text NOT NULL UNIQUE,
  price_eur numeric(10,2) NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);
INSERT INTO products (sku, price_eur) VALUES
  ('NRD-001', 19.99),
  ('NRD-002', 29.50),
  ('NRD-003', 9.00);

CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replpw';
SQL
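With the publisher up, it's worth confirming the replication GUCs took effect. A quick check over port 5532:

```sql
-- All three should report the values passed on the command line
SELECT name, setting
FROM pg_settings
WHERE name IN ('wal_level', 'max_wal_senders', 'max_replication_slots');
```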

Open pg_hba.conf to allow the replication connection from the standby and reload:

docker exec pg17-publisher bash -c "echo 'host replication replicator 0.0.0.0/0 scram-sha-256' >> /var/lib/postgresql/data/pg_hba.conf"
docker exec pg17-publisher psql -U postgres -c "SELECT pg_reload_conf();"

In production you would scope that CIDR to the standby's actual address. For a localhost demo, 0.0.0.0/0 plus a SCRAM password is fine.
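For reference, a production-scoped pg_hba.conf entry might look like this (10.0.1.7 is a hypothetical standby address; substitute your own):

```
# pg_hba.conf: allow replication only from the standby's address
host  replication  replicator  10.0.1.7/32  scram-sha-256
```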

Step 2: Add a streaming physical standby

Run pg_basebackup from a fresh PG 17.9 container into a named Docker volume, then start a second Postgres on that volume:

docker volume create pg18-zdt_standby_data
docker run --rm --network pg18-zdt_default \
  -v pg18-zdt_standby_data:/var/lib/postgresql/data \
  -e PGPASSWORD=replpw \
  postgres:17.9 \
  pg_basebackup -h publisher -p 5432 -U replicator \
    -D /var/lib/postgresql/data \
    -Fp -R -X stream -P

The flags do meaningful work:

  • -Fp writes a plain-format directory tree (required by pg_createsubscriber, which works on a real PGDATA).[12]
  • -R writes standby.signal and a primary_conninfo line into postgresql.auto.conf, so the recipient comes up as a streaming standby with no extra config.[12]
  • -X stream streams WAL alongside the base backup so the data directory is consistent the moment the command exits.

Now start the standby on its own port:

  standby:
    image: postgres:17.9
    container_name: pg17-standby
    ports:
      - "5533:5432"
    volumes:
      - standby_data:/var/lib/postgresql/data
    command: ["postgres", "-c", "listen_addresses=*"]

volumes:
  publisher_data:
  standby_data:
    external: true
    name: pg18-zdt_standby_data

Append the standby service and the standby_data volume entry above to docker-compose.yml, then run docker compose up -d standby. Confirm streaming replication is healthy from the publisher:

docker exec pg17-publisher psql -U postgres -c \
  "SELECT application_name, state, sync_state FROM pg_stat_replication;"

You should see one row in streaming state. Insert a row on the publisher and read it back from port 5533 to confirm replay is live.
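A quick round-trip check, assuming psql sessions against port 5532 (the publisher) and port 5533 (the standby):

```sql
-- On the publisher (port 5532): write a probe row
INSERT INTO products (sku, price_eur) VALUES ('NRD-PROBE', 0.01);

-- On the standby (port 5533): the row should appear almost immediately
SELECT sku, price_eur FROM products WHERE sku = 'NRD-PROBE';
```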

Step 3: Convert the standby with pg_createsubscriber --all

pg_createsubscriber requires the target server to be shut down before it runs.[1] Stop the standby cleanly, then invoke the binary using a one-shot PG 17.9 container that mounts the same data volume:

docker compose stop standby

docker run --rm --network pg18-zdt_default \
  -v pg18-zdt_standby_data:/var/lib/postgresql/data \
  -e PGPASSWORD=dev \
  postgres:17.9 \
  pg_createsubscriber --all \
    -D /var/lib/postgresql/data \
    -P "host=publisher port=5432 user=postgres dbname=postgres" \
    -p 5432 \
    --verbose

--all tells the binary to discover every user database on the standby and create a publication/subscription pair for each one — the PG 18 ergonomic addition that replaces hand-listing each database with -d.[5] The slot names follow the documented pg_createsubscriber_<DB_OID>_<random_int> convention so they don't collide with any slots you already have.[13]

Behind the scenes the binary does six things:[1]

  1. Verifies prerequisites: the source must have wal_level = logical, max_replication_slots and max_wal_senders high enough, and the target must have max_active_replication_origins, max_logical_replication_workers, and max_worker_processes high enough.
  2. Creates a FOR ALL TABLES publication on the source for each database.
  3. Creates a logical replication slot on the source for each database.
  4. Writes recovery parameters into the target's data dir setting recovery_target_lsn and recovery_target_action = promote.
  5. Restarts the target briefly so it replays up to that LSN and promotes.
  6. Creates the matching subscription on the target — without copying initial data, because the physical replica already has it.
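For reference, the recovery settings from step 4 land in the target's postgresql.auto.conf and look roughly like this (the LSN value is illustrative):

```
# postgresql.auto.conf (written by pg_createsubscriber; LSN illustrative)
recovery_target_lsn = '0/3000148'
recovery_target_action = 'promote'
```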

When the binary exits, restart the standby container:

docker compose start standby
docker exec pg17-standby psql -U postgres -d shop \
  -c "SELECT subname, subenabled FROM pg_subscription;"

You should see pg_createsubscriber_<oid>_<rand> listed, with subenabled = t. The standby is now a Postgres 17.9 logical subscriber.
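You can also inspect the publisher side; the conversion left one logical replication slot per converted database there:

```sql
-- On the publisher: one logical slot per converted database, plugin pgoutput
SELECT slot_name, plugin, slot_type, active
FROM pg_replication_slots
WHERE slot_type = 'logical';
```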

Step 4: pg_upgrade the standby to Postgres 18.3

The next step is a standard pg_upgrade from PG 17.9 to PG 18.3, with a critical pre-step: disable subscriptions on the subscriber before running pg_upgrade, per the official upgrade docs.[14] If you skip this step, the upgrade refuses to migrate the subscription state.

docker exec pg17-standby psql -U postgres -d shop \
  -c "ALTER SUBSCRIPTION pg_createsubscriber_5_<rand> DISABLE;"
docker compose stop standby

Replace <rand> with the actual suffix from pg_subscription. With --all, you'll have one subscription per user database — disable each one.
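With several databases, generating the statements beats typing them. A small helper using psql's \gexec, which executes each row the query returns as a new statement (run it in every database that owns a subscription):

```sql
-- On the standby: generate and immediately execute one DISABLE per
-- subscription owned by the current database
SELECT format('ALTER SUBSCRIPTION %I DISABLE', subname)
FROM pg_subscription
WHERE subdbid = (SELECT oid FROM pg_database
                 WHERE datname = current_database())
\gexec
```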

pg_upgrade requires both the old and the new PostgreSQL binaries to be available on the same host. Neither the official postgres:17.9 nor postgres:18.3 Docker image ships both versions, so we build a tiny dual-version image. Drop this Dockerfile.upgrade next to the compose file:

# Dockerfile.upgrade
FROM postgres:18.3-bookworm
RUN apt-get update \
 && apt-get install -y --no-install-recommends postgresql-17 \
 && rm -rf /var/lib/apt/lists/*

The PGDG repository configured inside the official postgres:18.3-bookworm image already serves every supported major version,[15] so adding the postgresql-17 package gives us /usr/lib/postgresql/17/bin/ alongside /usr/lib/postgresql/18/bin/ in the same image. Build it once:

docker build -t pg-upgrade:17-to-18 -f Dockerfile.upgrade .

Now run pg_upgrade against the standby's data directory. Note that PostgreSQL 18 changed the default PGDATA location for its Docker image — it now lives at /var/lib/postgresql/18/docker so that future major-version upgrades can mount one shared volume.[16] We follow the same convention here, telling pg_upgrade to write the new cluster into /var/lib/postgresql/18/docker. The script runs as root so it can chown the freshly mounted volume, then drops to the postgres user via gosu (already installed in the official image) for the actual Postgres operations:

docker volume create pg18-zdt_subscriber_v18

docker run --rm --network pg18-zdt_default \
  -v pg18-zdt_standby_data:/var/lib/postgresql/data \
  -v pg18-zdt_subscriber_v18:/var/lib/postgresql/18 \
  pg-upgrade:17-to-18 bash -c '
    chown -R postgres:postgres /var/lib/postgresql/18 &&
    gosu postgres /usr/lib/postgresql/18/bin/initdb \
      -D /var/lib/postgresql/18/docker \
      -U postgres --auth-local=trust --auth-host=scram-sha-256 &&
    printf "wal_level = logical\nmax_replication_slots = 10\nmax_active_replication_origins = 10\n" \
      | gosu postgres tee -a /var/lib/postgresql/18/docker/postgresql.conf &&
    cd /tmp &&
    gosu postgres /usr/lib/postgresql/18/bin/pg_upgrade \
      --old-bindir=/usr/lib/postgresql/17/bin \
      --new-bindir=/usr/lib/postgresql/18/bin \
      --old-datadir=/var/lib/postgresql/data \
      --new-datadir=/var/lib/postgresql/18/docker \
      --link
  '

Three things to notice:

  • --link hardlinks the data files into the new cluster, which is why pg_upgrade is fast even on large clusters. You cannot start the old PG 17 cluster after a --link upgrade completes, so the standby is now permanently a Postgres 18 instance.
  • The printf | tee line writes the three GUCs that pg_upgrade checks before it will migrate logical slots — the new cluster needs wal_level = logical and max_replication_slots at least as large as the old cluster's slot count.[14]
  • The publisher keeps accepting writes the entire time pg_upgrade runs — those changes will replay through the existing logical slot the moment the upgraded cluster comes up.

Add the upgraded cluster to your compose file. Set PGDATA explicitly so the official entrypoint discovers the data directory at the new PG 18 location, and pin the GUCs that the upgrade flagged:

  subscriber_v18:
    image: postgres:18.3
    container_name: pg18-subscriber
    ports:
      - "5534:5432"
    environment:
      PGDATA: /var/lib/postgresql/18/docker
    volumes:
      - subscriber_v18:/var/lib/postgresql/18
    command:
      - "postgres"
      - "-c"
      - "wal_level=logical"
      - "-c"
      - "max_replication_slots=10"
      - "-c"
      - "max_active_replication_origins=10"
      - "-c"
      - "listen_addresses=*"

volumes:
  publisher_data:
  standby_data:
    external: true
    name: pg18-zdt_standby_data
  subscriber_v18:
    external: true
    name: pg18-zdt_subscriber_v18

max_active_replication_origins is a new Postgres 18 GUC — split out from max_replication_slots so subscribers can size their replication-origin tracking independently. It defaults to 10 and can only be set at server start.[17] Bring the cluster up and re-enable the subscription:

docker compose up -d subscriber_v18
docker exec pg18-subscriber psql -U postgres -d shop \
  -c "ALTER SUBSCRIPTION pg_createsubscriber_5_<rand> ENABLE;"

Step 5: Sync sequences and cut over

Logical replication does not replicate sequences in Postgres 18.[18] Sequence-backed IDENTITY and serial column values flow through fine because they ride along as ordinary column data, but the sequence's internal pointer on the subscriber still points at the start value. The first insert directly against the new cluster will hand out an id that already exists — instant duplicate-key error.

Sequence synchronization is on the roadmap for Postgres 19,[19] but for 18 you have to do it manually. Generate a setval() script on the publisher and apply it on the subscriber immediately before the cutover, with a buffer to absorb any in-flight transactions:

-- Run on publisher, capture output
SELECT format(
  'SELECT pg_catalog.setval(%L, %s, true);',
  schemaname || '.' || sequencename,
  COALESCE(last_value, 1) + 1000
)
FROM pg_sequences
WHERE schemaname NOT IN ('pg_catalog','information_schema')
ORDER BY schemaname, sequencename;

The + 1000 buffer is the conventional safety margin — it guarantees the subscriber's next sequence value is past whatever the publisher has handed out plus anything in flight during your cutover window.
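Putting generation and application together, the whole sequence sync can be one pipe between the two containers defined in this tutorial:

```shell
# Generate setval() statements on the publisher, apply them on the subscriber.
# -At strips headers and alignment so psql emits bare SQL statements.
docker exec pg17-publisher psql -U postgres -d shop -At -c "
  SELECT format('SELECT pg_catalog.setval(%L, %s, true);',
                schemaname || '.' || sequencename,
                COALESCE(last_value, 1) + 1000)
  FROM pg_sequences
  WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" |
docker exec -i pg18-subscriber psql -U postgres -d shop
```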

Apply the generated setval() statements on pg18-subscriber, then perform the cutover:

# 1. Block writes on publisher (revoke INSERT/UPDATE/DELETE for app role,
#    or flip the load balancer to read-only). Keep reads alive.
# 2. Wait for the subscription to drain to the publisher's current LSN:
docker exec pg18-subscriber psql -U postgres -d shop -c "
  SELECT subname, latest_end_lsn FROM pg_stat_subscription;"
docker exec pg17-publisher psql -U postgres -c "SELECT pg_current_wal_lsn();"
# 3. When latest_end_lsn >= pg_current_wal_lsn, redirect application traffic
#    to port 5534 (the PG 18 cluster).
# 4. Drop the subscription and the publisher's slot to clean up.
docker exec pg18-subscriber psql -U postgres -d shop -c "
  ALTER SUBSCRIPTION pg_createsubscriber_5_<rand> DISABLE;
  ALTER SUBSCRIPTION pg_createsubscriber_5_<rand> SET (slot_name = NONE);
  DROP SUBSCRIPTION pg_createsubscriber_5_<rand>;"
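One gotcha in step 3: pg_lsn values such as 16/B374D848 are two hex halves, so comparing them as plain strings gives wrong answers once the high half grows. A small bash helper (a sketch; the function names are invented here) makes the drain check scriptable:

```shell
# Convert a pg_lsn text value ('HI/LO', both hex) into a single integer
lsn_to_int() {
  local hi=${1%/*} lo=${1#*/}
  echo $(( (16#$hi << 32) | 16#$lo ))
}

# Exit 0 once the subscriber has applied everything the publisher wrote
drained() {
  [ "$(lsn_to_int "$1")" -ge "$(lsn_to_int "$2")" ]
}

# Example: subscriber latest_end_lsn vs publisher pg_current_wal_lsn()
drained "16/B374D850" "16/B374D848" && echo "safe to cut over"
```

Feed it the latest_end_lsn from pg_stat_subscription and the publisher's pg_current_wal_lsn() in a polling loop, and cut over the moment it succeeds.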

The application is now writing to Postgres 18.3.

Verification

Two checks tell you the upgrade landed cleanly. First, every replicated table should be in the r (ready) state in pg_subscription_rel:[20]

SELECT srrelid::regclass AS table_name, srsubstate, srsublsn
FROM pg_subscription_rel
ORDER BY srrelid::regclass;

Any srsubstate other than r means a table is still synchronizing or stuck. Second, sequence pointers should be ahead of any existing IDs — MAX(id) on each table should be less than the sequence's next value. Insert a probe row on the new cluster and confirm there is no duplicate-key error.
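For the demo schema that second check is one query. products_id_seq is the name Postgres generates for the identity column's backing sequence; adjust for your own tables:

```sql
-- seq_position must be greater than max_id, or the next insert will collide
SELECT max(id) AS max_id,
       (SELECT last_value
          FROM pg_sequences
         WHERE schemaname = 'public'
           AND sequencename = 'products_id_seq') AS seq_position
FROM products;
```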

Troubleshooting

pg_createsubscriber: error: target server must be shut down — the standby is still running. docker compose stop standby and rerun. The binary refuses to operate on a live cluster on purpose.[1]

max_active_replication_origins must be at least N — this new PG 18 GUC, split out from max_replication_slots, defaults to 10 and is the per-subscriber equivalent of replication-slot accounting. Restart with the value bumped.[17]

subscription has tables in srsubstate = 's' after pg_upgrade — pg_upgrade requires every subscription table to be in i (init) or r (ready) state before it runs.[14] Wait for the initial sync to finish before you upgrade.

duplicate key value violates unique constraint immediately after cutover — sequences. Re-run the setval() script with a larger buffer; this is among the most common foot-guns in a logical-replication cutover.[18]

Replication appears stuck, pg_replication_slots.confirmed_flush_lsn not advancing — confirm the apply worker is alive on the subscriber (pg_stat_subscription should show a non-null received_lsn), and check pg_stat_subscription_stats on the subscriber for a non-zero apply_error_count. If the publisher's WAL is filling up because the subscriber is far behind, raise max_slot_wal_keep_size or fix whatever is blocking apply (long-running transaction, lock conflict, missing primary key).

Rollback

Because the upgrade kept the original Postgres 17.9 primary running and accepting writes the entire time, rollback is just a connection-string flip. If something looks wrong on Postgres 18.3 within minutes of cutover, repoint the application back to port 5532. The PG 17.9 primary is still authoritative; the only data it doesn't have is whatever the application wrote against PG 18.3 after cutover. For a true bidirectional safety net, set up a reverse PG 18 → PG 17 logical subscription before cutover (publisher on the new cluster, subscriber on the old), so post-cutover writes flow back. That's worth the extra hour of setup for a critical OLTP workload.
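A sketch of that reverse link, created before cutover. The names failback_pub and failback_sub are invented here; copy_data = false skips the initial copy because both clusters already hold the data, and origin = none (available since PG 16) keeps changes that arrived via replication from looping back:

```sql
-- On the new PG 18 cluster (port 5534): publish everything
CREATE PUBLICATION failback_pub FOR ALL TABLES;

-- On the old PG 17 primary (port 5532): subscribe without re-copying data
CREATE SUBSCRIPTION failback_sub
  CONNECTION 'host=subscriber_v18 port=5432 user=postgres password=dev dbname=shop'
  PUBLICATION failback_pub
  WITH (copy_data = false, origin = none);
```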

Next steps

The same pg_createsubscriber workflow underpins blue-green deployments — set up a PG 18 replica, drain reads, cut over writes, and decommission the old primary on a schedule rather than at 3 AM. Pair it with PgBouncer or Supabase Supavisor in front of Postgres so the connection switch is a pooler config reload instead of an application redeploy. If you need real-time UI events on top of this stack, Postgres LISTEN/NOTIFY layers on cleanly without any new replication machinery. And if you want the entire Docker-Compose-based dev environment behind production-grade HTTPS, see the Caddy reverse-proxy tutorial.

Footnotes

  1. PostgreSQL 18 documentation, pg_createsubscriber. https://www.postgresql.org/docs/current/app-pgcreatesubscriber.html

  2. PostgreSQL 18 documentation, Chapter 29. Logical Replication — Upgrade. https://www.postgresql.org/docs/current/logical-replication-upgrade.html

  3. PostgreSQL 18 documentation, pg_upgrade. https://www.postgresql.org/docs/current/pgupgrade.html

  4. PostgreSQL News, PostgreSQL 18.3, 17.9, 16.13, 15.17, and 14.22 Released! (February 26, 2026). https://www.postgresql.org/about/news/postgresql-183-179-1613-1517-and-1422-released-3246/

  5. PostgreSQL 18 documentation, pg_createsubscriber — --all, --clean, and --enable-two-phase switches. https://www.postgresql.org/docs/current/app-pgcreatesubscriber.html

  6. Docker, Compose specification. https://docs.docker.com/compose/

  7. PostgreSQL 18 documentation, Chapter 29. Logical Replication. https://www.postgresql.org/docs/current/logical-replication.html

  8. pgPedia, pg_createsubscriber. https://pgpedia.info/p/pg_createsubscriber.html

  9. PostgreSQL 18 documentation, 19.5. Write Ahead Log — wal_level. https://www.postgresql.org/docs/current/runtime-config-wal.html

  10. PostgreSQL 18 documentation, 19.5. Write Ahead Log — automatic effective WAL-level promotion when a logical slot exists. https://www.postgresql.org/docs/current/runtime-config-wal.html

  11. PostgreSQL 18 documentation, pg_createsubscriber — prerequisites for max_replication_slots and max_wal_senders on the source. https://www.postgresql.org/docs/current/app-pgcreatesubscriber.html

  12. PostgreSQL 18 documentation, pg_basebackup. https://www.postgresql.org/docs/current/app-pgbasebackup.html

  13. pgPedia, pg_createsubscriber — slot naming convention pg_createsubscriber_%u_%x. https://pgpedia.info/p/pg_createsubscriber.html

  14. PostgreSQL 18 documentation, Chapter 29.13. Upgrade — pg_upgrade prerequisites for logical-replication clusters, including disabling subscriptions before upgrade. https://www.postgresql.org/docs/current/logical-replication-upgrade.html

  15. PostgreSQL APT repository, PostgreSQL Apt Repository — serves all currently supported major versions. https://wiki.postgresql.org/wiki/Apt

  16. docker-library/postgres GitHub PR / discussion — Postgres 18 image VOLUME change to /var/lib/postgresql to enable side-by-side major upgrades via pg_upgrade --link. https://github.com/docker-library/postgres/issues/37

  17. PostgreSQL 18 documentation, 19.6. Replication — max_active_replication_origins (new in PG 18). https://www.postgresql.org/docs/current/runtime-config-replication.html

  18. PostgreSQL 18 documentation, 29.8. Restrictions — sequences and large objects are not replicated. https://www.postgresql.org/docs/current/logical-replication-restrictions.html

  19. depesz blog, Waiting for PostgreSQL 19 — Sequence synchronization in logical replication (November 2025). https://www.depesz.com/2025/11/11/waiting-for-postgresql-19-sequence-synchronization-in-logical-replication/

  20. PostgreSQL 18 documentation, pg_subscription_rel system catalog. https://www.postgresql.org/docs/current/catalog-pg-subscription-rel.html

