Production Postgres Pooling: PgBouncer + Supavisor 2026
May 7, 2026
A single idle Postgres connection costs you roughly 2–3 MB of memory before it ever runs a query.1 Push that to a few thousand serverless invocations all opening their own connections and your database falls over from memory pressure long before CPU becomes the bottleneck. Connection pooling is the fix — but the moment you put a pooler in front of Postgres, half of your favorite features quietly stop working.
This guide walks through a production-grade setup for both worlds: self-hosted PgBouncer 1.25.1 in front of Postgres 18, then the same application pointed at Supabase's managed Supavisor pooler. Every config value, port, and command was verified against the official PgBouncer 1.25.1 release notes,2 the Supavisor source repo,3 and Supabase's own docs4 on the day of writing.
TL;DR
You will run Postgres 18 and PgBouncer 1.25.1 with Docker Compose, configure transaction pooling with SCRAM-SHA-256 auth, generate a SCRAM userlist, and connect a Node.js app through port 6432. You will then swap the connection string to Supabase Supavisor (port 6543) and watch the same code work against a managed pooler. By the end you will know which Postgres features break under transaction pooling, how to enable protocol-level prepared statements (a 15–250 percent throughput win5), and how to read SHOW POOLS to spot a starving pool before users notice.
What you'll learn
- Why one Postgres connection is more expensive than you think, with real memory math.
- The three PgBouncer pool modes — session, transaction, statement — and what each one breaks.
- How to bring up Postgres 18 + PgBouncer 1.25.1 with a single `docker compose up`.
- How to configure SCRAM-SHA-256 authentication and a real `userlist.txt`.
- How to enable protocol-level named prepared statements (a PgBouncer 1.21+ feature).
- How to wire a Node.js application using `pg@8.20.0` to the pooler, and what changes vs. talking to Postgres directly.
- How to point the same code at Supabase Supavisor by changing only the connection string.
- How to read `SHOW POOLS` and `SHOW STATS` like a debugger, not a dashboard.
- The five real production failures every team hits — LISTEN over a transaction pool, broken prepared statements, advisory locks, session-scoped SET, and IPv4 surprises on Supabase direct connections.
Prerequisites
| Tool | Pinned version | Why |
|---|---|---|
| Docker Desktop | 4.30+ (or Docker Engine 27+) | Compose v2 syntax used below |
| Postgres image | postgres:18-alpine | Postgres 18 GA shipped 2025-09-256 |
| PgBouncer image | edoburu/pgbouncer:v1.25.1-p0 | Ships PgBouncer 1.25.1 on Alpine 3.227 |
| Node.js | 22 LTS | For the demo client |
| pg (npm) | 8.20.0 | Latest stable on the npm registry |
| dotenv (npm) | 17.4.2 | For env-file loading |
| psql | 16+ | For the admin console |
| Supabase project (Step 6) | Free tier works for testing | Pro+ if you need a dedicated IPv4 address8 |
Why pooling — the memory math nobody quotes
Postgres ships with a process-per-connection model. The default max_connections is 100 and work_mem is 4 MB.1 An idle connection runs in its own backend process and accounts for roughly 2–3 MB of resident memory before any query runs9 — the exact number depends on huge pages, kernel copy-on-write behavior, and workload, but the order of magnitude holds. An active connection running a complex query may use multiple work_mem buffers concurrently: a sort and a hash join can each take their own 4 MB chunk on top of that baseline.
That math has consequences. A Lambda-style app that spins up 1,000 cold containers and each opens a connection isn't asking your database for "1,000 sockets." It's asking for around 1000 × ~3 MB ≈ 3 GB of process memory, plus whatever queries are running at the time. On a db.t4g.medium with 4 GB RAM, that's already game over.
The classic Postgres pool-sizing formula is (CPU_cores × 2) + number_of_disks from the Postgres wiki tuning guide.10 For a 4-core box backed by an SSD, that's 9 connections. Your app, however, plausibly needs to handle 500 concurrent HTTP requests. Pooling is the bridge between those two numbers.
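As a quick sanity check, that starting-point math can be scripted. A sketch assuming a Linux host (`nproc`) and a single data disk — substitute your real disk count:

```shell
# Wiki starting point: (CPU_cores × 2) + number_of_disks.
# Assumption: one SSD behind the data directory — adjust `disks` to match.
cores=$(nproc)
disks=1
pool_size=$(( cores * 2 + disks ))
echo "suggested default_pool_size: ${pool_size}"
```

On the 4-core, single-SSD example above this prints 9 — the same number the prose derives. Treat it as a starting point to tune from, not a ceiling.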
The three pool modes — and what each one breaks
PgBouncer can be configured in three pooling modes,11 and the mode determines which Postgres features survive.
| Mode | Server held for | Session features survive? | Realistic use |
|---|---|---|---|
| `session` | Lifetime of the client connection | Yes — full Postgres parity | Long-lived workers, psql admin tools, migration runners |
| `transaction` | BEGIN → COMMIT/ROLLBACK | No: LISTEN, session SET, session advisory locks, temp tables across txns, WITH HOLD cursors | Default for stateless web apps |
| `statement` | One statement only | Multi-statement transactions are rejected | Pure key-lookup workloads in autocommit |
Transaction pooling is what most production web apps want — it's the practical mode that lets you put 10,000 web clients in front of 20 Postgres backends without giving up multi-statement transactions (statement pooling forbids those by definition). The trade-off is sharp: session-state features stop working because the next query may land on a different backend connection.12 LISTEN/NOTIFY in particular is a documented landmine — LISTEN does not work in transaction mode, although NOTIFY does.13 If you also write realtime apps, see our Postgres LISTEN/NOTIFY presence tutorial for the workaround pattern (a dedicated direct connection for the listener, with the rest of your app pooled).
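The shape of that listener-split pattern can be sketched in a few lines of node-postgres. The client factory is injected so the rest of the app stays on the pooler; `makeClient` and the channel name are illustrative, not library API:

```javascript
// Sketch: LISTEN must run on a dedicated direct (or session-mode) connection,
// because transaction pooling may hand each query a different backend.
// makeClient should return a pg.Client built from your DIRECT connection
// string — never the transaction-pooler one.
async function startListener(makeClient, channel, onNotification) {
  const client = makeClient();
  await client.connect();
  client.on('notification', (msg) => onNotification(msg.channel, msg.payload));
  // A channel is an identifier, so it cannot be a $1 parameter; keep it a
  // hard-coded constant rather than interpolating user input.
  await client.query(`LISTEN ${channel}`);
  return client; // keep a reference — losing it closes the connection
}
```

Everything else in the app keeps using the pooled `Pool`; only this one client bypasses it.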
Step 1 — Bring up Postgres 18 + PgBouncer with Docker Compose
Create a fresh directory and a Compose file. We'll wire PgBouncer to talk to Postgres over a private Docker network, and expose both services to the host on different ports — that way the load test in the final section can hit either one directly.
mkdir pg-pooling-demo && cd pg-pooling-demo
mkdir pgbouncer
Save the following as docker-compose.yaml:
services:
postgres:
image: postgres:18-alpine
container_name: demo-pg
environment:
POSTGRES_DB: appdb
POSTGRES_USER: appuser
POSTGRES_PASSWORD: app-secret-rotate-me
POSTGRES_HOST_AUTH_METHOD: scram-sha-256
POSTGRES_INITDB_ARGS: "--auth-host=scram-sha-256 --auth-local=scram-sha-256"
command:
- "postgres"
- "-c"
- "password_encryption=scram-sha-256"
- "-c"
- "max_connections=50"
ports:
- "5433:5432"
volumes:
- pgdata:/var/lib/postgresql/data
networks:
- poolnet
pgbouncer:
image: edoburu/pgbouncer:v1.25.1-p0
container_name: demo-pgbouncer
depends_on:
- postgres
ports:
- "6432:5432"
volumes:
- ./pgbouncer/pgbouncer.ini:/etc/pgbouncer/pgbouncer.ini:ro
- ./pgbouncer/userlist.txt:/etc/pgbouncer/userlist.txt:ro
networks:
- poolnet
volumes:
pgdata:
networks:
poolnet:
A few choices worth calling out. First, we deliberately set max_connections=50 on Postgres — small enough that you can prove pooling works by overwhelming it without a pool. Second, we force password_encryption=scram-sha-256 so the userlist file we generate next will work (Postgres 14+ defaults to this anyway, but it's worth being explicit). Third, we expose Postgres on host port 5433 AND PgBouncer on host port 6432, so the load test in the Verification section can hit both directly: 5433 = direct Postgres (capped at 50 conns), 6432 = pooled (queues up to 1000).
We're not running PgBouncer yet because it needs pgbouncer.ini and userlist.txt. Bring up only Postgres for now:
docker compose up -d postgres
docker compose logs -f postgres # wait for "database system is ready to accept connections"
Step 2 — Configure PgBouncer for transaction pooling
Save the following as pgbouncer/pgbouncer.ini:
[databases]
appdb = host=postgres port=5432 dbname=appdb
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 5432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
# Pooling
pool_mode = transaction
default_pool_size = 20
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
max_client_conn = 1000
max_db_connections = 40
# Prepared statements (protocol level — PgBouncer 1.21+)
max_prepared_statements = 200
# Hygiene
server_reset_query = DISCARD ALL
query_wait_timeout = 30
server_idle_timeout = 600
log_connections = 1
log_disconnections = 1
# Admin console
admin_users = appuser
stats_users = appuser
ignore_startup_parameters = extra_float_digits,application_name
The values above are tuned, not random. default_pool_size = 20 is a reasonable starting point against a 50-connection Postgres,14 leaving room for migrations, replication slots, and one-off psql sessions. max_client_conn = 1000 lets a thousand HTTP workers all sit on the pooler simultaneously. query_wait_timeout = 30 overrides PgBouncer's 120-second default15 — most APIs would rather fail fast than queue for two minutes. server_reset_query = DISCARD ALL is also the PgBouncer default;16 it's listed explicitly so you remember it exists when you start debugging "why does this query see leftover state from another client."
max_prepared_statements = 200 is the line that turns on protocol-level named prepared statements in transaction mode — a feature added in PgBouncer 1.21.017 that benchmarks at 15–250 percent more throughput depending on the workload.5 We'll exercise it from Node in Step 5.
Step 3 — Generate a SCRAM-SHA-256 userlist
PgBouncer 1.25.1 expects auth secrets in userlist.txt. With auth_type = scram-sha-256 you can supply either plaintext (a maintenance footgun) or the SCRAM secret stored in pg_authid.rolpassword.18 We will use the second approach.
Connect to the running Postgres directly and grab the SCRAM secret. Note that pg_authid is a privileged catalog — only superusers can SELECT from it. The official postgres Docker image creates POSTGRES_USER as a bootstrap superuser, which is why the next command works as appuser. In production you would run this as a dedicated DBA role.
docker exec -it demo-pg psql -U appuser -d appdb -c \
"SELECT rolname, rolpassword FROM pg_authid WHERE rolname='appuser';"
You should see something like:
rolname | rolpassword
---------+----------------------------------------------------------------------------------------------------------------------------
appuser | SCRAM-SHA-256$4096:abc123==$storedKeyBase64=:serverKeyBase64=
(1 row)
Copy the rolpassword value verbatim — quotes and dollar signs included — and write it to pgbouncer/userlist.txt:
"appuser" "SCRAM-SHA-256$4096:abc123==$storedKeyBase64=:serverKeyBase64="
Quote both fields. The format is "username" "password-or-secret" per the PgBouncer config docs.19 Lock the file down so it isn't world-readable:
chmod 600 pgbouncer/userlist.txt
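If copy-paste makes you nervous, the whole step can be scripted. A sketch assuming the compose services from Step 1 (container `demo-pg`, role `appuser` — substitute your own):

```shell
# Emit the `"username" "SCRAM-secret"` line straight from pg_authid so no
# editor ever touches the secret. -At = unaligned, tuples-only output.
docker exec demo-pg psql -U appuser -d appdb -At -c \
  "SELECT format('\"%s\" \"%s\"', rolname, rolpassword)
     FROM pg_authid WHERE rolname = 'appuser';" \
  > pgbouncer/userlist.txt
chmod 600 pgbouncer/userlist.txt
cat pgbouncer/userlist.txt   # eyeball: one quoted pair, no trailing junk
```

Because the secret never passes through a clipboard or editor, the smart-quote and trailing-newline failure modes below can't happen.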
Now bring PgBouncer up:
docker compose up -d pgbouncer
docker compose logs -f pgbouncer
You're looking for a clean startup with no auth_file errors. If you see password authentication failed, the most common cause is copy-pasting the SCRAM secret with a trailing newline or with smart quotes from a rich-text editor — re-paste from a plain terminal.
Step 4 — Verify the pool from psql
Connect through PgBouncer on port 6432 — not directly to Postgres on 5432:
psql "host=127.0.0.1 port=6432 user=appuser dbname=appdb"
Run a simple query, then disconnect. Now connect to PgBouncer's admin console — a special pseudo-database called pgbouncer:
psql "host=127.0.0.1 port=6432 user=appuser dbname=pgbouncer"
Once inside, run the two most useful diagnostic commands:
SHOW POOLS;
SHOW STATS;
SHOW POOLS lists one row per (database, user) pair with the live count of client and server connections in each state.20 The fields you actually care about in production are:
- `cl_active` — clients paired with a server connection, processing a query.
- `cl_waiting` — clients holding the line waiting for a server. Anything above zero for sustained intervals is your warning sign.
- `cl_active_cancel_req` — clients that issued a cancel and are waiting for the server to acknowledge.21
- `sv_active` — server connections currently paired with a client.
- `sv_idle` — server connections sitting in the pool ready to be handed out.
- `sv_used` — server connections idle long enough that PgBouncer wants to health-check them before reuse.
SHOW STATS gives you per-database totals and rolling averages — total transactions, total queries, average query duration in microseconds, average wait time, bytes received and sent — so you can chart pool throughput over time without instrumenting your application.
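That reading habit is scriptable. A sketch that extracts `cl_waiting` for `appdb` from unaligned `psql` output — the column position (4th) matches PgBouncer 1.25's `SHOW POOLS` layout, but verify it against a plain `SHOW POOLS;` on your version before alerting on it:

```shell
# Pull one alertable number out of SHOW POOLS: how many clients are queued.
# -A = unaligned ('|'-separated), -t = tuples only, -c = run one command.
pools=$(PGPASSWORD=app-secret-rotate-me psql \
  "host=127.0.0.1 port=6432 user=appuser dbname=pgbouncer" -Atc 'SHOW POOLS;')
echo "$pools" | awk -F'|' '$1 == "appdb" { print "cl_waiting=" $4 }'
```

Wire that into a metrics agent or a 10-second loop; sustained non-zero output is exactly the early-warning signal described above.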
Step 5 — Wire a Node.js app through the pooler
The contract for application code is "pretend the pooler is Postgres." With one caveat we'll see in a moment.
mkdir client && cd client
npm init -y
npm install pg@8.20.0 dotenv@17.4.2
Save the following as client/index.mjs:
import 'dotenv/config';
import pg from 'pg';
const { Pool } = pg;
const pool = new Pool({
host: process.env.PGHOST,
port: Number(process.env.PGPORT),
database: process.env.PGDATABASE,
user: process.env.PGUSER,
password: process.env.PGPASSWORD,
max: 10,
idleTimeoutMillis: 30_000,
connectionTimeoutMillis: 5_000,
});
async function main() {
const start = Date.now();
const tasks = Array.from({ length: 200 }, (_, i) =>
pool.query({
name: 'select_const', // gives the driver a stable name -> protocol-level prepare
text: 'SELECT $1::int AS n, current_setting($2) AS app',
values: [i, 'application_name'],
}),
);
const results = await Promise.all(tasks);
console.log(`ran ${results.length} queries in ${Date.now() - start} ms`);
await pool.end();
}
main().catch((err) => {
console.error(err);
process.exit(1);
});
And client/.env:
PGHOST=127.0.0.1
PGPORT=6432
PGDATABASE=appdb
PGUSER=appuser
PGPASSWORD=app-secret-rotate-me
Run it:
node index.mjs
You should see something like ran 200 queries in 180 ms. Now go back to the admin console and run SHOW POOLS; again — you'll see sv_active briefly spike toward 10 (your Node Pool max), cl_waiting may show non-zero entries during the burst, then everything settles back to sv_idle.
The important detail in that snippet is the name field on the query. When name is set, node-postgres uses the extended query protocol — Parse with that name, then Bind and Execute — caching the parsed plan on a per-connection basis. With max_prepared_statements = 200 on the PgBouncer side, the pooler tracks those named protocol-level prepared statements and transparently re-prepares them on whichever backend it hands you next.22 Without max_prepared_statements (which defaults to 0 — disabled), PgBouncer rejects named prepared statements in transaction mode and most drivers fall back to unnamed extended queries, throwing away the win.
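To see the win on your own workload, a rough A/B sketch — it takes any `pg` Pool (the one from the snippet above works) and times an unnamed run against a named run. The helper name is illustrative, and absolute numbers will be dominated by network latency:

```javascript
// Rough A/B: unnamed extended queries vs named (protocol-level prepared).
// With max_prepared_statements > 0 on PgBouncer, the named run should pull
// ahead as iterations grow; with it at 0 (the default), expect no gap.
async function comparePrepared(pool, iterations = 500) {
  const text = 'SELECT $1::int AS n';
  const timeRun = async (name) => {
    const start = Date.now();
    for (let i = 0; i < iterations; i++) {
      await pool.query(name ? { name, text, values: [i] } : { text, values: [i] });
    }
    return Date.now() - start;
  };
  const unnamedMs = await timeRun(null);       // re-parsed every time
  const namedMs = await timeRun('bench_sel');  // parsed once per backend
  return { unnamedMs, namedMs };
}
```

Run it once against port 6432 with `max_prepared_statements = 200` and once with it commented out to make the difference concrete.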
Step 6 — Swap to Supabase Supavisor
So far we've been the SRE for our own pooler. The other half of this guide is the managed path. Supabase ships Supavisor, a multi-tenant Postgres pooler written in Elixir and Postgres-wire compatible, in front of every project.3 To switch your app from PgBouncer to Supavisor you change exactly one thing — the connection string.
In the Supabase dashboard, open Project Settings → Database → Connection string. You will see two connection strings:4
# Session mode (port 5432) — full Postgres parity, persistent clients
postgres://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:5432/postgres
# Transaction mode (port 6543) — default for serverless / web apps
postgres://postgres.<project-ref>:<password>@aws-0-<region>.pooler.supabase.com:6543/postgres
Update client/.env to point at the transaction-mode pooler:
PGHOST=aws-0-us-east-1.pooler.supabase.com
PGPORT=6543
PGDATABASE=postgres
PGUSER=postgres.<your-project-ref>
PGPASSWORD=<your-db-password>
Re-run node index.mjs. The exact same code works.
A few Supavisor-specific facts your team will hit. Port 5432 is now session mode and port 6543 is transaction mode — Supabase deprecated session mode on port 6543 on February 28, 2025.23 Supavisor 1.0 added named prepared statement support;24 initial transaction-mode prepared-statement work also landed but had bugs at the time of writing, so if you're using prepared statements at high concurrency, sanity-check on your specific Supavisor version. And on the IPv4 side: Supabase's direct database hostnames are IPv6-only across all tiers, while the pooler hostname (pooler.supabase.com) is dual-stack — using the pooler is itself the IPv4 fix. If you specifically need an IPv4 address on the direct connection, the Dedicated IPv4 add-on is $0.0055/hour ≈ $4/month per database and requires the Pro tier or higher.8
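One guardrail worth adding to deploy scripts: fail fast when a serverless runtime is handed the wrong port. A sketch using only Node's built-in `URL`; the 6543/5432 split encodes Supabase's current layout, so re-check it against your dashboard:

```javascript
// Guardrail: serverless runtimes churn connections, so they belong on the
// transaction pooler (6543 on Supabase today); 5432 is session mode there.
function assertPoolerPort(connectionString, { serverless }) {
  const url = new URL(connectionString);
  const port = Number(url.port || 5432);
  if (serverless && port !== 6543) {
    throw new Error(
      `serverless runtime should use the transaction pooler on 6543, got ${port}`,
    );
  }
  return port;
}
```

Call it once at startup with `DATABASE_URL`; a thrown error at boot is far cheaper than a session-state bug in production.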
Verification — load test the difference
The whole point is that the pooler lets one Postgres backend serve far more clients than max_connections would allow. Prove it.
Because we mapped both 5433 → Postgres and 6432 → PgBouncer in the Compose file, you can drive load straight at each port from the host. Run 80 concurrent pg_sleep(0.5) queries directly against Postgres on port 5433:
for i in $(seq 1 80); do
PGPASSWORD=app-secret-rotate-me psql \
"host=127.0.0.1 port=5433 user=appuser dbname=appdb" \
-c "SELECT pg_sleep(0.5);" &
done
wait
You will see FATAL: sorry, too many clients already once concurrent connections cross the 50-connection cap we set in the Compose file. That is Postgres telling you, plainly, that it cannot serve client #51 directly.
Now repeat the same loop against PgBouncer on port 6432:
for i in $(seq 1 80); do
PGPASSWORD=app-secret-rotate-me psql \
"host=127.0.0.1 port=6432 user=appuser dbname=appdb" \
-c "SELECT pg_sleep(0.5);" &
done
wait
All 80 clients succeed. PgBouncer queues clients beyond default_pool_size and serves them as backends free up. While the loop is running, open a second terminal, connect to the admin console, and watch cl_waiting rise and fall during the burst.
Troubleshooting — the five real failures
These are the failures you will actually hit, with the explanation that maps to the underlying cause.
1. LISTEN returns no notifications. You're on transaction mode. LISTEN is documented as unsupported in transaction pooling — although NOTIFY works.13 Move the listener to a dedicated direct connection (or a session-mode pool) and keep the rest of your app on transaction. The pattern is detailed in our Postgres LISTEN/NOTIFY tutorial.
2. Prepared statements break with "prepared statement does not exist." You forgot max_prepared_statements. The default is 0 — protocol-level prepared statements are disabled.22 Set it to a non-zero value (100–200 is reasonable), reload PgBouncer, and let the driver re-prepare on the next request.
3. Session-level SET silently has no effect. A SET search_path (without LOCAL) sets a session GUC; transaction pooling clears it as soon as the transaction ends and the next transaction lands on a different backend connection. Use SET LOCAL inside a transaction, or add the parameter to track_extra_parameters if you really need session scope. Note the security implication: track_extra_parameters containing search_path is part of the precondition for CVE-2025-12819 in PgBouncer < 1.25.1,25 which is why we pinned 1.25.1-p0 at the top.
4. Advisory locks across queries don't hold. pg_advisory_lock (the session form) attaches to the backend connection. The next query in your app may land on a different connection in transaction mode, so the lock evaporates from the caller's perspective. Use the transaction-scoped variants (pg_advisory_xact_lock) or move locking into a dedicated session-mode pool.
5. Supabase connect fails from your laptop / GitHub Actions / Vercel with "no route to host." Supabase's direct database hostname (e.g. db.<project>.supabase.co) is IPv6-only across all tiers, while many home ISPs and CI runners are IPv4-only. The fix is either to route through Supavisor pooler hostnames, which are dual-stack and what Step 6 used, or to upgrade to Pro and enable the dedicated IPv4 add-on (~$4/month per database8).
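Failures 3 and 4 share a cure: confine the state to one transaction on one checked-out client. A sketch of that pattern with node-postgres — the helper name and its options object are illustrative; the `pg` Pool from Step 5 is assumed:

```javascript
// Pin one pooled connection and do all stateful work inside one transaction.
// set_config(..., true) behaves like SET LOCAL (dies at COMMIT), and the
// _xact_ advisory-lock variant releases itself at COMMIT/ROLLBACK — so
// neither can leak across backends under transaction pooling.
async function withTxnState(pool, { searchPath, lockKey }, fn) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    if (searchPath !== undefined) {
      await client.query("SELECT set_config('search_path', $1, true)", [searchPath]);
    }
    if (lockKey !== undefined) {
      await client.query('SELECT pg_advisory_xact_lock($1)', [lockKey]);
    }
    const result = await fn(client); // every query here shares the txn/backend
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```

The key discipline is that `fn` must use the `client` it is handed, never the pool — that is what keeps every statement on the same backend.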
Tuning checklist before you ship
A short list of dials worth setting before this is in front of users.
- Right-size `default_pool_size`. Start with `(CPU_cores × 2) + disks`,10 then tune by watching `cl_waiting` under realistic load — a non-zero sustained queue means raise it (or scale Postgres).
- Set `query_wait_timeout` short (5–30 seconds). The PgBouncer default of 120 seconds15 hides incidents for two minutes before your client even knows.
- Pin `auth_type = scram-sha-256` and rotate `userlist.txt` regularly. Treat it like a secret on the same rotation cadence as your DB password.
- Set `max_db_connections` strictly below Postgres `max_connections` — leave headroom for migrations, replication, and superuser logins.
- Monitor `SHOW STATS` over time. The two metrics that catch incidents early are average query duration and `total_xact_count` — an unexplained jump in either usually beats your APM by 30 seconds.
Next steps
If you're new to the surrounding Postgres stack, our database architecture guide covers replication and partitioning trade-offs that interact with pool sizing. If you're stitching pooled Postgres into a larger backend, the backend web development guide walks through the rest of the layers — caching, message queues, and zero-downtime database migrations.
Footnotes
1. PostgreSQL 18 Resource Consumption documentation — default `max_connections = 100`, default `work_mem = 4 MB`. https://www.postgresql.org/docs/current/runtime-config-resource.html
2. PgBouncer 1.25.1 release announcement (December 3, 2025), fixing CVE-2025-12819. https://www.postgresql.org/about/news/pgbouncer-1251-released-fixing-a-bunch-of-bugs-before-christmas-including-cve-2025-12819-3189/
3. Supavisor source repository — "A cloud-native, multi-tenant Postgres connection pooler." https://github.com/supabase/supavisor
4. Supabase docs — "Connect to your database," covering session and transaction mode pooler URLs. https://supabase.com/docs/guides/database/connecting-to-postgres
5. PgBouncer 1.21.0 release announcement — "this feature was able to increase query throughput anywhere from 15% to 250%, depending on the workload." https://www.postgresql.org/about/news/pgbouncer-1210-released-now-with-prepared-statements-2735/
6. PostgreSQL 18 GA announcement, September 25, 2025. https://www.postgresql.org/about/news/postgresql-18-released-3142/
7. edoburu/pgbouncer Docker Hub tags — `v1.25.1-p0` ships PgBouncer 1.25.1 on Alpine 3.22. https://hub.docker.com/r/edoburu/pgbouncer/tags
8. Supabase Dedicated IPv4 Address docs — "$0.0055 an hour ... approximately $4.00 if left on for a full month ... available to Pro and above organizations." https://supabase.com/docs/guides/platform/ipv4-address
9. PostgreSQL Wiki, Tuning Your PostgreSQL Server — connection memory overhead and the `(CPU_cores × 2) + number_of_disks` rule. https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
10. Same source as 9, referenced directly by the Heroku PgBouncer best-practices guide. https://devcenter.heroku.com/articles/best-practices-pgbouncer-configuration
11. PgBouncer features documentation — pool modes (`session`, `transaction`, `statement`). https://www.pgbouncer.org/features.html
12. PgBouncer features documentation — feature compatibility matrix, listing what is unsafe in each mode. https://www.pgbouncer.org/features.html
13. PgBouncer issue #655, "Feature: Listen/Notify Support with Transaction Pooling" — confirms `LISTEN` is unsupported in transaction mode while `NOTIFY` works. https://github.com/pgbouncer/pgbouncer/issues/655
14. PgBouncer config docs — `default_pool_size` defaults to 20. https://www.pgbouncer.org/config.html
15. PgBouncer config docs — `query_wait_timeout` defaults to 120 seconds. https://www.pgbouncer.org/config.html
16. PgBouncer config docs — `server_reset_query` defaults to `DISCARD ALL`. https://www.pgbouncer.org/config.html
17. PgBouncer 1.21.0 release notes, "The one with prepared statements." https://github.com/pgbouncer/pgbouncer/releases/tag/pgbouncer_1_21_0
18. Crunchy Data, "How to SCRAM in Postgres with pgBouncer" — walks through copying `rolpassword` from `pg_authid` into `userlist.txt`. https://www.crunchydata.com/blog/pgbouncer-scram-authentication-postgresql
19. PgBouncer config docs — `auth_file` format described as `"username" "password"` with an optional SCRAM, MD5, or plaintext secret. https://www.pgbouncer.org/config.html
20. PgBouncer command-line usage — `SHOW POOLS` and `SHOW STATS`. https://www.pgbouncer.org/usage.html
21. Heap Engineering, "Decrypting PgBouncer's diagnostic information" — `cl_active_cancel_req` and `sv_used` field meanings. https://www.heap.io/blog/decrypting-pgbouncers-diagnostic-information
22. pganalyze, "5mins of Postgres" — PgBouncer 1.21 added protocol-level prepared statements; `max_prepared_statements` must be > 0 to enable. https://pganalyze.com/blog/5mins-postgres-pgbouncer-prepared-statements-transaction-mode
23. Supabase discussion #32755 — the Supabase Connection Pooler deprecating session mode on port 6543 on February 28, 2025. https://github.com/orgs/supabase/discussions/32755
24. Supavisor issue #69 — "Support named prepared statements," merged in Supavisor 1.0. https://github.com/supabase/supavisor/issues/69
25. NVD CVE-2025-12819 — PgBouncer untrusted `search_path` in the `auth_query` connection handler, fixed in 1.25.1. https://nvd.nist.gov/vuln/detail/CVE-2025-12819