Read Replicas for Self-Hosted Supabase: A Complete Setup Guide

Learn how to configure PostgreSQL read replicas for your self-hosted Supabase instance to improve performance, reduce latency, and scale read operations.

When your self-hosted Supabase project grows, you'll eventually hit a wall: your primary database becomes a bottleneck. Read-heavy workloads, complex analytics queries, and globally distributed users all compete for the same database resources. The solution? Read replicas.

Unlike Supabase Cloud's managed read replicas, setting up replicas for a self-hosted Supabase instance requires hands-on PostgreSQL configuration. This guide walks you through the complete process.

Why Read Replicas Matter for Self-Hosted Supabase

Read replicas serve two primary purposes:

  1. Load distribution: Offload read queries from your primary database, reserving it for writes
  2. Geographic latency reduction: Deploy replicas closer to your users for faster response times

Consider a typical scenario: your primary Supabase instance handles user authentication, real-time subscriptions, and data writes. Meanwhile, your analytics dashboard runs complex aggregation queries that lock tables and consume resources. With a read replica, those analytics queries run on a separate database, leaving your primary responsive for user-facing operations.

The Supabase community on GitHub frequently discusses this challenge. Users running self-hosted instances on Kubernetes with CloudNative PostgreSQL (CNPG) want to leverage their existing read-only endpoints but lack clear guidance on connecting them to Supabase services.

Understanding PostgreSQL Replication Methods

Before diving into configuration, understand the two main replication approaches:

Streaming Replication (Physical)

Streaming replication ships the primary's write-ahead log (WAL) to replicas, reproducing the entire database cluster byte for byte. It's fast and efficient but requires identical PostgreSQL major versions and architectures between primary and replica.

-- Check if streaming replication is active
SELECT * FROM pg_stat_replication;

Pros:

  • Near-zero lag
  • Simpler setup
  • Exact binary copy

Cons:

  • Cannot replicate to different PostgreSQL versions
  • All databases in the cluster are replicated (no selective replication)

Logical Replication

Logical replication streams individual changes (INSERT, UPDATE, DELETE) that can be applied to different PostgreSQL versions or even different database systems. Supabase internally uses logical replication for its managed features.

-- Check logical replication slots
SELECT * FROM pg_replication_slots WHERE slot_type = 'logical';

Pros:

  • Version-independent
  • Selective table replication
  • Can transform data during replication

Cons:

  • Higher overhead
  • Schema changes don't propagate automatically
  • DDL statements require manual handling

For most self-hosted Supabase deployments, streaming replication provides the best balance of simplicity and performance.

Prerequisites

Before setting up read replicas, ensure you have:

  • A working self-hosted Supabase instance (see our deployment guide)
  • PostgreSQL 15 or higher (16+ recommended for improved replication features)
  • Network connectivity between primary and replica servers
  • Sufficient storage on the replica (at least equal to primary)
  • Proper backup procedures in place

Step 1: Configure the Primary Database

First, modify your primary PostgreSQL configuration to enable replication.

Update postgresql.conf

Add or modify these settings in your primary's postgresql.conf:

# Enable WAL archiving for replication
wal_level = replica
max_wal_senders = 5
wal_keep_size = 1GB

# Replication slot and durability settings
synchronous_commit = on        # the default; replicas still receive WAL asynchronously
checkpoint_timeout = 5min
max_replication_slots = 5

If you're using Docker Compose for your Supabase deployment, you can add these as command arguments:

services:
  db:
    image: supabase/postgres:15.6.1.143
    command: >
      postgres
      -c wal_level=replica
      -c max_wal_senders=5
      -c wal_keep_size=1GB
      -c max_replication_slots=5
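
You can sanity-check a configuration fragment before deploying it. Here's a minimal Python sketch; the parser and the headroom rule for max_wal_senders are illustrative assumptions, not official PostgreSQL formulas:

```python
# Sanity-check a postgresql.conf fragment for the replication settings above.
# The parser and thresholds are illustrative, not an official validation tool.

def parse_conf(text: str) -> dict:
    """Parse simple 'key = value' lines, ignoring comments and blanks."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, value = line.split("=", 1)
            settings[key.strip()] = value.strip().strip("'\"")
    return settings

def check_replication_settings(settings: dict, replicas: int = 1) -> list:
    """Return a list of problems; an empty list means the fragment looks sane."""
    problems = []
    if settings.get("wal_level") not in ("replica", "logical"):
        problems.append("wal_level must be 'replica' or 'logical'")
    # Each replica consumes one WAL sender; leave headroom for pg_basebackup.
    if int(settings.get("max_wal_senders", 0)) < replicas + 1:
        problems.append("max_wal_senders too low for the planned replicas")
    if int(settings.get("max_replication_slots", 0)) < replicas:
        problems.append("max_replication_slots too low")
    return problems

conf = """
wal_level = replica
max_wal_senders = 5
wal_keep_size = 1GB
max_replication_slots = 5
"""
print(check_replication_settings(parse_conf(conf)))  # []
```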

Create a Replication User

Create a dedicated user for replication with minimal privileges:

CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'your_secure_password';

Configure pg_hba.conf

Allow the replica to connect. Add this line to pg_hba.conf:

host    replication     replicator      replica_ip/32         scram-sha-256

Restart PostgreSQL for changes to take effect:

docker compose restart db

Step 2: Set Up the Read Replica

Option A: Using pg_basebackup

The simplest approach uses pg_basebackup to create an exact copy of your primary:

# On the replica server
pg_basebackup -h primary_host -D /var/lib/postgresql/data \
  -U replicator -P -R --slot=replica_slot

The -R flag automatically creates the standby.signal file and appends the recovery settings (primary_conninfo, plus primary_slot_name when --slot is used) to postgresql.auto.conf.

Option B: Docker-Based Replica

For Docker deployments, create a separate compose file for the replica:

version: '3.8'

services:
  db-replica:
    image: supabase/postgres:15.6.1.143
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - replica-data:/var/lib/postgresql/data
    command: >
      postgres
      -c primary_conninfo='host=primary_host port=5432 user=replicator password=your_password'
      -c primary_slot_name=replica_slot
      -c hot_standby=on
      -c hot_standby_feedback=on

volumes:
  replica-data:

You'll need to initialize this with a base backup first. Note that Docker Compose prefixes named volumes with the project name, so adjust replica-data in the command below to the actual volume name:

# Initialize the replica volume with base backup
docker run --rm -v replica-data:/var/lib/postgresql/data \
  supabase/postgres:15.6.1.143 \
  pg_basebackup -h primary_host -D /var/lib/postgresql/data \
  -U replicator -P -R --slot=replica_slot

Step 3: Configure Supabase Services to Use Replicas

Here's where self-hosted Supabase differs from Cloud. You need to configure services to route read queries to replicas manually.

PostgREST Configuration

PostgREST (the REST API) doesn't natively support read replicas. You have two options:

Option 1: Separate PostgREST Instance

Deploy a second PostgREST instance pointing to the replica:

services:
  rest-replica:
    image: postgrest/postgrest
    environment:
      PGRST_DB_URI: postgresql://authenticator:${POSTGRES_PASSWORD}@db-replica:5432/postgres
      PGRST_DB_SCHEMAS: public,storage,graphql_public
      PGRST_DB_ANON_ROLE: anon
      PGRST_JWT_SECRET: ${JWT_SECRET}
    ports:
      - "3001:3000"

Option 2: Load Balancer with Request Routing

Use a load balancer (nginx, HAProxy, or Traefik) in front of the REST API to route GET requests to the replica-backed PostgREST instance. The routing happens at the HTTP layer, so the upstreams are PostgREST services (port 3000), not the databases themselves:

upstream supabase_read {
    server rest-replica:3000;
}

upstream supabase_write {
    server rest:3000;
}

map $request_method $upstream {
    GET     supabase_read;
    default supabase_write;
}

server {
    listen 80;
    location / {
        proxy_pass http://$upstream;
    }
}
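
If you route in application code instead of (or in addition to) a proxy, the same idea takes a few lines. The URLs below are placeholders for your primary and replica REST endpoints, and routing HEAD along with GET is an assumption (both are read-only):

```python
# A sketch of method-based read/write routing at the application layer.
# The base URLs are placeholders for your own REST endpoints.

PRIMARY_URL = "http://rest:3000"          # writes and non-idempotent requests
REPLICA_URL = "http://rest-replica:3000"  # safe, read-only requests

def pick_backend(method: str) -> str:
    """Route GET/HEAD to the replica; everything else to the primary."""
    return REPLICA_URL if method.upper() in ("GET", "HEAD") else PRIMARY_URL

print(pick_backend("GET"))   # http://rest-replica:3000
print(pick_backend("POST"))  # http://rest:3000
```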

Important Limitations

Not all Supabase services support read replicas:

Service                 Read Replica Support
----------------------  ------------------------
REST API (PostgREST)    Partial (GET only)
Auth                    No (writes required)
Storage                 No (writes required)
Realtime                No (needs write access)
Edge Functions          Query-dependent

This matches the Supabase Cloud limitations where only GET requests are supported for replicas.

Step 4: Monitor Replication Lag

Replication lag is the delay between a write on the primary and its availability on replicas. Monitor this closely:

-- On the primary
SELECT
  client_addr,
  state,
  sent_lsn,
  write_lsn,
  flush_lsn,
  replay_lsn,
  pg_wal_lsn_diff(sent_lsn, replay_lsn) AS lag_bytes
FROM pg_stat_replication;

Set up alerts when lag exceeds acceptable thresholds. For most applications, lag under 1 second is acceptable. For financial or consistency-critical applications, consider synchronous replication (with the performance trade-off).
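
Lag can also be computed client-side from two LSNs. An LSN like 0/16B3748 is a 64-bit WAL position written as two hex halves; the sketch below mirrors what pg_wal_lsn_diff() does on the server:

```python
# Compute replication lag in bytes from two LSN strings ("high/low" hex
# pairs), mirroring pg_wal_lsn_diff(). Sketch only.

def lsn_to_bytes(lsn: str) -> int:
    """Convert an LSN like '0/16B3748' to an absolute byte position."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)

def lag_bytes(sent_lsn: str, replay_lsn: str) -> int:
    """Bytes of WAL sent by the primary but not yet replayed on the replica."""
    return lsn_to_bytes(sent_lsn) - lsn_to_bytes(replay_lsn)

print(lag_bytes("0/3000060", "0/3000000"))  # 96
print(lag_bytes("1/0", "0/FFFFFFFF"))       # 1
```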

Add this to your monitoring stack:

# prometheus/postgres_exporter queries
pg_replication:
  query: |
    SELECT
      client_addr,
      pg_wal_lsn_diff(sent_lsn, replay_lsn) as lag_bytes
    FROM pg_stat_replication
  metrics:
    - client_addr:
        usage: LABEL
    - lag_bytes:
        usage: GAUGE

Step 5: Handle Schema Changes

This is the trickiest part of managing read replicas with Supabase. With logical replication, schema changes (DDL) on the primary don't propagate to replicas automatically.

For streaming replication, this isn't an issue—changes propagate automatically. But if you're using logical replication or have application-level routing, follow this process:

  1. Pause application traffic to the replica
  2. Apply schema changes to primary
  3. Wait for replication to catch up
  4. Apply same schema changes to replica (for logical replication)
  5. Resume traffic
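
Step 3 ("wait for replication to catch up") can be scripted. The sketch below keeps the lag probe pluggable so it stays self-contained; in practice the probe would run the pg_stat_replication query shown earlier and return lag in bytes:

```python
import time

# Poll a replication-lag probe until the replica has caught up or a timeout
# expires. The probe is pluggable; a real one would query pg_stat_replication.

def wait_for_catchup(probe, max_lag_bytes=0, timeout=60.0, interval=0.01):
    """Return True once probe() <= max_lag_bytes, False if the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe() <= max_lag_bytes:
            return True
        time.sleep(interval)
    return False

# Simulated probe: lag shrinks on every call.
lags = iter([4096, 512, 0])
print(wait_for_catchup(lambda: next(lags)))  # True
```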

For critical applications, consider using tools like pglogical or Supabase's migration workflow.

Kubernetes Deployments with CNPG

If you're running self-hosted Supabase on Kubernetes with CloudNative PostgreSQL (CNPG), you already have replication capabilities:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: supabase-postgres
spec:
  instances: 3
  primaryUpdateStrategy: unsupervised
  
  postgresql:
    parameters:
      wal_level: replica
      max_wal_senders: "10"
      
  bootstrap:
    initdb:
      database: postgres
      owner: supabase_admin

CNPG automatically manages read replicas and provides read-only service endpoints. Configure your Supabase services to use these endpoints:

services:
  rest:
    environment:
      PGRST_DB_URI: postgresql://authenticator:${POSTGRES_PASSWORD}@supabase-postgres-rw:5432/postgres
      
  rest-readonly:
    environment:
      PGRST_DB_URI: postgresql://authenticator:${POSTGRES_PASSWORD}@supabase-postgres-ro:5432/postgres

When Not to Use Read Replicas

Read replicas add operational complexity. They're not always the right solution:

  • Low traffic: If your primary handles the load comfortably, don't add replicas
  • Write-heavy workloads: Replicas only help with reads; writes still go to primary
  • Strict consistency requirements: Replication lag means replicas may serve stale data
  • Simple deployments: The operational overhead may outweigh benefits

Consider alternatives first:

  • Query optimization and indexing on the primary
  • Application-level caching for hot reads
  • Connection pooling (PgBouncer or Supavisor)
  • Vertical scaling of the primary

Simplifying Operations with Supascale

Managing read replicas manually requires significant PostgreSQL expertise and ongoing maintenance. Each replica needs monitoring, backup coordination, and careful handling during upgrades.

Supascale simplifies self-hosted Supabase operations by handling the complex infrastructure tasks. While you focus on your application, Supascale manages automated backups, custom domains, and OAuth configuration—reducing the operational burden that comes with self-hosting.

For teams that need read replicas, having a solid operational foundation makes the additional complexity manageable. Check our pricing to see how Supascale fits your self-hosting strategy.

Conclusion

Read replicas can dramatically improve your self-hosted Supabase performance, but they require careful planning and ongoing maintenance. Start with streaming replication for simplicity, monitor replication lag closely, and have a clear process for schema changes.

The key trade-offs to remember:

  • Replicas only help with read operations
  • Replication lag means eventual consistency
  • Schema changes require coordination
  • Not all Supabase services support replicas

For most growing applications, read replicas are worth the complexity when you've exhausted simpler optimizations. Just ensure you have proper backup procedures and monitoring in place before adding this infrastructure layer.

