Geographic Redundancy for Self-Hosted Supabase: A Multi-Region Guide

Learn how to deploy self-hosted Supabase across multiple regions for disaster recovery, lower latency, and compliance with data residency requirements.


The recent blocking of Supabase in India reminded the developer community of a harsh reality: single-region deployments are a single point of failure. Whether it's government action, regional outages, or natural disasters, having your entire infrastructure in one location is a risk most production applications can't afford.

If you're self-hosting Supabase, you have the flexibility to deploy across multiple geographic regions, something Supabase Cloud makes expensive with its read replica pricing. This guide walks through building geographic redundancy for your self-hosted Supabase deployment.

Why Geographic Redundancy Matters

Geographic redundancy isn't just about uptime statistics. Here's what multi-region deployment actually buys you:

Disaster Recovery: When your primary region goes down—whether from infrastructure failure, cyberattack, or regulatory action—your application stays online. Users get routed to the nearest healthy region automatically.

Lower Latency: A user in Singapore connecting to a Frankfurt server experiences 150-200ms latency on every database query. Deploy a read replica in Singapore, and that drops to 20-40ms. For real-time applications, the difference is immediately noticeable.

Data Residency Compliance: GDPR, HIPAA, and country-specific regulations often require data to stay within certain geographic boundaries. In France, many companies must host health data on HDS-compliant infrastructure. Multi-region deployment lets you satisfy these requirements while maintaining a unified architecture.

Regional Isolation: If one region experiences issues—whether technical or regulatory—your other regions continue operating independently.

The Architecture Options

There are three main approaches to geographic redundancy with self-hosted Supabase, each with different trade-offs.

Option 1: Primary-Replica with Streaming Replication

This is the most straightforward approach. You have one primary region that handles all writes, and one or more replica regions that handle reads.

┌─────────────────────┐     Streaming Replication     ┌─────────────────────┐
│   Primary Region    │ ────────────────────────────> │   Replica Region    │
│   (EU-Frankfurt)    │                               │   (Asia-Singapore)  │
│   - Full Supabase   │                               │   - Full Supabase   │
│   - Reads + Writes  │                               │   - Reads Only      │
└─────────────────────┘                               └─────────────────────┘

Pros:

  • Simple to set up with PostgreSQL's built-in streaming replication
  • Supabase services work normally with read replicas
  • Replication lag is typically under a second

Cons:

  • Single write region creates a bottleneck
  • Failover requires manual intervention or additional tooling
  • Cross-region writes still have latency

Option 2: Active-Active with Conflict Resolution

Both regions accept writes, and conflicts are resolved through logical replication and application-level rules.

Pros:

  • No single point of failure for writes
  • Users always write to the nearest region

Cons:

  • Significantly more complex
  • Requires careful schema design to avoid conflicts
  • Not all Supabase features work seamlessly (Realtime, in particular, gets complicated)
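
One schema pattern that sidesteps most insert conflicts (a sketch, not a complete conflict-resolution strategy): use locally generated UUID primary keys so concurrent inserts in different regions can never collide on the same id. The table and column names here are illustrative:

```sql
-- UUIDs generated independently in each region never collide,
-- unlike sequences, which would hand out overlapping values
CREATE TABLE orders (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  -- recording the origin region helps when resolving update conflicts
  origin_region text NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);
```

Updates to the same row from two regions still need an explicit rule, such as last-writer-wins on a timestamp column.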

Recommendation: Unless you have specific requirements that demand active-active, stick with primary-replica. The complexity isn't worth it for most applications.

Option 3: Independent Instances with Application-Level Routing

Each region runs a completely independent Supabase instance. Your application routes users to the appropriate region based on their location or account settings.

Pros:

  • Complete isolation between regions
  • Perfect for data residency requirements
  • Simpler operationally—each instance is independent

Cons:

  • No data sharing between regions
  • Users can't move between regions easily
  • More infrastructure to manage
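
The routing logic for independent instances can live entirely in your application. A minimal sketch in Python (the region names, URLs, and `REGION_MAP` helper are illustrative placeholders, not part of Supabase):

```python
# Map coarse user regions to independent Supabase instances.
# Region keys and URLs are placeholders for your own deployment.
REGION_MAP = {
    "eu": "https://frankfurt.supabase.yourdomain.com",
    "asia": "https://singapore.supabase.yourdomain.com",
}
DEFAULT_REGION = "eu"

def instance_for(user_region: str) -> str:
    """Return the base URL of the Supabase instance serving this user."""
    return REGION_MAP.get(user_region, REGION_MAP[DEFAULT_REGION])

print(instance_for("asia"))  # https://singapore.supabase.yourdomain.com
print(instance_for("us"))    # unknown region falls back to the EU instance
```

Because each instance is fully independent, the same function also decides which Auth and Storage endpoints a user talks to.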

Setting Up Primary-Replica Replication

Let's implement Option 1, as it provides the best balance of reliability and complexity for most teams.

Prerequisites

  • Two VPS instances in different regions (see our VPS provider guide for recommendations)
  • Each instance meeting system requirements
  • Secure network connectivity between regions (WireGuard VPN or private networking)

Step 1: Configure the Primary Instance

First, deploy Supabase on your primary region following the standard installation process. Then modify PostgreSQL to enable replication.

Edit your postgresql.conf (typically in the Supabase data directory):

# Replication settings
wal_level = replica
max_wal_senders = 5
wal_keep_size = 1GB
hot_standby = on

# Network settings for replication
# '*' is convenient for testing; in production, restrict this
# to the VPN or private-network interface the replica connects over
listen_addresses = '*'

Create a replication user with the necessary permissions:

CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'your-secure-password';

Update pg_hba.conf to allow the replica to connect:

# Allow replication connections from replica server
host replication replicator replica_ip/32 scram-sha-256

Step 2: Set Up the Replica Instance

On your replica server, you'll need to create a base backup from the primary and configure it as a standby.

# Stop PostgreSQL on the replica
sudo systemctl stop postgresql

# Clear the data directory
sudo rm -rf /var/lib/postgresql/data/*

# Create base backup from primary
pg_basebackup -h primary_ip -D /var/lib/postgresql/data \
  -U replicator -P -v -R -X stream -C -S replica_slot

The -R flag automatically creates the standby.signal file and writes the primary's connection info, while -C -S creates a replication slot named replica_slot so the primary retains WAL the replica hasn't yet consumed. Verify the replica is receiving changes:

SELECT * FROM pg_stat_wal_receiver;

-- Replay delay in wall-clock time (run on the replica)
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;

Step 3: Deploy Supabase Services on the Replica

Install Supabase on the replica server, but configure it to connect to the replicated PostgreSQL instance. The key differences:

  1. Point all services at the replica PostgreSQL
  2. Configure PostgREST as read-only
  3. Set up Kong to reject write operations (POST, PUT, DELETE, PATCH)

Here's an example Kong declarative configuration (kong.yml) that rejects writes to the REST API; read requests continue to match your existing REST route:

_format_version: "2.1"
services:
  - name: rest-read-only
    url: http://rest:3000/
    routes:
      - name: rest-writes
        paths: ["/rest/v1/"]
        methods: ["POST", "PUT", "PATCH", "DELETE"]
        plugins:
          - name: request-termination
            config:
              status_code: 503
              message: "This is a read-only replica"

Step 4: Set Up DNS-Based Routing

Use GeoDNS or a load balancer with geographic routing to direct users to the nearest region. Options include:

  • Cloudflare: Load balancing with geographic routing
  • AWS Route 53: Geolocation routing policies
  • Self-hosted: PowerDNS with GeoIP module

Example Cloudflare configuration logic:

IF user_region == "Asia" OR user_region == "Oceania"
  → Route to singapore.supabase.yourdomain.com
ELSE
  → Route to frankfurt.supabase.yourdomain.com

For write operations, always route to the primary. Your application should be aware of which operations need the primary endpoint.
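
In practice, endpoint selection can be a small helper in your application code. A sketch in Python (the URLs and the read-only method set are assumptions for illustration):

```python
# Writes always target the primary; reads prefer the nearest replica.
PRIMARY_URL = "https://frankfurt.supabase.yourdomain.com"
REPLICA_URLS = {
    "asia": "https://singapore.supabase.yourdomain.com",
}

READ_METHODS = {"GET", "HEAD"}

def endpoint_for(method: str, user_region: str) -> str:
    """Route reads to the nearest replica, everything else to the primary."""
    if method.upper() in READ_METHODS:
        return REPLICA_URLS.get(user_region, PRIMARY_URL)
    return PRIMARY_URL

print(endpoint_for("GET", "asia"))   # nearest replica
print(endpoint_for("POST", "asia"))  # primary, regardless of user location
```

A user in Asia reads from Singapore but writes to Frankfurt; users in regions without a replica fall back to the primary for everything.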

Handling Failover

When the primary goes down, you need to promote a replica. This isn't automatic with basic streaming replication.

Manual Promotion

On the replica you want to promote:

sudo -u postgres pg_ctl promote -D /var/lib/postgresql/data

This removes the standby.signal file and allows writes. Update your DNS to point to the new primary.

Automated Failover with Patroni

For production deployments, consider Patroni for automated failover. Patroni monitors your PostgreSQL cluster and automatically promotes a replica when the primary fails.

Basic Patroni setup requires:

  • A distributed consensus store (etcd, Consul, or ZooKeeper)
  • Patroni agents on each PostgreSQL node
  • Load balancer integration for automatic routing

This adds operational complexity but eliminates the need for manual intervention during outages.
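
A minimal Patroni node configuration might look like the following; the cluster name, addresses, and credentials are placeholders, and the Patroni documentation covers the full option set:

```yaml
scope: supabase-cluster      # shared name for all nodes in this cluster
name: frankfurt-1            # unique per node
restapi:
  listen: 0.0.0.0:8008
  connect_address: 10.0.0.1:8008
etcd3:
  hosts: 10.0.0.10:2379      # the distributed consensus store
postgresql:
  listen: 0.0.0.0:5432
  connect_address: 10.0.0.1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    replication:
      username: replicator
      password: your-secure-password
```

Each node runs the same file with its own name and addresses; Patroni uses the consensus store to elect a leader and reconfigures replicas automatically after a failover.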

Monitoring Replication Health

You can't fix what you can't see. Set up monitoring for:

Replication Lag: Time between a write on the primary and its appearance on replicas.

-- On the primary
SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS lag_bytes
FROM pg_stat_replication;
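
The lag_bytes arithmetic works because a pg_lsn value like 0/3000060 is a 64-bit byte position written as two hex halves. If you export LSNs to an external monitoring system as text, you can reproduce the computation yourself; a small sketch (the function names are ours):

```python
def parse_lsn(lsn: str) -> int:
    """Convert a pg_lsn string like '0/3000060' to a 64-bit byte position."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)

def lag_bytes(sent_lsn: str, replay_lsn: str) -> int:
    """Bytes of WAL the replica has yet to replay."""
    return parse_lsn(sent_lsn) - parse_lsn(replay_lsn)

print(lag_bytes("0/3000060", "0/3000000"))  # 96 bytes behind
```

Alert on sustained growth of this number rather than its absolute value; brief spikes during bulk writes are normal.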

Connection Status: Is the replica connected and receiving data?

WAL Size: If replication falls behind, WAL files accumulate on the primary. Set alerts for unusual growth.

Integrate these metrics with your existing monitoring stack. A simple Prometheus exporter like postgres_exporter captures these metrics automatically.

Cost Considerations

Running multiple regions doubles (or triples) your infrastructure costs. Here's a realistic breakdown:

Component                 Single Region   Two Regions
VPS (4 vCPU, 8GB RAM)     $40/month       $80/month
S3-compatible storage     $5/month        $10/month
Inter-region bandwidth    -               $10-30/month
GeoDNS/Load Balancer      -               $20-50/month
Total                     ~$45/month      ~$120-170/month

Compare this to Supabase Cloud's read replica pricing ($85/month per replica on Pro plan), and self-hosting remains competitive—especially when you factor in the full feature access and unlimited database growth.

For organizations where downtime costs thousands per hour, the additional $75-125/month for geographic redundancy is trivial insurance.

When Geographic Redundancy Isn't Worth It

Not every application needs multi-region deployment. Skip it if:

  • Your users are concentrated in one geographic area
  • Your application tolerates occasional downtime
  • You're in early development and optimizing for velocity
  • Your compliance requirements don't mandate it

Start with solid backup and restore procedures first. Backups with tested restore procedures provide disaster recovery without the operational complexity of multi-region deployment. You can always add geographic redundancy later.

Supascale and Multi-Region Management

Managing multiple Supabase instances across regions multiplies operational overhead. Each instance needs backups configured, SSL certificates maintained, and OAuth providers set up.

Supascale simplifies this by providing a unified interface for managing multiple self-hosted Supabase instances. Configure automated S3 backups, set up custom domains with automatic SSL, and manage OAuth providers—all from one dashboard.

For teams running Supabase across multiple regions, this centralized management reduces the operational burden significantly. Check our pricing for details on managing multiple projects.

Further Reading