One of the most common questions from developers deploying self-hosted Supabase is deceptively simple: "How do I run multiple projects?" On Supabase Cloud, you click a button and spin up a new project. Self-hosted? It's not that straightforward.
The Supabase Studio readme is explicit: "Studio is designed to work with existing deployments... It is not intended for managing the deployment and administration of projects—that's out of scope." This leaves teams to figure out multi-project strategies on their own.
This guide covers the practical approaches to managing multiple Supabase projects on your own infrastructure, including when each makes sense and the real trade-offs involved.
Why Multi-Project Management Is Harder Self-Hosted
Supabase Cloud's dashboard creates an illusion of simplicity. Behind the scenes, each project is a completely separate instance with its own PostgreSQL database, API servers, and authentication service. The dashboard just makes them appear unified.
When self-hosting, you're responsible for that infrastructure yourself. There's no built-in way to create multiple "projects" within a single deployment because that's not how the architecture works.
You have three fundamental approaches:
- Separate instances: One Docker Compose stack per project
- Schema isolation: Multiple logical projects in one database
- Hybrid approaches: Shared infrastructure with logical separation
Each has distinct implications for security, performance, cost, and operational complexity.
Approach 1: Separate Instances (Recommended for Production)
The safest and most scalable approach is running each project as an independent Supabase deployment. This mirrors how Supabase Cloud operates.
How It Works
Each project gets its own Docker Compose stack with a unique configuration:
projects/
├── project-alpha/
│   ├── docker-compose.yml
│   ├── .env
│   └── volumes/
├── project-beta/
│   ├── docker-compose.yml
│   ├── .env
│   └── volumes/
└── project-gamma/
    ├── docker-compose.yml
    ├── .env
    └── volumes/
To avoid port conflicts, each instance uses different port ranges. In your .env files:
# project-alpha/.env
KONG_HTTP_PORT=8000
STUDIO_PORT=3000
POSTGRES_PORT=5432

# project-beta/.env
KONG_HTTP_PORT=8100
STUDIO_PORT=3100
POSTGRES_PORT=5532

# project-gamma/.env
KONG_HTTP_PORT=8200
STUDIO_PORT=3200
POSTGRES_PORT=5632
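The 100-port spacing above can be generated rather than tracked by hand. A minimal sketch (the base ports and offset scheme are assumptions matching the examples, not a Supabase convention):

```shell
# Derive non-conflicting ports from a project index (0 = alpha, 1 = beta, ...)
project_ports() {
  local index=$1
  echo "KONG_HTTP_PORT=$((8000 + index * 100))"
  echo "STUDIO_PORT=$((3000 + index * 100))"
  echo "POSTGRES_PORT=$((5432 + index * 100))"
}

project_ports 1   # prints the project-beta ports: 8100, 3100, 5532
```

Redirect the output into each project's `.env` when scaffolding a new instance.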
A reverse proxy like Nginx or Traefik routes requests to the correct instance based on domain:
# Nginx configuration
server {
    server_name api.alpha.yourdomain.com;
    location / {
        proxy_pass http://localhost:8000;
    }
}

server {
    server_name api.beta.yourdomain.com;
    location / {
        proxy_pass http://localhost:8100;
    }
}
Advantages
- Complete isolation: Security vulnerability in one project can't affect others
- Independent scaling: Allocate resources based on each project's needs
- Clean backups: Each database backs up separately with clear boundaries
- Version flexibility: Upgrade projects independently, test new Supabase versions on one before rolling out
Disadvantages
- Resource overhead: Each instance runs its own PostgreSQL, PostgREST, GoTrue, Kong, and optional services
- Memory multiplication: A minimal Supabase stack needs ~1GB RAM; ten projects means ~10GB baseline
- Configuration sprawl: Managing secrets, SSL certificates, and updates across many instances
- Operational complexity: More moving parts to monitor and maintain
When to Use
Separate instances make sense when:
- Projects have different clients or business units with data isolation requirements
- You need independent scaling or resource allocation
- Security compliance requires infrastructure-level separation
- Projects are in different stages (dev, staging, production)
Approach 2: Schema Isolation (Single Database, Multiple Projects)
For smaller projects or internal tools, you can run multiple logical "projects" within a single Supabase instance using PostgreSQL schemas.
How It Works
Create separate schemas for each project's tables:
-- Create schemas for each project
CREATE SCHEMA project_alpha;
CREATE SCHEMA project_beta;
CREATE SCHEMA project_gamma;
-- Create tables in each schema
CREATE TABLE project_alpha.users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL
);

CREATE TABLE project_beta.users (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    email TEXT NOT NULL
);
Configure PostgREST to expose specific schemas by modifying PGRST_DB_SCHEMAS:
PGRST_DB_SCHEMAS=public,project_alpha,project_beta
Clients then select a schema per request with the Accept-Profile header (Content-Profile for writes); the first schema in the list is the default.
Row Level Security policies can reference the current schema or use custom claims to restrict access:
-- RLS policy using JWT claims
ALTER TABLE project_alpha.users ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users can only access their project"
ON project_alpha.users
USING (
    auth.jwt() ->> 'project' = 'alpha'
);
Advantages
- Resource efficiency: One PostgreSQL instance, one set of services
- Lower infrastructure costs: Single server can host many small projects
- Simpler operations: One stack to monitor, update, and backup
Disadvantages
- Shared failure domain: Database issue affects all projects
- Authentication complexity: GoTrue (auth) is designed for single-tenant use; sharing auth across projects requires custom implementation
- No per-project resource limits: One heavy query can impact all projects
- Backup granularity: You back up everything together; restoring a single project independently is awkward
- Storage sharing: Supabase Storage isn't designed for multi-tenant isolation
When to Use
Schema isolation works for:
- Internal tools and admin dashboards
- Microservices that share the same authentication
- Development and testing environments
- Projects you fully control with no external users
It's not appropriate for:
- Customer-facing applications with data isolation requirements
- Projects with different scaling needs
- Anything where a noisy neighbor could cause problems
Approach 3: Hybrid - Shared Database, Separate Auth
A middle ground separates the most critical component (authentication) while sharing the database:
                ┌─────────────────┐
                │  Reverse Proxy  │
                └────────┬────────┘
       ┌─────────────────┼─────────────────┐
       ▼                 ▼                 ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│  GoTrue (A)  │  │  GoTrue (B)  │  │  GoTrue (C)  │
│  Port 9999   │  │  Port 9998   │  │  Port 9997   │
└──────────────┘  └──────────────┘  └──────────────┘
       │                 │                 │
       └─────────────────┼─────────────────┘
                         ▼
                ┌─────────────────┐
                │   PostgreSQL    │
                │    (Shared)     │
                └─────────────────┘
Each project gets its own GoTrue instance with separate JWT secrets, but they connect to schemas in a shared database. This provides:
- Independent auth: Users and sessions are project-specific
- Shared infrastructure: One PostgreSQL instance to manage
- Clear data boundaries: Separate schemas with their own RLS policies
The complexity here is in the routing layer: you need to direct each project's auth requests to its own GoTrue instance, and scope its API requests to the right schema, based on the project context.
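Extending the Nginx pattern from Approach 1, per-project auth routing might look like this (the domains and GoTrue ports are assumptions matching the diagram above):

```nginx
# Each project's auth domain proxies to its own GoTrue instance
server {
    server_name auth.alpha.yourdomain.com;
    location / {
        proxy_pass http://localhost:9999;
    }
}

server {
    server_name auth.beta.yourdomain.com;
    location / {
        proxy_pass http://localhost:9998;
    }
}
```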
Managing Multiple Instances at Scale
Running three projects manually is manageable. Running ten? Twenty? You need automation.
Configuration Management
Store your configurations in version control with environment-specific overrides:
supabase-infrastructure/
├── base/
│   ├── docker-compose.yml   # Shared configuration
│   └── kong.yml             # Common routing rules
├── projects/
│   ├── alpha/
│   │   ├── .env.production
│   │   └── overrides.yml
│   └── beta/
│       ├── .env.production
│       └── overrides.yml
└── scripts/
    ├── deploy.sh
    └── backup-all.sh
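With this layout, Docker Compose can merge the shared base file with each project's overrides. A sketch, written as a dry run so the merged command can be inspected (paths follow the tree above; drop the leading echo to actually deploy):

```shell
# Print the merged compose invocation for one project (dry run)
compose_cmd() {
  local project=$1
  echo docker compose \
    -f base/docker-compose.yml \
    -f "projects/$project/overrides.yml" \
    --env-file "projects/$project/.env.production" \
    up -d
}

compose_cmd alpha
```

Later `-f` files override earlier ones, so each project only declares what differs from the base.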
Deployment Automation
A deployment script reduces manual errors:
#!/bin/bash
# deploy.sh - Deploy or update a specific project
set -euo pipefail

PROJECT=${1:-}
ACTION=${2:-up}

if [ -z "$PROJECT" ]; then
    echo "Usage: ./deploy.sh <project-name> [up|down|pull]"
    exit 1
fi

cd "projects/$PROJECT" || exit 1

case $ACTION in
    up)
        docker compose pull
        docker compose up -d
        ;;
    down)
        docker compose down
        ;;
    pull)
        docker compose pull
        ;;
    *)
        echo "Unknown action: $ACTION"
        exit 1
        ;;
esac
Centralized Monitoring
With multiple instances, centralized monitoring becomes essential. Each Supabase instance exposes health endpoints:
- Kong: http://localhost:8000/health
- PostgREST: http://localhost:3000/
- GoTrue: http://localhost:9999/health
A monitoring stack like Prometheus can scrape all instances:
# prometheus.yml
scrape_configs:
  - job_name: 'supabase-alpha'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8000']

  - job_name: 'supabase-beta'
    metrics_path: /metrics
    static_configs:
      - targets: ['localhost:8100']
For a complete monitoring setup, see our observability guide.
Backup Coordination
Each project needs its own backup schedule. A wrapper script can coordinate:
#!/bin/bash
# backup-all.sh
PROJECTS=("alpha" "beta" "gamma")
BACKUP_ROOT="/backups"
DATE=$(date +%Y%m%d)

for project in "${PROJECTS[@]}"; do
    echo "Backing up $project..."
    mkdir -p "$BACKUP_ROOT/$project"

    # Verify the dump succeeded before declaring victory
    if docker exec "supabase-db-$project" pg_dump \
        -U postgres \
        --format=custom \
        postgres > "$BACKUP_ROOT/$project/db_$DATE.dump"; then
        echo "$project backup successful"
    else
        echo "ERROR: $project backup failed"
        # Send alert
    fi
done
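Dumps accumulate quickly across projects, so pair the backup job with a retention policy. A minimal pruning sketch (assuming the db_YYYYMMDD.dump naming used above; relies on GNU head's negative line count):

```shell
# Keep only the newest $2 dumps in a project's backup directory
prune_backups() {
  local dir=$1 keep=$2
  # Date-stamped names sort chronologically; delete all but the last $keep
  ls -1 "$dir"/db_*.dump 2>/dev/null | sort | head -n -"$keep" | xargs -r rm --
}
```

For example, `prune_backups /backups/alpha 14` would keep two weeks of daily dumps.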
For comprehensive backup strategies, check our backup and restore guide.
Resource Planning for Multiple Projects
Each Supabase instance consumes baseline resources even when idle. Plan accordingly:
| Service | Minimum RAM | Typical RAM | Required per instance? |
|---|---|---|---|
| PostgreSQL | 256 MB | 1-2 GB | Yes |
| PostgREST | 64 MB | 128 MB | Yes |
| GoTrue | 64 MB | 128 MB | Yes |
| Kong | 128 MB | 256 MB | Yes |
| Realtime | 128 MB | 256 MB | Optional |
| Storage | 128 MB | 256 MB | Optional |
| Studio | 256 MB | 512 MB | Optional |
A minimal stack (PostgreSQL + PostgREST + GoTrue + Kong) needs roughly 700MB-1GB per project. For a server with 8GB RAM, you can comfortably run 4-5 production projects with headroom for growth.
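That sizing can be sanity-checked with quick arithmetic (the per-stack and OS-reserve figures below are assumptions drawn from the table, not measurements):

```shell
# Upper bound on minimal stacks for a given amount of RAM, reserving 2GB for
# the OS and assuming ~1GB per stack; run fewer than this to keep headroom
max_projects() {
  local total_mb=$1
  local os_reserve_mb=2048 per_project_mb=1024
  echo $(( (total_mb - os_reserve_mb) / per_project_mb ))
}

max_projects 8192   # 8GB server: ceiling of 6, so 4-5 leaves growth headroom
```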
See our VPS provider comparison for server recommendations based on project count.
The Operational Reality
The true cost of self-hosting includes the time spent managing deployments. Each additional project adds:
- Configuration and secrets management
- Backup verification
- Update coordination
- Troubleshooting when things break
- SSL certificate renewals
For teams running a handful of projects, scripting handles this well. Beyond that, the operational burden grows non-linearly.
Simplifying Multi-Project Management
Managing multiple self-hosted Supabase instances manually is achievable but time-intensive. Supascale exists specifically to reduce this overhead.
With Supascale, you deploy unlimited projects from a single dashboard. Each project is fully isolated with:
- Automated deployments with proper security configuration
- Built-in backups to S3-compatible storage
- Custom domains with automatic SSL
- OAuth configuration through a UI
- Selective services: Only run what each project needs
For a one-time $39.99 purchase, you skip the scripting, the port management, and the configuration sprawl. If you're spending hours managing Docker Compose files across projects, see if Supascale fits your workflow.
Summary
Managing multiple Supabase projects on self-hosted infrastructure requires choosing the right isolation strategy:
- Separate instances: Best for production, provides complete isolation at higher resource cost
- Schema isolation: Works for small internal projects sharing infrastructure
- Hybrid approaches: Balance isolation needs with resource efficiency
The operational complexity scales with project count. For a few projects, scripted management works. For many projects or teams wanting to focus on building rather than infrastructure, purpose-built management tools reduce the burden.
