If you're running self-hosted Supabase, there's a silent performance killer lurking in your database: dead tuples. PostgreSQL's autovacuum process is supposed to clean these up automatically, but the default settings are designed for small, low-write workloads. In production, those defaults can leave your database bloated, slow, and at risk of transaction ID wraparound—a scenario where PostgreSQL stops accepting writes entirely.
This guide covers everything you need to know about tuning autovacuum for self-hosted Supabase deployments, from understanding why it matters to implementing per-table configurations for high-write workloads.
## Why Autovacuum Matters for Self-Hosted Supabase
PostgreSQL uses Multi-Version Concurrency Control (MVCC), which means updates and deletes don't remove old row versions immediately. Instead, they create new versions and mark old ones as "dead tuples." Over time, these dead tuples accumulate and cause:
- Table bloat: Wasted disk space that grows progressively
- Degraded query performance: Index scans hit dead rows, slowing reads
- Increased I/O: More disk reads for the same logical data
- Transaction ID wraparound: When you hit ~2 billion transactions without cleanup, PostgreSQL stops accepting writes
Unlike Supabase Cloud, which handles vacuum tuning automatically, self-hosted deployments require you to monitor and configure this yourself.
## Understanding Default Autovacuum Behavior

By default, autovacuum triggers when a table's dead tuples exceed this threshold:

```
autovacuum_vacuum_threshold + (autovacuum_vacuum_scale_factor × table_rows)
```

With PostgreSQL defaults:

- `autovacuum_vacuum_threshold` = 50
- `autovacuum_vacuum_scale_factor` = 0.2 (20%)
For a table with 1 million rows, autovacuum won't start until 200,050 dead tuples accumulate. That's 20% of your table becoming dead before cleanup begins. For high-write applications—user sessions, real-time events, audit logs—this is far too permissive.
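The interaction of the two knobs is easy to sanity-check. Here is a small Python sketch of the documented trigger formula (not PostgreSQL source, just the arithmetic):

```python
def autovacuum_trigger(table_rows, threshold=50, scale_factor=0.2):
    """Dead tuples required before autovacuum vacuums a table,
    per the formula: threshold + scale_factor * rows."""
    return threshold + scale_factor * table_rows

# PostgreSQL defaults on a 1M-row table: 200,050 dead tuples
default_trigger = autovacuum_trigger(1_000_000)

# Lowering the scale factor to 5% cuts that to 50,050
tuned_trigger = autovacuum_trigger(1_000_000, scale_factor=0.05)
```

The fixed `threshold` barely matters at scale; for large tables, `scale_factor` dominates, which is why it is the main tuning target.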
## Monitoring Dead Tuples in Self-Hosted Supabase

Before tuning, you need visibility into your current state. Connect to your Supabase PostgreSQL instance and run:

```sql
SELECT
  schemaname,
  relname AS table_name,
  n_live_tup AS live_tuples,
  n_dead_tup AS dead_tuples,
  ROUND(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_pct,
  last_vacuum,
  last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000
ORDER BY n_dead_tup DESC;
```
Key indicators that autovacuum needs tuning:
- `dead_pct` consistently above 10-20% means autovacuum can't keep up
- `last_autovacuum` is NULL or very old on high-write tables
- `dead_tuples` exceeding `live_tuples` indicates severe bloat
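The `dead_pct` column is just dead tuples over total tuples. The same arithmetic in Python, handy for interpreting numbers you pull out of the query (the row counts below are illustrative):

```python
def dead_pct(live_tuples, dead_tuples):
    """Percentage of a table's tuples that are dead (mirrors the
    ROUND(...) expression in the monitoring query)."""
    total = live_tuples + dead_tuples
    if total == 0:
        return None  # same guard as NULLIF in the SQL
    return round(100.0 * dead_tuples / total, 2)

# 150k dead tuples against 850k live: 15% dead,
# squarely in the "autovacuum can't keep up" band
ratio = dead_pct(850_000, 150_000)
```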
You can also check transaction ID age to gauge wraparound risk:
```sql
SELECT
  datname,
  age(datfrozenxid) AS xid_age,
  ROUND(100.0 * age(datfrozenxid) / 2000000000, 2) AS pct_to_wraparound
FROM pg_database
WHERE datname NOT IN ('template0', 'template1')
ORDER BY age(datfrozenxid) DESC;
```
If pct_to_wraparound exceeds 50%, you should investigate immediately. At 95%, PostgreSQL will start aggressive anti-wraparound vacuums that can cause significant performance degradation.
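The percentage math in that query is worth internalizing; here it is as a standalone sketch (the 2-billion constant approximates the ~2^31 transaction ID ceiling):

```python
XID_WRAPAROUND = 2_000_000_000  # approximate practical XID limit (~2^31)

def pct_to_wraparound(xid_age):
    """Same math as the query: how far a database has burned
    through its transaction ID space, as a percentage."""
    return round(100.0 * xid_age / XID_WRAPAROUND, 2)

# A database whose oldest unfrozen XID is 1.1B transactions old
# is well past the 50% investigate-now line
danger = pct_to_wraparound(1_100_000_000)
```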
## Configuring Autovacuum for Docker-Based Supabase

For self-hosted Supabase using Docker Compose, you'll configure autovacuum in your PostgreSQL container. Add these parameters to your docker-compose.yml under the db service:

```yaml
services:
  db:
    image: supabase/postgres:15.6.1.143
    command:
      - postgres
      - -c
      - autovacuum=on
      - -c
      - autovacuum_max_workers=4
      - -c
      - autovacuum_naptime=30s
      - -c
      - autovacuum_vacuum_scale_factor=0.05
      - -c
      - autovacuum_vacuum_threshold=50
      - -c
      - autovacuum_vacuum_cost_delay=2ms
      - -c
      - autovacuum_vacuum_cost_limit=1000
      - -c
      - log_autovacuum_min_duration=1000
```
### Key Parameters Explained

| Parameter | Default | Recommended | Purpose |
|---|---|---|---|
| `autovacuum_max_workers` | 3 | 4-6 | Parallel vacuum workers |
| `autovacuum_naptime` | 60s | 30s | Time between autovacuum checks |
| `autovacuum_vacuum_scale_factor` | 0.2 | 0.05 | % of table triggering vacuum |
| `autovacuum_vacuum_threshold` | 50 | 50-100 | Minimum dead tuples to trigger |
| `autovacuum_vacuum_cost_delay` | 2ms | 0-2ms | Delay between vacuum I/O |
| `autovacuum_vacuum_cost_limit` | 200 | 800-1000 | I/O credits per cycle |
Setting `autovacuum_vacuum_cost_delay` to 0 disables throttling entirely, making autovacuum run as fast as a manual VACUUM at the cost of higher I/O usage. For dedicated database servers, this is usually acceptable.
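To build intuition for what the cost knobs mean in throughput terms, here is a rough back-of-envelope sketch. It is my own simplification: it assumes every page vacuum touches is dirtied at the default `vacuum_cost_page_dirty = 20` cost, and ignores the cheaper buffer-hit and buffer-miss cases:

```python
def max_vacuum_throughput_mb(cost_limit, cost_delay_ms,
                             page_cost=20, page_kb=8):
    """Rough upper bound on MB/s autovacuum can process before the
    cost-based throttle forces it to sleep. Autovacuum accrues up to
    `cost_limit` credits, then sleeps `cost_delay_ms`; each page is
    charged `page_cost` credits (worst case: a dirtied page)."""
    if cost_delay_ms == 0:
        return float("inf")  # throttling disabled
    credits_per_sec = cost_limit / (cost_delay_ms / 1000)
    pages_per_sec = credits_per_sec / page_cost
    return pages_per_sec * page_kb / 1024

# Stock settings (200 credits, 2 ms): ~39 MB/s worst case
stock = max_vacuum_throughput_mb(200, 2)

# Tuned (1000 credits, 2 ms): ~195 MB/s
tuned = max_vacuum_throughput_mb(1000, 2)
```

Even as an estimate, this shows why raising `autovacuum_vacuum_cost_limit` matters on multi-hundred-gigabyte tables: at the defaults, a single pass can take hours purely due to throttling.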
## Per-Table Autovacuum Configuration
The real power comes from per-table settings. Your high-write tables—sessions, events, notifications—need aggressive tuning while static reference tables can use defaults.
### Identify High-Write Tables

First, find tables with the most modifications:

```sql
SELECT
  schemaname,
  relname,
  n_tup_ins AS inserts,
  n_tup_upd AS updates,
  n_tup_del AS deletes,
  n_tup_ins + n_tup_upd + n_tup_del AS total_writes
FROM pg_stat_user_tables
ORDER BY total_writes DESC
LIMIT 20;
```
### Configure Aggressive Autovacuum for Hot Tables

For tables with high write volume:

```sql
-- Sessions table: frequent inserts and deletes
ALTER TABLE auth.sessions SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_vacuum_threshold = 1000,
  autovacuum_analyze_scale_factor = 0.005
);

-- Audit logs: append-only, rarely updated
ALTER TABLE audit.logs SET (
  autovacuum_vacuum_scale_factor = 0.02,
  autovacuum_vacuum_threshold = 5000
);

-- Realtime messages: high churn
ALTER TABLE realtime.messages SET (
  autovacuum_vacuum_scale_factor = 0.005,
  autovacuum_vacuum_threshold = 500,
  autovacuum_vacuum_cost_delay = 0
);
```
With these settings:
- Sessions table triggers vacuum at 1% dead tuples (instead of 20%)
- Audit logs clean up at 2% (good for append-heavy workloads)
- Realtime messages vacuum aggressively at 0.5% with no delay
### Verify Per-Table Settings

```sql
SELECT
  relname,
  reloptions
FROM pg_class
WHERE reloptions IS NOT NULL
  AND relnamespace NOT IN (
    SELECT oid FROM pg_namespace
    WHERE nspname IN ('pg_catalog', 'information_schema')
  );
```
## Handling Autovacuum Blockers
Autovacuum can be blocked by long-running transactions, which prevent dead tuple removal. Common culprits in Supabase deployments:
- Forgotten transactions: Connections left "idle in transaction"
- Long-running analytics queries: BI tools holding snapshots
- Slow migrations: Unoptimized schema changes
### Detect Blocking Transactions

```sql
SELECT
  pid,
  usename,
  state,
  backend_xid,
  backend_xmin,
  query_start,
  NOW() - query_start AS duration,
  LEFT(query, 80) AS query_preview
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND query_start < NOW() - INTERVAL '5 minutes'
ORDER BY query_start;
```
### Configure Transaction Timeouts

Add to your PostgreSQL configuration:

```yaml
- -c
- idle_in_transaction_session_timeout=300000 # 5 minutes
- -c
- statement_timeout=60000 # 1 minute default
```
This automatically terminates sessions that remain idle in a transaction for over 5 minutes, preventing autovacuum from being blocked indefinitely.
## Reducing Dead Tuple Generation
Prevention is better than cleanup. These strategies reduce dead tuple accumulation:
### Use Fillfactor for Hot Update Tables

Tables with frequent updates to the same rows benefit from reduced fillfactor:

```sql
ALTER TABLE user_presence SET (fillfactor = 70);
```
This reserves 30% of each page for new row versions, enabling PostgreSQL's HOT (Heap-Only Tuple) updates: when the new version fits on the same page and no indexed column changes, the update skips creating new index entries, and the dead version can be cleaned up cheaply without a full vacuum.
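You can check whether fillfactor tuning is paying off by comparing HOT updates to total updates in `pg_stat_user_tables` (the `n_tup_hot_upd` and `n_tup_upd` columns). A tiny helper for the arithmetic, with illustrative counter values:

```python
def hot_update_pct(n_tup_upd, n_tup_hot_upd):
    """Share of updates that were Heap-Only Tuple updates.
    Both counters come from pg_stat_user_tables."""
    if n_tup_upd == 0:
        return 0.0
    return round(100.0 * n_tup_hot_upd / n_tup_upd, 1)

# Illustrative counters: 90% of updates avoided index maintenance,
# a sign the fillfactor headroom is being used
ratio = hot_update_pct(120_000, 108_000)
```

A persistently low ratio on a hot-update table suggests either that pages are too full for HOT (lower fillfactor further) or that updates touch indexed columns, which disqualifies them from HOT regardless.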
### Batch Updates When Possible

Instead of:

```javascript
// Bad: 100 separate updates
for (const user of users) {
  await supabase.from('users').update({ last_seen: new Date() }).eq('id', user.id)
}
```

Use:

```javascript
// Better: single batched upsert
await supabase.from('users').upsert(
  users.map(u => ({ id: u.id, last_seen: new Date() })),
  { onConflict: 'id' }
)
```
## Manual Vacuum Operations

Sometimes you need to run VACUUM manually, especially after bulk operations or when recovering from bloat.

### Standard Vacuum (Non-Blocking)

```sql
-- Specific table
VACUUM (VERBOSE, ANALYZE) public.large_table;

-- All tables
VACUUM (VERBOSE, ANALYZE);
```
### Full Vacuum (Blocking, Use Carefully)

VACUUM FULL reclaims all dead space but locks the table exclusively:

```sql
-- Only during maintenance windows
VACUUM FULL public.bloated_table;
```

For production systems, prefer pg_repack:

```sql
-- Non-blocking table repack
CREATE EXTENSION IF NOT EXISTS pg_repack;
-- Then run from the command line:
-- pg_repack -d your_database -t public.bloated_table
```
## Monitoring with Supascale
Managing autovacuum across multiple self-hosted Supabase projects becomes complex quickly. Supascale provides centralized monitoring and management for your self-hosted instances, including:
- Database health dashboards showing dead tuple percentages
- Automated backups via S3-compatible storage that capture consistent snapshots
- One-click project deployment with sensible default configurations
While you'll still need to tune autovacuum based on your specific workload, having visibility across all your projects in one place makes proactive maintenance manageable.
## Autovacuum Tuning Checklist

Before deploying to production, verify:

- [ ] `track_counts = on` (required for autovacuum to work)
- [ ] Global `autovacuum_vacuum_scale_factor` reduced to 0.05-0.1
- [ ] High-write tables have per-table autovacuum settings
- [ ] `idle_in_transaction_session_timeout` configured
- [ ] `log_autovacuum_min_duration` set to capture slow vacuums
- [ ] Monitoring in place for dead tuple percentage and XID age
- [ ] Regular review of `pg_stat_user_tables` for bloat patterns
## Further Reading
- PostgreSQL Performance Tuning for Self-Hosted Supabase - Comprehensive performance optimization
- Connection Pooling for Self-Hosted Supabase - Managing connection overhead
- Monitoring Self-Hosted Supabase - Building observability dashboards
- Supabase Documentation: Performance Tuning - Official guidance
Autovacuum isn't glamorous work, but getting it right is the difference between a database that stays fast under load and one that gradually degrades until it stops accepting writes. For self-hosted Supabase deployments, this is operational responsibility that shifts from Supabase's managed infrastructure to you—but with the right monitoring and configuration, it's entirely manageable.
