Building Real-Time Collaborative Apps with Self-Hosted Supabase

Learn to implement presence tracking, broadcast messaging, and live collaboration features for self-hosted Supabase deployments.


Building collaborative features like live cursors, typing indicators, and real-time presence used to require complex infrastructure. With self-hosted Supabase, you can implement these multiplayer experiences while maintaining full control over your data and infrastructure.

Supabase Realtime provides three mechanisms for pushing updates to clients: Broadcast for low-latency pub/sub messaging, Presence for tracking connected users and their state, and Postgres Changes for database-driven subscriptions. This guide focuses on the first two—the building blocks of real-time collaboration.

Understanding the Realtime Architecture

Supabase Realtime is an Elixir server built with Phoenix Framework. It listens to Postgres changes via logical replication and broadcasts updates through WebSockets. For self-hosted deployments, this runs as part of your Docker Compose stack alongside the other Supabase services.

The architecture splits into three distinct features:

Broadcast lets any connected client send messages to a channel. All other clients on that channel receive the message almost instantly, regardless of geographic location; depending on network distance, latency is typically in the tens to low hundreds of milliseconds.

Presence maintains a shared state of all connected clients. Each client publishes a small payload (cursor position, user status, typing state), and Supabase keeps a merged view accessible to everyone on the channel.

Postgres Changes subscribes to database INSERT, UPDATE, and DELETE events. It's powerful, but Supabase currently recommends Broadcast for large fan-out at scale, reserving Postgres Changes for simpler, lower-volume use cases.

Setting Up Realtime for Self-Hosted Supabase

Before implementing collaborative features, verify your Realtime service is properly configured. If you're starting fresh, check the environment variables guide for the complete configuration reference.

The key environment variables for Realtime include:

# docker-compose.yml or .env
REALTIME_DB_HOST=db
REALTIME_DB_PORT=5432
REALTIME_DB_NAME=postgres
REALTIME_DB_USER=supabase_admin
REALTIME_DB_PASSWORD=${POSTGRES_PASSWORD}

# Important for production
REALTIME_IP_VERSION=IPv4
REALTIME_SECURE_CHANNELS=true

For production self-hosted deployments, you'll also want to configure the WAL settings to prevent disk space issues:

-- Run in Postgres
ALTER SYSTEM SET max_slot_wal_keep_size = '1GB';
SELECT pg_reload_conf();

This prevents the Realtime server from causing unbounded WAL growth—a common issue that can crash self-hosted instances during high activity.
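To confirm the cap is active after the reload, query the setting back:

```sql
-- Verify the new WAL cap is in effect
SHOW max_slot_wal_keep_size;
```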

Implementing Presence: Who's Online Right Now

Presence answers the fundamental question of collaborative apps: who else is here? Here's how to implement it.

First, initialize the Supabase client pointing to your self-hosted instance:

import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  'https://your-domain.com', // Your self-hosted URL
  'your-anon-key'
)

Now create a presence channel:

interface UserPresence {
  user_id: string
  username: string
  cursor: { x: number; y: number }
  online_at: string
}

const channel = supabase.channel('room-1', {
  config: {
    presence: {
      key: currentUser.id, // Unique identifier for this client
    },
  },
})

// Track your own presence
channel.subscribe(async (status) => {
  if (status === 'SUBSCRIBED') {
    await channel.track({
      user_id: currentUser.id,
      username: currentUser.name,
      cursor: { x: 0, y: 0 },
      online_at: new Date().toISOString(),
    })
  }
})

Listen for presence changes from other users:

channel.on('presence', { event: 'sync' }, () => {
  const presenceState = channel.presenceState<UserPresence>()
  
  // presenceState is a map of presence_key -> array of payloads
  const onlineUsers = Object.entries(presenceState).map(([key, presences]) => ({
    key,
    ...presences[0], // Latest presence for this user
  }))
  
  updateOnlineUsersList(onlineUsers)
})

channel.on('presence', { event: 'join' }, ({ key, newPresences }) => {
  console.log(`${newPresences[0].username} joined`)
})

channel.on('presence', { event: 'leave' }, ({ key, leftPresences }) => {
  console.log(`${leftPresences[0].username} left`)
})

For live cursors, update presence whenever the cursor moves (with throttling):

import { throttle } from 'lodash'

const updateCursor = throttle(async (x: number, y: number) => {
  await channel.track({
    user_id: currentUser.id,
    username: currentUser.name,
    cursor: { x, y },
    online_at: new Date().toISOString(),
  })
}, 50) // Update at most every 50ms

document.addEventListener('mousemove', (e) => {
  updateCursor(e.clientX, e.clientY)
})

Implementing Broadcast: Instant Messaging

Broadcast is your tool for ephemeral, low-latency communication. Unlike Postgres Changes, broadcast messages aren't persisted—they're fire-and-forget.

Perfect use cases include typing indicators, live notifications, collaborative editing operations, and game state synchronization.

Here's a typing indicator implementation:

const channel = supabase.channel('chat-room-1')

// Send typing status
function startTyping() {
  channel.send({
    type: 'broadcast',
    event: 'typing',
    payload: {
      user_id: currentUser.id,
      username: currentUser.name,
      is_typing: true,
    },
  })
}

function stopTyping() {
  channel.send({
    type: 'broadcast',
    event: 'typing',
    payload: {
      user_id: currentUser.id,
      username: currentUser.name,
      is_typing: false,
    },
  })
}

// Listen for typing status from others
channel
  .on('broadcast', { event: 'typing' }, ({ payload }) => {
    if (payload.user_id !== currentUser.id) {
      updateTypingIndicator(payload.username, payload.is_typing)
    }
  })
  .subscribe()

For collaborative document editing, broadcast operational transforms:

// When user makes an edit
function onDocumentChange(operation: Operation) {
  // Apply locally first
  applyOperation(operation)
  
  // Broadcast to others
  channel.send({
    type: 'broadcast',
    event: 'document-op',
    payload: {
      user_id: currentUser.id,
      operation,
      timestamp: Date.now(),
    },
  })
}

// Receive operations from others
channel.on('broadcast', { event: 'document-op' }, ({ payload }) => {
  if (payload.user_id !== currentUser.id) {
    applyOperation(payload.operation)
  }
})

Authorization and Security

For production deployments, you'll want to secure your channels. Supabase Realtime supports authorization through RLS policies on the realtime.messages table.

First, enable Realtime Authorization in your configuration. Then create policies:

-- Allow authenticated users to access channels matching their organization
CREATE POLICY "Users can access org channels"
ON realtime.messages
FOR ALL
USING (
  auth.uid() IS NOT NULL
  AND (
    -- Public channels
    channel LIKE 'public:%'
    -- Or channels matching user's organization
    OR channel LIKE 'org:' || (
      SELECT organization_id::text 
      FROM profiles 
      WHERE id = auth.uid()
    ) || ':%'
  )
);

When subscribing to authorized channels, use the private option:

const channel = supabase.channel('org:123:room-1', {
  config: {
    private: true, // Requires valid auth token
  },
})

The client must be authenticated for authorized channels to work:

// Ensure user is authenticated first
const { data: { session } } = await supabase.auth.getSession()

if (session) {
  const channel = supabase.channel('private-room', {
    config: { private: true },
  })
  // Now channel access is governed by RLS policies
}

Scaling Considerations for Self-Hosted

When self-hosting Realtime for collaborative features, keep these production considerations in mind.

Connection Limits: Each WebSocket connection consumes server resources. The Realtime service maintains a replication slot with Postgres, and the connection count affects both memory and CPU usage. Monitor your deployment and scale horizontally if needed.

WAL Management: High-frequency presence updates generate WAL traffic. The max_slot_wal_keep_size setting prevents unbounded disk growth, but it means old WAL segments get deleted—potentially causing replication to restart if the Realtime service falls too far behind.

Sticky Sessions: If you're running multiple Realtime instances behind a load balancer, configure session affinity. WebSocket connections need to stay on the same server for their lifetime.

For horizontal scaling, see the high availability guide, which covers Realtime-specific considerations.

Monitoring Realtime in Production

Self-hosted deployments don't get the Supabase Dashboard's built-in Realtime metrics. Set up your own monitoring:

-- Check replication slot status
SELECT slot_name, active, restart_lsn, confirmed_flush_lsn
FROM pg_replication_slots
WHERE slot_name LIKE 'realtime%';

-- Monitor WAL lag
SELECT pg_current_wal_lsn() - confirmed_flush_lsn AS lag_bytes
FROM pg_replication_slots
WHERE slot_name LIKE 'realtime%';

For application-level monitoring, track channel subscription counts and message throughput through your existing observability setup.

Putting It Together: A Collaborative Whiteboard

Here's a minimal example combining Presence and Broadcast for a collaborative whiteboard:

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY)

interface CursorState {
  x: number
  y: number
  color: string
}

const channel = supabase.channel('whiteboard-room', {
  config: {
    presence: { key: currentUser.id },
  },
})

// Track cursor position via Presence
const trackCursor = throttle(async (x: number, y: number) => {
  await channel.track({
    x, y,
    color: currentUser.color,
  })
}, 30)

// Broadcast drawing strokes
function onDraw(stroke: Stroke) {
  channel.send({
    type: 'broadcast',
    event: 'stroke',
    payload: { stroke, userId: currentUser.id },
  })
}

// Handle incoming events
channel
  .on('presence', { event: 'sync' }, () => {
    const state = channel.presenceState<CursorState>()
    renderCursors(state)
  })
  .on('broadcast', { event: 'stroke' }, ({ payload }) => {
    renderStroke(payload.stroke, payload.userId)
  })
  .subscribe()

Common Pitfalls and Solutions

Problem: Presence updates stop working after browser tab goes to background.

Browsers throttle JavaScript timers in background tabs, which delays the heartbeats that keep the WebSocket connection alive. supabase-js can run those heartbeats in a Web Worker instead. Note that worker is a client-level Realtime option, not a per-channel setting:

const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY, {
  realtime: {
    worker: true, // Run heartbeats in a Web Worker
  },
})

const channel = supabase.channel('room', {
  config: {
    presence: { key: userId },
  },
})

Problem: "tenant_not_found_in_host" errors with reverse proxy.

Ensure your reverse proxy (Nginx, Caddy, Traefik) properly forwards WebSocket connections. Check your reverse proxy configuration for WebSocket-specific headers.
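As an illustration, a minimal Nginx location block for WebSocket upgrades might look like this (a sketch; the upstream name, port, and path depend on your stack):

```nginx
# Forward Realtime WebSocket traffic with proper upgrade headers
location /realtime/ {
    proxy_pass http://realtime:4000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400s;  # Keep long-lived connections open
}
```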

Problem: Replication lag causing message delays.

Monitor your Postgres WAL lag. If it consistently grows, your Realtime server may be underpowered or your message volume exceeds single-node capacity.

What Supascale Makes Easier

Configuring Realtime properly involves coordinating multiple environment variables, ensuring proper reverse proxy setup for WebSockets, and monitoring replication health. Supascale handles this automatically—WebSocket routing, SSL termination, and health monitoring are configured out of the box.

For teams building collaborative features, this means focusing on your application logic rather than infrastructure debugging. The one-time purchase model means you get unlimited real-time channels across all your projects without per-connection pricing surprises.

Further Reading