Self-Hosting Guide
This guide covers deploying your own Wisp.place instance. Wisp.place consists of three services: the main backend (handles OAuth, uploads, domains), the firehose service (watches the AT Protocol firehose and populates the cache), and the hosting service (serves cached sites). See the Architecture Guide for a detailed breakdown of how these services work together.
Prerequisites
- PostgreSQL database (14 or newer)
- Bun runtime for the main backend and firehose service
- Node.js (18+) for the hosting service
- Caddy (optional, for custom domain TLS)
- Domain name for your instance
- S3-compatible storage (optional, recommended for production — Cloudflare R2, MinIO, etc.)
- Redis (optional, for real-time cache invalidation between services)
Architecture Overview
```
┌──────────────────────────┐ ┌──────────────────────────┐ ┌──────────────────────────┐
│ Main Backend (:8000)     │ │ Firehose Service         │ │ Hosting Service (:3001)  │
│ - OAuth authentication   │ │ - Watches AT firehose    │ │ - Tiered cache (mem/     │
│ - Site upload/manage     │ │ - Downloads blobs        │ │   disk/S3)               │
│ - Domain registration    │ │ - Writes to S3/disk      │ │ - Content serving        │
│ - Admin panel            │ │ - Publishes invalidation │ │ - Redirect handling      │
└──────────────────────────┘ └──────────────────────────┘ └──────────────────────────┘
             │                            │                           │
             │                            │  S3/Disk                  │
             │                            │  Redis pub/sub            │
             │                            ├───────────────────────────┘
             └────────┬───────────────────┘
                      ▼
┌─────────────────────────────────────────┐
│ PostgreSQL Database                     │
│ - User sessions                         │
│ - Domain mappings                       │
│ - Site metadata                         │
└─────────────────────────────────────────┘
```
Database Setup
Create a PostgreSQL database for Wisp.place:
```
createdb wisp
```
The schema is automatically created on first run. Tables include:
- `oauth_states`, `oauth_sessions`, `oauth_keys` - OAuth flow
- `domains` - Wisp subdomains (*.yourdomain.com)
- `custom_domains` - User custom domains with DNS verification
- `sites` - Site metadata cache
- `cookie_secrets` - Session signing keys
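If you prefer a dedicated database role over the default superuser, a minimal setup looks like this (the `wisp` role name and database name are placeholders, not project requirements):

```
createuser --pwprompt wisp
createdb --owner=wisp wisp
```

The matching connection string would then be `postgres://wisp:PASSWORD@localhost:5432/wisp`.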
Main Backend Setup
Environment Variables
Create a .env file or set these environment variables:
```
# Required
DATABASE_URL="postgres://user:password@localhost:5432/wisp"
BASE_DOMAIN="wisp.place"      # Your domain (without protocol)
DOMAIN="https://wisp.place"   # Full domain with protocol
CLIENT_NAME="Wisp.place"      # OAuth client name

# Optional
NODE_ENV="production"         # production or development
PORT="8000"                   # Default: 8000
```
Installation
```
# Install dependencies
bun install

# Development mode (with hot reload)
bun run dev

# Production mode
bun run start

# Or compile to binary
bun run build
./server
```
The backend will:
- Initialize the database schema
- Generate OAuth keys (stored in DB)
- Start DNS verification worker (checks custom domains every 10 minutes)
- Listen on port 8000
First-Time Admin Setup
On first run, you’ll be prompted to create an admin account:
```
No admin users found. Create one now? (y/n):
```
Or create manually:
```
bun run scripts/create-admin.ts
```
Admin panel is available at https://yourdomain.com/admin
Firehose Service Setup
The firehose service watches the AT Protocol firehose for site changes and pre-populates the cache. It is write-only: it never serves requests to users.
Environment Variables
```
# Required
DATABASE_URL="postgres://user:password@localhost:5432/wisp"

# S3 storage (recommended for production)
S3_BUCKET="wisp-sites"
S3_REGION="auto"
S3_ENDPOINT="https://your-account.r2.cloudflarestorage.com"
S3_ACCESS_KEY_ID="..."
S3_SECRET_ACCESS_KEY="..."
S3_METADATA_BUCKET="wisp-metadata"  # Optional, recommended

# Redis (for notifying hosting service of changes)
REDIS_URL="redis://localhost:6379"

# Firehose
FIREHOSE_URL="wss://jetstream2.us-east.bsky.network/subscribe"
FIREHOSE_CONCURRENCY=5              # Max parallel event processing

# Optional
CACHE_DIR="./cache/sites"           # Fallback if S3 not configured
```
Installation
```
cd firehose-service

# Install dependencies
bun install

# Production mode
bun run start

# With backfill (one-time bulk sync of all existing sites)
bun run start -- --backfill
```
The firehose service will:
- Connect to the AT Protocol firehose (Jetstream)
- Filter for `place.wisp.fs` and `place.wisp.settings` events
- Download blobs, decompress, and rewrite HTML paths
- Write files to S3 (or disk)
- Publish cache invalidation events to Redis
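To confirm that invalidation events are actually flowing, the simplest check is to watch Redis pub/sub traffic while you upload a test site; the channel name is internal to the services, so subscribing to every pattern avoids guessing it:

```
# Watch all pub/sub channels (fine for debugging, noisy on a busy instance)
redis-cli psubscribe '*'
```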
Hosting Service Setup
The hosting service is a read-only CDN that serves cached sites through a three-tier storage system (memory, disk, S3).
Environment Variables
```
# Required
DATABASE_URL="postgres://user:password@localhost:5432/wisp"
BASE_HOST="wisp.place"              # Same as main backend

# Tiered storage
HOT_CACHE_SIZE=104857600            # Hot tier: 100 MB (memory, LRU)
HOT_CACHE_COUNT=500                 # Max items in hot tier
WARM_CACHE_SIZE=10737418240         # Warm tier: 10 GB (disk, LRU)
WARM_EVICTION_POLICY="lru"          # lru, fifo, or size
CACHE_DIR="./cache/sites"           # Warm tier directory

# S3 cold tier (same bucket as firehose service, read-only)
S3_BUCKET="wisp-sites"
S3_REGION="auto"
S3_ENDPOINT="https://your-account.r2.cloudflarestorage.com"
S3_ACCESS_KEY_ID="..."
S3_SECRET_ACCESS_KEY="..."
S3_METADATA_BUCKET="wisp-metadata"

# Redis (receive cache invalidation from firehose service)
REDIS_URL="redis://localhost:6379"

# Optional
PORT="3001"                         # Default: 3001
```
Installation
```
cd hosting-service

# Install dependencies
npm install

# Development mode
npm run dev

# Production mode
npm run start
```
The hosting service will:
- Initialize tiered storage (hot → warm → cold)
- Subscribe to Redis for cache invalidation events
- Serve sites on port 3001
Cache Behavior
Files are cached across three tiers with automatic promotion:
- Hot (memory): Fastest, limited by `HOT_CACHE_SIZE`. Evicted on restart.
- Warm (disk): Fast local reads at `CACHE_DIR`. Survives restarts.
- Cold (S3): Shared source of truth, populated by firehose service.
On a cache miss at all tiers, the hosting service fetches directly from the user’s PDS and promotes the file into the appropriate tiers.
Without S3: Disk acts as both warm and cold tier. The hosting service still works — it just relies on on-demand fetching instead of pre-populated S3 cache.
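The tier limits are plain byte counts, so they can be tuned to the host. As an illustration only (these values are not project defaults), a small VPS with roughly 1 GB of spare RAM and a 20 GB disk might use:

```
HOT_CACHE_SIZE=52428800       # 50 MB in memory
HOT_CACHE_COUNT=250
WARM_CACHE_SIZE=5368709120    # 5 GB on disk
```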
Reverse Proxy Setup
Caddy Configuration
Caddy handles TLS, on-demand certificates for custom domains, and routing:
```
{
    on_demand_tls {
        ask http://localhost:8000/api/domain/registered
    }
}

# Wisp subdomains and DNS hash routing
*.dns.wisp.place *.wisp.place {
    reverse_proxy localhost:3001
}

# Main web interface and API
wisp.place {
    reverse_proxy localhost:8000
}

# Custom domains (on-demand TLS)
https:// {
    tls {
        on_demand
    }
    reverse_proxy localhost:3001
}
```
Nginx Alternative
```
# Main backend
server {
    listen 443 ssl http2;
    server_name wisp.place;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# Hosting service
server {
    listen 443 ssl http2;
    server_name *.wisp.place sites.wisp.place;

    ssl_certificate /path/to/wildcard-cert.pem;
    ssl_certificate_key /path/to/wildcard-key.pem;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Note: Custom domain TLS requires dynamic certificate provisioning. Caddy’s on-demand TLS is the easiest solution.
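As with Caddy, test the nginx configuration before reloading (assuming nginx runs under systemd):

```
sudo nginx -t && sudo systemctl reload nginx
```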
OAuth Configuration
Wisp.place uses AT Protocol OAuth. Your instance needs to be publicly accessible for OAuth callbacks.
Required endpoints:
- `/.well-known/atproto-did` - Returns your DID for lexicon resolution
- `/oauth-client-metadata.json` - OAuth client metadata
- `/jwks.json` - OAuth signing keys
These are automatically served by the backend.
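Once the instance is publicly reachable, a quick smoke test of the three endpoints (substitute your own domain):

```
curl -fsS https://yourdomain.com/.well-known/atproto-did
curl -fsS https://yourdomain.com/oauth-client-metadata.json
curl -fsS https://yourdomain.com/jwks.json
```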
DNS Configuration
For your main domain:
```
wisp.place          A       YOUR_SERVER_IP
*.wisp.place        A       YOUR_SERVER_IP
*.dns.wisp.place    A       YOUR_SERVER_IP
sites.wisp.place    A       YOUR_SERVER_IP
```
Or use CNAME records if you’re behind a CDN:
```
wisp.place      CNAME   your-server.example.com
*.wisp.place    CNAME   your-server.example.com
```
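To confirm the records (including the wildcard) have propagated, query any public resolver with dig; the subdomain label below is arbitrary:

```
dig +short wisp.place A
dig +short anything.wisp.place A
```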
Custom Domain Verification
Users can add custom domains via DNS TXT records:
```
_wisp.example.com    TXT    did:plc:abc123xyz...
```
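A user (or you, when debugging a verification) can check that the record is visible before the worker next runs; example.com here is a placeholder:

```
dig +short TXT _wisp.example.com
```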
The DNS verification worker checks these every 10 minutes. Trigger manually:
```
curl -X POST https://yourdomain.com/api/admin/verify-dns
```
Production Checklist
Before going live:
- PostgreSQL database configured with backups
- `DATABASE_URL` set with secure credentials
- `BASE_DOMAIN` and `DOMAIN` configured correctly
- Admin account created
- Reverse proxy (Caddy/Nginx) configured
- DNS records pointing to your server
- TLS certificates configured
- Hosting service cache directory has sufficient space
- Firewall allows ports 80/443
- Process manager (systemd, pm2) configured for auto-restart (a sample unit file follows this list)
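A minimal systemd unit for the main backend, as a sketch; the /opt/wisp install path, the wisp user, and the bun binary location are assumptions to adapt, and the firehose and hosting services follow the same pattern with their own working directories and start commands:

```
[Unit]
Description=Wisp.place main backend
After=network.target postgresql.service

[Service]
User=wisp
WorkingDirectory=/opt/wisp
EnvironmentFile=/opt/wisp/.env
ExecStart=/usr/local/bin/bun run start
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Save it as /etc/systemd/system/wisp-backend.service and enable it with `systemctl enable --now wisp-backend`; the journalctl commands in the Monitoring section assume these unit names.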
Monitoring
Health Checks
Main backend:
```
curl https://yourdomain.com/api/health
```
Hosting service:
```
curl http://localhost:3001/health
```
The services log to stdout. View with your process manager:
```
# systemd
journalctl -u wisp-backend -f
journalctl -u wisp-hosting -f

# pm2
pm2 logs wisp-backend
pm2 logs wisp-hosting
```
Admin Panel
Access observability metrics at https://yourdomain.com/admin:
- Recent logs
- Error tracking
- Performance metrics
- Cache statistics
Scaling Considerations
- Multiple hosting instances: Run multiple hosting services behind a load balancer; each has its own hot/warm tiers but shares the S3 cold tier and Redis invalidation (see the Caddy sketch after this list)
- Separate databases: Split read/write with replicas
- CDN: Put Cloudflare or Bunny in front for global caching
- S3 cold tier: Shared storage across all hosting instances (Cloudflare R2, MinIO, AWS S3)
- Redis: Required for real-time cache invalidation between firehose and hosting services at scale
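For the multiple-hosting-instances setup, the Caddy site block can simply list every upstream; a sketch with two instances on one host (the second port is an assumption, not a project default):

```
*.dns.wisp.place *.wisp.place {
    reverse_proxy localhost:3001 localhost:3002 {
        lb_policy round_robin
    }
}
```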
Security Notes
- Use strong cookie secrets (auto-generated and stored in DB)
- Keep dependencies updated: `bun update`, `npm update`
- Enable rate limiting in the reverse proxy (see the nginx snippet after this list)
- Set up fail2ban for brute force protection
- Regular database backups
- Monitor logs for suspicious activity
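For the rate-limiting item above, nginx’s limit_req module is one option; the zone size and rate here are illustrative, not recommendations:

```
# In the http block
limit_req_zone $binary_remote_addr zone=wisp:10m rate=20r/s;

# In the server block that proxies to the main backend
location / {
    limit_req zone=wisp burst=40 nodelay;
    proxy_pass http://localhost:8000;
}
```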
Updates
To update your instance:
```
# Pull latest code
git pull

# Update dependencies
bun install
cd hosting-service && npm install && cd ..

# Restart services
# (The database schema updates automatically)
```
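If the services run under systemd units like the one sketched in the Production Checklist section, restarting after an update is one command (the wisp-firehose unit name is an assumption; adjust to whatever you named it):

```
sudo systemctl restart wisp-backend wisp-firehose wisp-hosting
```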
Support
For issues and questions:
- Check the documentation
- Review Tangled issues
- Join the Bluesky community
License
Wisp.place is MIT licensed. You’re free to host your own instance and modify it as needed.