TVL Platform - Deployment Views
Summary
This document describes the physical deployment architecture, infrastructure components, networking, and operational characteristics for each environment (local, staging, production).
Deployment Diagram
Environment Specifications
Local Development Environment
Infrastructure
- Frontend: Local Vite dev server (port 3000)
- Backend: Local Node.js server (port 4000)
- Database: Supabase CLI local Postgres (Docker) or remote dev project
- Redis: Local Redis (Docker) via docker-compose.yml
- Workers: Local processes started with npm run worker:dev
Configuration
# docker-compose.yml (simplified)
services:
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --appendonly yes
  postgres:
    image: supabase/postgres:15
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: postgres
Environment Variables (.env.local)
# Database
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/tvl_dev
# Redis
REDIS_URL=redis://localhost:6379
# Supabase
SUPABASE_URL=http://localhost:54321
SUPABASE_ANON_KEY=<local_anon_key>
# External Services (test mode)
STRIPE_SECRET_KEY=sk_test_...
HOSTAWAY_API_KEY=test_key
SENDGRID_API_KEY=test_key
# Observability (disabled)
OTEL_ENABLED=false
SENTRY_DSN=
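These variables can be validated at process start so a misconfigured environment fails fast instead of erroring mid-request. A minimal sketch, assuming the zod package; the schema below is illustrative and only covers a subset of the variables listed above:
// config.ts (illustrative sketch, not part of the repo)
import { z } from "zod";

const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  SUPABASE_URL: z.string().url(),
  SUPABASE_ANON_KEY: z.string().min(1),
  STRIPE_SECRET_KEY: z.string().startsWith("sk_"),
  OTEL_ENABLED: z.enum(["true", "false"]).default("false"),
  SENTRY_DSN: z.string().optional(),
});

// Throws at boot with a readable error listing every missing/invalid variable.
export const env = EnvSchema.parse(process.env);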
Tooling
- Supabase CLI: supabase start (local dev stack)
- Database Migrations: supabase db push or npm run migrate
- Seed Data: npm run seed
- Code Generation: npm run generate (Prisma or Supabase types)
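The generated types can be fed to the Supabase client so queries are checked at compile time. A sketch assuming @supabase/supabase-js v2 and a generated Database type (the "./types/database" path is an assumption about where npm run generate writes its output):
// db.ts (sketch; "./types/database" is whatever `npm run generate` emits)
import { createClient } from "@supabase/supabase-js";
import type { Database } from "./types/database";

export const supabase = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// Table and column names are now type-checked, e.g.:
// const { data } = await supabase.from("properties").select("id, name");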
Staging Environment
Infrastructure
Frontend (Vercel)
- Domain: staging.tvl.app
- Deployment: Preview deployment per PR; staging branch → staging.tvl.app
- Plan: Vercel Hobby (free)
- Region: Auto (nearest edge)
Backend (Railway/Fly)
- Domain: api.staging.tvl.app
- Deployment: Automatic on merge to staging branch
- Plan: Railway Hobby ($5/mo) or Fly.io Machines
- Instances: 1 API server, 1 worker (all-in-one)
- Resources: 512MB RAM, 0.5 vCPU
- Region: us-east-1
Database (Supabase)
- Project: tvl-staging
- Plan: Supabase Free tier
- Resources: 500MB database, 1GB bandwidth
- Region: us-east-1
- Backups: Daily automatic
Redis (Upstash)
- Project: tvl-staging
- Plan: Free tier (10k commands/day)
- Region: us-east-1
- Persistence: Enabled
Configuration
Environment Variables (Railway/Fly secrets):
NODE_ENV=staging
DATABASE_URL=<supabase_connection_string>
REDIS_URL=<upstash_redis_url>
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_SERVICE_KEY=<service_key>
STRIPE_SECRET_KEY=sk_test_...
HOSTAWAY_API_KEY=<staging_api_key>
SENDGRID_API_KEY=<api_key>
GOOGLE_OAUTH_CLIENT_ID=<staging_client_id>
GOOGLE_OAUTH_CLIENT_SECRET=<secret_manager_ref>
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.grafana.net
SENTRY_DSN=<staging_dsn>
Access Control
- Database: IP allowlist (Railway/Fly IPs + dev team IPs)
- Redis: Password-protected, TLS-only
- API: Public (no IP restriction), JWT auth required (see the verification sketch after this list)
- Admin Users: Seeded staging accounts
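The JWT requirement above amounts to verifying the Supabase-issued access token on every request. A minimal Express-style sketch, assuming the jsonwebtoken package and the project's HS256 JWT secret exposed as SUPABASE_JWT_SECRET (the framework, package, and variable name are assumptions, not confirmed by this document):
// auth-middleware.ts (sketch)
import jwt from "jsonwebtoken";
import type { Request, Response, NextFunction } from "express";

export function requireJwt(req: Request, res: Response, next: NextFunction) {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: "missing token" });
  try {
    // Supabase signs access tokens with the project's JWT secret (HS256).
    const claims = jwt.verify(token, process.env.SUPABASE_JWT_SECRET!);
    (req as any).user = claims; // sub, role, tenant claims, etc.
    next();
  } catch {
    res.status(401).json({ error: "invalid or expired token" });
  }
}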
Data Management
- Seed Data: Fake properties, bookings, users via seed script (see the sketch after this list)
- Refresh: Weekly reset of staging data
- PII: No real PII in staging; generated data only
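A minimal sketch of such a seed script, assuming @faker-js/faker and supabase-js; the properties table and its columns are illustrative guesses, not the actual schema:
// scripts/seed.ts (sketch; run via `npm run seed`)
import { faker } from "@faker-js/faker";
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_KEY!
);

async function seed() {
  // Generate fake properties so staging never contains real PII.
  const properties = Array.from({ length: 25 }, () => ({
    name: faker.company.name(),
    city: faker.location.city(),
    nightly_rate: faker.number.int({ min: 80, max: 600 }),
  }));
  const { error } = await supabase.from("properties").insert(properties);
  if (error) throw error;
  console.log(`Seeded ${properties.length} properties`);
}

seed().catch((err) => { console.error(err); process.exit(1); });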
Production Environment
Infrastructure
Frontend (Vercel)
- Domain: app.tvl.com (custom domain)
- Deployment: Automatic on merge to main branch
- Plan: Vercel Pro ($20/mo) - commercial license, priority support
- CDN: Vercel Edge + Cloudflare (optional)
- Region: Global edge network
Backend (Railway/Fly)
- Domain: api.tvl.com
- Deployment: Automatic on merge to main with health check gates
- Plan: Railway Pro ($20/mo) or Fly.io Dedicated
- Instances:
  - 2x API servers (redundancy)
  - 2x Sync workers
  - 1x Payment worker
  - 1x Notification worker
- Resources (per instance): 1GB RAM, 1 vCPU
- Region: us-east-1 (primary), us-west-2 (future)
- Autoscaling: Enabled (2-10 instances based on CPU/memory)
Database (Supabase)
- Project: tvl-production
- Plan: Supabase Pro ($25/mo) - dedicated CPU, higher limits
- Resources: 8GB database, 250GB bandwidth, connection pooler
- Region: us-east-1
- Backups: Daily automated + point-in-time recovery (PITR)
- High Availability: Read replicas (future)
Redis (Upstash)
- Project: tvl-production
- Plan: Pay-as-you-go ($0.20 per 100k commands)
- Region: us-east-1
- Persistence: Enabled (AOF)
- Eviction Policy: noeviction
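The noeviction policy matters for the worker instances listed above: queue libraries such as BullMQ require a Redis that never silently drops keys. A minimal worker sketch, assuming BullMQ and ioredis (neither library is named in this document, and the "payments" queue name is illustrative):
// payment-worker.ts (sketch)
import { Worker } from "bullmq";
import IORedis from "ioredis";

// BullMQ requires maxRetriesPerRequest: null and a noeviction Redis.
const connection = new IORedis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: null,
});

const worker = new Worker(
  "payments",
  async (job) => {
    // Process one payment job; throwing triggers BullMQ's retry logic.
    console.log("processing", job.id, job.data);
  },
  { connection }
);

worker.on("failed", (job, err) => {
  console.error(`job ${job?.id} failed:`, err.message);
});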
Configuration
Environment Variables (Railway/Fly secrets + Secret Manager):
NODE_ENV=production
DATABASE_URL=<supabase_connection_string_pooler>
REDIS_URL=<upstash_redis_url>
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_SERVICE_KEY=<secret_manager://tvl/supabase_service_key>
STRIPE_SECRET_KEY=<secret_manager://tvl/stripe_live_key>
HOSTAWAY_API_KEY=<secret_manager://tvl/hostaway_api_key>
SENDGRID_API_KEY=<secret_manager://tvl/sendgrid_api_key>
GOOGLE_OAUTH_CLIENT_ID=<production_client_id>
GOOGLE_OAUTH_CLIENT_SECRET=<secret_manager://tvl/google_oauth_secret>
OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp.grafana.net
OTEL_SERVICE_NAME=tvl-api
SENTRY_DSN=<production_dsn>
SENTRY_ENVIRONMENT=production
LOG_LEVEL=info
RATE_LIMIT_MAX_REQUESTS=1000
RATE_LIMIT_WINDOW_MS=60000
Access Control
- Database: Private network + connection pooler; RLS enforced
- Redis: Password + TLS, private network
- API: Public HTTPS, JWT auth required, rate limiting enabled
- Admin Access: VPN or IP allowlist for database direct access
Monitoring & Alerts
- Uptime Monitoring: UptimeRobot or Pingdom (1-minute checks)
- Alerting:
  - PagerDuty for critical alerts (API down, payment failures)
  - Email for warnings (queue delays, high error rate)
- Dashboards: Grafana (metrics, traces, logs)
- Error Tracking: Sentry with alert rules
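The Sentry wiring itself is small; a sketch of initialization for the API process, assuming @sentry/node and the SENTRY_* variables listed earlier (the sample rate is an illustrative choice):
// instrument.ts (sketch; import before anything else in the API entrypoint)
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.SENTRY_ENVIRONMENT ?? "production",
  // Sample a fraction of transactions to keep tracing costs bounded.
  tracesSampleRate: 0.1,
});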
Backup & Disaster Recovery
- Database Backups: Daily automated + PITR (7-day retention)
- Redis Snapshots: Daily (Upstash managed)
- Code Backups: Git repository (GitHub)
- Configuration Backups: Secret Manager + version control
Recovery Time Objective (RTO): < 1 hour
Recovery Point Objective (RPO): < 24 hours (daily backups)
Networking & Security
TLS/SSL
- Frontend: HTTPS enforced (Vercel automatic)
- Backend API: HTTPS enforced (Let's Encrypt via Railway/Fly)
- Database: TLS-encrypted connections
- Redis: TLS-enabled
Firewall Rules
- API Server: Public HTTPS (443), HTTP redirects to HTTPS
- Database: Private network, connection pooler only
- Redis: Private network, TLS-only
DNS Configuration
- app.tvl.com → CNAME to Vercel
- api.tvl.com → A record to Railway/Fly IP
- staging.tvl.app → CNAME to Vercel preview
Rate Limiting
- API Gateway: 1000 req/min per tenant (token bucket algorithm; see the sketch after this list)
- Webhook Endpoints: 100 req/min per source IP
- External APIs:
  - Hostaway: 100 req/min per tenant (enforced by connector)
  - Stripe: no hard limit (best practice: < 100 req/sec)
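A token bucket keeps the average rate at the configured limit while tolerating short bursts. A self-contained in-memory sketch of the per-tenant limiter, using the RATE_LIMIT_* variables defined above (a multi-instance deployment would need shared state, e.g. in Redis, rather than a process-local map):
// token-bucket.ts (in-memory sketch; multi-instance deployments need Redis)
const MAX_TOKENS = Number(process.env.RATE_LIMIT_MAX_REQUESTS ?? 1000);
const WINDOW_MS = Number(process.env.RATE_LIMIT_WINDOW_MS ?? 60000);
const REFILL_PER_MS = MAX_TOKENS / WINDOW_MS;

interface Bucket { tokens: number; lastRefill: number; }
const buckets = new Map<string, Bucket>();

// Returns true if the tenant may proceed, false if rate-limited.
export function tryConsume(tenantId: string): boolean {
  const now = Date.now();
  const bucket = buckets.get(tenantId) ?? { tokens: MAX_TOKENS, lastRefill: now };
  // Refill proportionally to elapsed time, capped at the bucket size.
  bucket.tokens = Math.min(
    MAX_TOKENS,
    bucket.tokens + (now - bucket.lastRefill) * REFILL_PER_MS
  );
  bucket.lastRefill = now;
  if (bucket.tokens < 1) { buckets.set(tenantId, bucket); return false; }
  bucket.tokens -= 1;
  buckets.set(tenantId, bucket);
  return true;
}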
Deployment Process
Continuous Integration (GitHub Actions)
# .github/workflows/ci.yml (simplified)
name: CI
on:
  push:
    branches: [main, staging]
  pull_request:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      - run: npm run lint
      - run: npm run test
      - run: npm run build
  deploy-staging:
    needs: test
    if: github.ref == 'refs/heads/staging'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to Vercel (staging)
        run: vercel deploy --env staging
      - name: Deploy to Railway (staging)
        run: railway up --environment staging
  deploy-production:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      - name: Deploy to Vercel (production)
        run: vercel deploy --prod
      - name: Deploy to Railway (production)
        run: railway up --environment production
      - name: Run database migrations
        run: npm run migrate:prod
      - name: Health check
        run: curl -f https://api.tvl.com/health || exit 1
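The final CI step expects a 200 from /health. A minimal sketch of that endpoint, assuming Express; the commented dependency pings are illustrative, not the actual checks:
// health.ts (sketch)
import { Router } from "express";

export const health = Router();

health.get("/health", async (_req, res) => {
  try {
    // Illustrative checks: verify the process can reach its dependencies.
    // await redis.ping();
    // await supabase.from("properties").select("id").limit(1);
    res.status(200).json({ status: "ok", uptime: process.uptime() });
  } catch {
    res.status(503).json({ status: "degraded" });
  }
});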
Database Migrations
Migration Strategy: Forward-only migrations with Supabase CLI
- Development: Create migration locally (supabase migration new add_column)
- Review: PR review includes SQL migration files
- Staging: Auto-apply on merge to staging
- Production: Manual approval in Supabase dashboard → run migration
Rollback: Not supported (only forward migrations); write compensating migration if needed.
Blue-Green Deployment (Future)
For zero-downtime deployments:
- Deploy new version to "green" environment
- Run health checks
- Switch traffic from "blue" to "green"
- Keep "blue" running for 10 minutes for rollback
- Decommission "blue"
Not implemented in MVP (Railway/Fly handle this automatically).
Scaling Strategy
Vertical Scaling (First Step)
- Increase Railway/Fly instance size (1GB → 2GB RAM)
- Upgrade Supabase plan (Pro → Team)
- Cost: ~$50-100/mo
Horizontal Scaling (Second Step)
- Add more API server instances (2 → 5)
- Add dedicated worker instances per job type
- Enable Supabase read replicas for analytics
- Cost: ~$200-500/mo
Geographic Scaling (Future)
- Deploy API servers in us-west-2, eu-west-1
- Multi-region database (Supabase Enterprise)
- CDN already global (Vercel/Cloudflare)
- Cost: ~$1000+/mo
Validation & Alternatives
Hosting Decisions
✅ Agree: Vercel for frontend
- Alternative: Netlify, AWS Amplify
- Trade-off: Vercel has best DX and performance
✅ Agree: Railway or Fly.io for backend
- Alternative: AWS ECS, Google Cloud Run, Render
- Trade-off: Railway/Fly are simpler than AWS but less mature
⚠️ Consider: Single region vs. multi-region from day 1
- Current: Single region (us-east-1)
- Alternative: Multi-region for redundancy
- Recommendation: Start single-region, add regions based on latency requirements
✅ Agree: Supabase for Postgres
- Alternative: AWS RDS, Google Cloud SQL, self-hosted
- Trade-off: Supabase has integrated auth and RLS; RDS more flexible but complex
Known Gaps & Assumptions
Assumptions
- Single-region deployment sufficient for MVP (US-based customers)
- Railway/Fly uptime > 99.5% (no formal SLA)
- Supabase Free/Pro tier sufficient for < 1000 tenants
- No need for Kubernetes (managed platforms handle scaling)
Gaps
- No disaster recovery tested (need DR drill)
- No load testing completed (performance under load unknown)
- No geo-redundancy (single region failure = downtime)
- No automated rollback mechanism
Mitigation
- Schedule quarterly DR drill
- Run load tests before production launch (k6 or Artillery)
- Document manual rollback procedure
- Monitor error rates and latency continuously
Sources
- meta/research-log.md
- docs/01-architecture/logical-architecture.md
- docs/00-overview/platform-overview.md