ADR-0036: Redis for Application Caching

Status

Accepted - 2025-01-26


Context

TVL Platform needs fast caching for property, availability, and pricing data, with multi-tenant isolation and TTL-based expiry.


Decision

Use Redis (Upstash) for application-level caching.

Rationale

  1. Already Using Redis: For event bus (ADR-0031), sessions, rate limiting
  2. Fast: <1ms latency for cache hits
  3. Multi-Tenant Safe: Key namespace isolation
  4. TTL Support: Auto-expiry (no manual cleanup)
  5. Managed: Upstash (serverless, pay-per-request)

Alternatives Considered

Alternative 1: In-Memory Cache (Node.js)

Rejected - Lost on restart, doesn't work across replicas

Alternative 2: Memcached

Rejected - No persistence, fewer data structures than Redis

Alternative 3: PostgreSQL Materialized Views

Rejected - Slower refresh, complex invalidation


Implementation

1. Redis Client Setup

// src/cache/redis.ts
import Redis from 'ioredis';
import { logger } from '../logger'; // adjust to the project's logger module

export const redis = new Redis(process.env.REDIS_URL!, {
  maxRetriesPerRequest: 3,
  retryStrategy(times) {
    // Back off linearly (50ms per attempt), capped at 2 seconds
    const delay = Math.min(times * 50, 2000);
    return delay;
  },
});

redis.on('error', (error) => {
  logger.error({ error }, 'Redis connection error');
});

redis.on('connect', () => {
  logger.info('Redis connected');
});

2. Cache Wrapper

// src/cache/cacheWrapper.ts
import { redis } from './redis';

export async function getOrSet<T>(
  key: string,
  ttl: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  // Try cache first
  const cached = await redis.get(key);
  if (cached) {
    return JSON.parse(cached) as T;
  }

  // Cache miss - fetch from source
  const data = await fetchFn();

  // Store in cache
  await redis.setex(key, ttl, JSON.stringify(data));

  return data;
}

// Usage (named orgProperties to avoid shadowing the properties table)
const orgProperties = await getOrSet(
  `org:${orgId}:properties`,
  3600, // 1 hour TTL
  async () => {
    return await db.select().from(properties).where(eq(properties.orgId, orgId));
  }
);

3. Multi-Tenant Key Namespacing

// src/cache/keys.ts
export const CacheKeys = {
  // Properties
  properties: (orgId: string) => `org:${orgId}:properties`,
  property: (orgId: string, propertyId: string) => `org:${orgId}:property:${propertyId}`,

  // Availability
  availability: (orgId: string, propertyId: string, date: string) =>
    `org:${orgId}:availability:${propertyId}:${date}`,

  // Pricing
  pricing: (orgId: string, propertyId: string) => `org:${orgId}:pricing:${propertyId}`,

  // Sessions
  session: (sessionId: string) => `session:${sessionId}`,

  // Rate limiting
  rateLimit: (orgId: string, endpoint: string) => `ratelimit:${orgId}:${endpoint}`,
};

// Usage
const key = CacheKeys.property(orgId, propertyId);
const property = await redis.get(key);

Cache Patterns

Pattern 1: Cache-Aside (Lazy Loading)

// Read
export async function getProperty(orgId: string, propertyId: string) {
  const key = CacheKeys.property(orgId, propertyId);

  // Try cache
  const cached = await redis.get(key);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss - fetch from DB
  const property = await db.query.properties.findFirst({
    where: and(eq(properties.orgId, orgId), eq(properties.id, propertyId)),
  });

  // Store in cache (1 hour TTL)
  if (property) {
    await redis.setex(key, 3600, JSON.stringify(property));
  }

  return property;
}

// Write (invalidate cache)
export async function updateProperty(orgId: string, propertyId: string, data: any) {
  // Update database
  const updated = await db
    .update(properties)
    .set(data)
    .where(and(eq(properties.orgId, orgId), eq(properties.id, propertyId)))
    .returning();

  // Invalidate cache
  await redis.del(CacheKeys.property(orgId, propertyId));
  await redis.del(CacheKeys.properties(orgId)); // Also invalidate list

  return updated[0];
}

Pattern 2: Write-Through (Immediate Caching)

export async function createProperty(orgId: string, data: CreatePropertyInput) {
  // Write to database
  const [property] = await db.insert(properties).values({
    orgId,
    ...data,
  }).returning();

  // Immediately cache
  await redis.setex(
    CacheKeys.property(orgId, property.id),
    3600,
    JSON.stringify(property)
  );

  // Invalidate list cache
  await redis.del(CacheKeys.properties(orgId));

  return property;
}

Pattern 3: Read-Through (Automatic Fetching)

export async function getCachedProperty(orgId: string, propertyId: string) {
  return await getOrSet(
    CacheKeys.property(orgId, propertyId),
    3600,
    async () => {
      return await db.query.properties.findFirst({
        where: and(eq(properties.orgId, orgId), eq(properties.id, propertyId)),
      });
    }
  );
}

TTL Strategy

Data Type            TTL         Rationale
Properties           1 hour      Rarely change
Availability         5 minutes   Booking updates
Pricing              15 minutes  Dynamic pricing changes
Sessions             24 hours    User session lifetime
API responses        1 minute    External API calls
Rate limit counters  1 minute    Rolling window

export const CacheTTL = {
  PROPERTIES: 3600, // 1 hour
  AVAILABILITY: 300, // 5 minutes
  PRICING: 900, // 15 minutes
  SESSION: 86400, // 24 hours
  API_RESPONSE: 60, // 1 minute
  RATE_LIMIT: 60, // 1 minute
};

Cache Warming

On Application Startup

// src/cache/warmup.ts
export async function warmCache() {
  logger.info('Warming cache...');

  // Load most-accessed properties for each org
  const topOrgs = await getTopOrganizations(10);

  for (const org of topOrgs) {
    // Named orgProperties to avoid shadowing the properties table
    const orgProperties = await db.query.properties.findMany({
      where: eq(properties.orgId, org.id),
      limit: 100,
    });

    for (const property of orgProperties) {
      await redis.setex(
        CacheKeys.property(org.id, property.id),
        3600,
        JSON.stringify(property)
      );
    }
  }

  logger.info('Cache warmed');
}

// Call on startup
await warmCache();

Monitoring

Cache Hit Rate

// src/cache/metrics.ts
export async function recordCacheHit(key: string) {
  await redis.incr('metrics:cache:hits');
}

export async function recordCacheMiss(key: string) {
  await redis.incr('metrics:cache:misses');
}

export async function getCacheHitRate(): Promise<number> {
  const hits = parseInt((await redis.get('metrics:cache:hits')) || '0', 10);
  const misses = parseInt((await redis.get('metrics:cache:misses')) || '0', 10);
  const total = hits + misses;
  return total > 0 ? hits / total : 0;
}

Dashboard Metrics

Cache Performance:
- Hit rate (target: >80%)
- Miss rate
- Average latency (cache vs DB)
- Evictions per minute
- Memory usage
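
To feed these dashboards, the hit/miss counters above have to be recorded on every cache lookup. A minimal sketch, reusing the getOrSet pattern and the record* helpers from this ADR (getOrSetWithMetrics is a hypothetical name, not an existing module):

// Instrumented variant of the cache wrapper
export async function getOrSetWithMetrics<T>(
  key: string,
  ttl: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) {
    await recordCacheHit(key);
    return JSON.parse(cached) as T;
  }

  await recordCacheMiss(key);
  const data = await fetchFn();
  await redis.setex(key, ttl, JSON.stringify(data));
  return data;
}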

Consequences

Positive

  • Fast Reads: <1ms cache hits
  • Reduced DB Load: 50-80% fewer DB queries
  • Multi-Tenant Safe: Namespaced keys
  • Auto Cleanup: TTL-based expiry

Negative

  • Stale Data: Up to TTL duration
  • Redis Dependency: Single point of failure
  • Memory Limits: Upstash free tier (256MB)

Mitigations

  • Use short TTLs for frequently changing data
  • Use Upstash with replication (99.99% SLA)
  • Monitor memory usage, upgrade tier if needed
  • Implement cache stampede prevention (see ADR-0038); a minimal sketch follows this list
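
For reference, one common stampede-prevention approach is a short-lived Redis lock so that only one caller refreshes an expired key while the others briefly wait and retry. The sketch below illustrates that idea only; ADR-0038 remains the authoritative design, and getOrSetLocked is a hypothetical helper name:

// Sketch: single-flight refresh using a short-lived lock (SET ... PX ... NX)
export async function getOrSetLocked<T>(
  key: string,
  ttl: number,
  fetchFn: () => Promise<T>
): Promise<T> {
  const cached = await redis.get(key);
  if (cached) {
    return JSON.parse(cached) as T;
  }

  // Try to become the single refresher; lock auto-expires after 5 seconds
  const lock = await redis.set(`${key}:lock`, '1', 'PX', 5000, 'NX');
  if (lock) {
    try {
      const data = await fetchFn();
      await redis.setex(key, ttl, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(`${key}:lock`);
    }
  }

  // Another caller is refreshing - wait briefly, then re-read (or fall back to the source)
  await new Promise((resolve) => setTimeout(resolve, 100));
  const retry = await redis.get(key);
  return retry ? (JSON.parse(retry) as T) : fetchFn();
}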

Validation Checklist

  • Redis client configured with retry strategy
  • Multi-tenant key namespacing implemented
  • TTL strategy defined per data type
  • Cache invalidation on writes
  • Cache hit rate monitoring
  • Memory usage alerts

References