Cache Strategy in System Design

This article covers the main cache strategies in system design (Cache-Aside, Read-Through, Write-Through, Write-Behind), plus cache layering (local + distributed), invalidation (TTL and active invalidation), and consistency tradeoffs. Each strategy is explained along with when to use it, and a reference table summarizes the comparison.

Overview

  • Cache-Aside: On read, the app checks the cache first; on a miss it loads from the DB and populates the cache. On write, it updates the DB and invalidates the cache entry. The most common pattern (see the sketch after this list).
  • Read-Through: The cache layer proxies reads; on a miss, the cache itself loads from the DB. The app talks only to the cache.
  • Write-Through: Writes update both the cache and the DB synchronously; good consistency, higher write latency.
  • Write-Behind: Writes update the cache first and are flushed to the DB asynchronously; high throughput, but risk of data loss.
  • Layering: Combine a local cache (e.g. Caffeine) with a distributed cache (e.g. Redis); the local tier is fastest, the distributed tier is shared across instances.
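
A minimal Cache-Aside sketch in Java. KvCache and UserDao are hypothetical interfaces standing in for any cache client and DB access layer; the point is the read-miss/populate and write/invalidate flow, not a specific library.

```java
import java.util.Optional;

// Hypothetical cache client and DB access layer for illustration only.
interface KvCache {
    Optional<String> get(String key);
    void put(String key, String value, long ttlSeconds);
    void delete(String key);
}

interface UserDao {
    String loadById(String id);          // read from DB
    void update(String id, String data); // write to DB
}

class UserService {
    private final KvCache cache;
    private final UserDao dao;

    UserService(KvCache cache, UserDao dao) {
        this.cache = cache;
        this.dao = dao;
    }

    // Read path: cache first; on a miss, load from DB and populate the cache.
    String getUser(String id) {
        String key = "user:" + id;
        return cache.get(key).orElseGet(() -> {
            String data = dao.loadById(id);
            if (data != null) {
                cache.put(key, data, 3600); // 1 h TTL
            }
            return data;
        });
    }

    // Write path: update the DB, then invalidate the cache entry.
    void updateUser(String id, String data) {
        dao.update(id, data);
        cache.delete("user:" + id);
    }
}
```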

Examples

Example 1: Strategy comparison

Strategy      | Read                 | Write                           | Consistency | Complexity
Cache-Aside   | Cache first, then DB | DB first, then invalidate cache | Eventual    | Low
Read-Through  | Cache proxies        | -                               | -           | Medium
Write-Through | -                    | Write cache + DB                | Strong      | Medium
Write-Behind  | -                    | Cache first, async DB           | Weak        | High

Example 2: Choice

  • Read-heavy, eventual consistency OK → Cache-Aside + TTL.
  • Write-heavy, strong consistency required → Write-Through or no cache.
  • High write throughput, data loss acceptable → Write-Behind (use with care).

Example 3: Multi-level cache

  • Request → local cache → Redis → DB. Keep the local TTL short (e.g. 1 min) and the Redis TTL longer (e.g. 1 h); a local miss falls back to Redis, and a Redis miss falls back to the DB (see the sketch below).
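
A sketch of that lookup chain, assuming the Caffeine and Jedis client libraries are available; loadFromDb is a hypothetical DB call.

```java
import java.time.Duration;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import redis.clients.jedis.Jedis;

class TwoLevelCache {
    // Local tier: short TTL (1 min), bounded size.
    private final Cache<String, String> local = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofMinutes(1))
            .maximumSize(10_000)
            .build();
    private final Jedis redis = new Jedis("localhost", 6379);

    String get(String key) {
        // 1. Local cache.
        String value = local.getIfPresent(key);
        if (value != null) return value;

        // 2. Distributed cache (Redis), longer TTL (1 h).
        value = redis.get(key);
        if (value == null) {
            // 3. Database.
            value = loadFromDb(key);
            if (value != null) {
                redis.setex(key, 3600, value);
            }
        }
        if (value != null) {
            local.put(key, value); // backfill the local tier
        }
        return value;
    }

    // Hypothetical DB lookup.
    private String loadFromDb(String key) {
        return "value-for-" + key;
    }
}
```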

Example 4: Failure modes

  • Penetration: Requests for keys that do not exist always miss the cache and hit the DB. Mitigation: cache an empty sentinel value with a short TTL.
  • Breakdown: A hot key expires and many concurrent requests hit the DB at once. Mitigation: single-flight loading or a per-key lock.
  • Avalanche: Many keys expire at the same time and the DB is overloaded. Mitigation: add jitter to TTLs. (All three mitigations are sketched after this list.)
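
A compact sketch of the three mitigations in Java. cacheGet, cachePut, and loadFromDb are hypothetical stand-ins for a real cache client and DB layer; the single-flight uses an in-process future map, so it only deduplicates loads within a single instance.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

class CacheGuards {
    private static final String EMPTY = "__EMPTY__"; // penetration: sentinel for "not in DB"
    private final ConcurrentHashMap<String, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();

    String get(String key) {
        String cached = cacheGet(key);
        if (cached != null) {
            return EMPTY.equals(cached) ? null : cached;
        }
        // Breakdown: single-flight, so only one caller per key loads from the DB.
        CompletableFuture<String> loader = inFlight.computeIfAbsent(key,
                k -> CompletableFuture.supplyAsync(() -> loadAndCache(k)));
        try {
            return loader.join();
        } finally {
            inFlight.remove(key, loader);
        }
    }

    private String loadAndCache(String key) {
        String value = loadFromDb(key);
        // Avalanche: random jitter so keys do not all expire together.
        long ttl = 3600 + ThreadLocalRandom.current().nextLong(300);
        // Penetration: cache a sentinel for missing rows, with a short TTL.
        cachePut(key, value != null ? value : EMPTY, value != null ? ttl : 60);
        return value;
    }

    // --- hypothetical cache/DB helpers ---
    private String cacheGet(String key) { return null; }
    private void cachePut(String key, String value, long ttlSeconds) { }
    private String loadFromDb(String key) { return null; }
}
```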

Core Mechanism / Behavior

  • Cache-Aside: The app owns the read/write logic; invalidating on write is simple and robust.
  • Write-Through: The cache and the DB are updated together; a write failure can leave them inconsistent unless the two updates are made atomic.
  • Write-Behind: Writes are flushed to the DB asynchronously; data queued but not yet flushed is lost if the process crashes (see the sketch below).
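
A minimal write-behind sketch: writes update an in-memory cache immediately, and a daemon thread flushes them to the DB. persistToDb is a hypothetical DB write; anything still queued when the process dies is lost, which is exactly the risk noted above.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

class WriteBehindCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final BlockingQueue<Map.Entry<String, String>> pending = new LinkedBlockingQueue<>();

    WriteBehindCache() {
        Thread flusher = new Thread(this::flushLoop, "write-behind-flusher");
        flusher.setDaemon(true);
        flusher.start();
    }

    // Write path: cache first, DB later. Entries still in `pending` are lost on crash.
    void put(String key, String value) {
        cache.put(key, value);
        pending.add(new SimpleEntry<>(key, value));
    }

    String get(String key) {
        return cache.get(key);
    }

    // Background flusher: drains queued writes to the DB.
    private void flushLoop() {
        try {
            while (true) {
                Map.Entry<String, String> e = pending.take();
                persistToDb(e.getKey(), e.getValue());
            }
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
    }

    // Hypothetical DB write.
    private void persistToDb(String key, String value) {
        System.out.println("persist " + key + "=" + value);
    }
}
```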

Key Rules

  • Invalidate on write: After a successful DB write, invalidate the cache entry; the next read repopulates it. This avoids the cache/DB mismatch that a partially failed dual write (write-through) can leave behind.
  • Mitigate penetration, breakdown, and avalanche: empty-value caching, single-flight loading, TTL jitter.
  • Monitor: Track hit rate, latency, and DB load; tune TTLs and the strategy choice from these metrics (see the stats sketch below).
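
Caffeine, for example, can record hit/miss statistics directly; a small sketch, assuming the Caffeine dependency is on the classpath:

```java
import java.time.Duration;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.stats.CacheStats;

public class CacheMetrics {
    public static void main(String[] args) {
        Cache<String, String> cache = Caffeine.newBuilder()
                .expireAfterWrite(Duration.ofMinutes(10))
                .recordStats()            // enable hit/miss counters
                .build();

        cache.put("k1", "v1");
        cache.getIfPresent("k1");         // hit
        cache.getIfPresent("k2");         // miss

        CacheStats stats = cache.stats();
        // Use these numbers to decide whether the TTL or the strategy needs tuning.
        System.out.printf("hitRate=%.2f misses=%d evictions=%d%n",
                stats.hitRate(), stats.missCount(), stats.evictionCount());
    }
}
```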

What's Next

See Cache-Aside and Caching Pitfalls, and see Redis and Hot Key for distributed cache design.