How to Cache Ecommerce Data Without Serving Stale Prices or Stock
Caching is one of the fastest ways to improve ecommerce performance — and one of the easiest ways to destroy customer trust. A user who adds an item to their cart at $49, only to be charged $79 at checkout, will not come back. An "Add to Cart" button on a product that’s been out of stock for three hours is a support ticket waiting to happen.
This guide covers how to cache the right data, with the right TTLs, and the right invalidation strategies — so you get the speed benefits without the accuracy failures.
Why Ecommerce Caching Is Different
Most caching guides treat staleness as an acceptable tradeoff. For ecommerce, certain data cannot be stale:
- Prices — subject to promotions, tax rules, and currency changes
- Stock levels — especially for limited inventory or flash sales
- Discount codes — can expire or hit redemption limits mid-session
- Shipping costs — depend on carrier rates and destination rules
Other data can tolerate more staleness:
- Product descriptions and images
- Category hierarchies and navigation
- Reviews and ratings
- Recommended products
The core discipline is knowing which bucket each piece of data belongs to — and applying a different caching strategy to each.
Step 1: Classify Your Data by Staleness Tolerance
Before writing any cache logic, build a data classification table for your store. Here’s a starting template:
| Data Type | Staleness Tolerance | Recommended TTL | Invalidation Trigger |
|---|---|---|---|
| Product images | High | 24h–7 days | Asset republish |
| Product description | Medium | 1–4 hours | Content update |
| Category/navigation | Medium | 1–2 hours | Catalogue change |
| Price (standard) | Low | 5–15 minutes | Price rule change |
| Price (promotional) | Very low | 1–2 minutes | Promotion event |
| Stock level | Very low | 30–60 seconds | Order placed / stock update |
| Discount code validity | None | Do not cache | Always live check |
| Cart totals | None | Do not cache | Always compute live |
Rule of thumb: If a stale value can cause a financial discrepancy or a broken promise to the customer, do not cache it — or use a write-through strategy with immediate invalidation.
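One way to enforce the table is to encode it as a policy map that the cache layer consults before every write. This is a minimal sketch; the type names and TTL values below are illustrative, not prescriptive:

```python
# Illustrative staleness policy derived from the classification table.
# A TTL of None means "never cache; always read live".
CACHE_POLICY = {
    "product_image": 24 * 3600,       # high tolerance
    "product_description": 2 * 3600,  # medium
    "category_nav": 3600,             # medium
    "price_standard": 10 * 60,        # low
    "price_promotional": 60,          # very low
    "stock_level": 45,                # very low
    "discount_code": None,            # always live check
    "cart_totals": None,              # always compute live
}

def ttl_for(data_type: str):
    """Return the TTL in seconds, or None if the value must not be cached."""
    if data_type not in CACHE_POLICY:
        # Fail loudly: unclassified data should never be cached by accident.
        raise KeyError(f"unclassified data type: {data_type}")
    return CACHE_POLICY[data_type]
```

Failing on unknown types is deliberate: it forces every new data type through the classification exercise before it can be cached.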
Step 2: Use a Layered Cache Architecture
A single cache layer creates a single point of failure and a single TTL for everything. Instead, use three layers with different responsibilities.
```
        Request
           │
           ▼
┌─────────────────────┐
│  CDN / Edge Cache   │ ← Static assets, rendered category pages
└─────────────────────┘
           │ miss
           ▼
┌─────────────────────┐
│ Application Cache   │ ← Product data, price lists, inventory snapshots
│ (Redis / Memcached) │
└─────────────────────┘
           │ miss
           ▼
┌─────────────────────┐
│ Origin / Database   │ ← Ground truth: live prices, real stock counts
└─────────────────────┘
```
Layer 1: CDN / Edge Cache
Cache fully-rendered HTML for category pages and product pages with no personalisation. Use surrogate keys (supported by Fastly, Cloudflare, and AWS CloudFront) so you can invalidate all pages that reference a specific product when that product changes.
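As a sketch of what a surrogate-key purge looks like, the helper below builds a request in the shape of Fastly's purge-by-key API (other CDNs expose different endpoints); `service_id` and `api_token` are placeholders for your own credentials, and the `product-{id}` key format is an assumed convention:

```python
# Sketch: purge every cached page tagged with a product's surrogate key.
# Endpoint shape follows Fastly's purge-by-key API; adapt for other CDNs.
def build_purge_request(service_id: str, api_token: str, product_id: str):
    """Return (method, url, headers) for a surrogate-key purge request."""
    surrogate_key = f"product-{product_id}"  # assumed tagging convention
    url = f"https://api.fastly.com/service/{service_id}/purge/{surrogate_key}"
    headers = {"Fastly-Key": api_token, "Accept": "application/json"}
    return "POST", url, headers
```

Send the result with your HTTP client of choice whenever a product changes; because every page referencing the product carries its surrogate key, one purge clears them all.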
Do not cache at CDN level:
- Any page that shows a logged-in user’s price (B2B tiered pricing)
- Any page that reflects cart state
- Any page that displays real-time stock ("Only 2 left!")
Use Vary headers and cache fragments carefully. A misconfigured Vary: Cookie header can accidentally bypass your CDN cache for all logged-in users.
Layer 2: Application Cache (Redis)
This is your workhorse layer. Cache resolved price lists, inventory snapshots, product attribute sets, and anything that requires joining multiple database tables.
```python
# Example: price lookup with short TTL and cache-aside pattern
from decimal import Decimal

def get_price(product_id: str, customer_group: str) -> Decimal:
    cache_key = f"price:{product_id}:{customer_group}"
    cached = redis.get(cache_key)  # returns None on a miss
    if cached is not None:
        return Decimal(cached.decode())  # redis-py returns bytes by default
    price = db.query_price(product_id, customer_group)
    redis.setex(cache_key, 60, str(price))  # setex(name, time, value): 60-second TTL
    return price
```
Use namespaced keys (price:, stock:, product:) so you can flush entire categories during bulk updates without wiping unrelated cache entries.
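A sketch of such a namespaced flush, using SCAN rather than KEYS so a bulk invalidation doesn't block Redis; the `FakeRedis` class is a tiny stand-in for a real redis-py client, included only so the example is self-contained:

```python
import fnmatch

def flush_namespace(client, pattern: str) -> int:
    """Delete all keys matching a namespace pattern, e.g. 'price:*'.

    Iterates with SCAN (non-blocking) instead of KEYS (blocks the server),
    which matters during bulk updates on a busy store.
    """
    deleted = 0
    for key in client.scan_iter(match=pattern):
        client.delete(key)
        deleted += 1
    return deleted

# Minimal in-memory stand-in for a redis-py client, for illustration only.
class FakeRedis:
    def __init__(self):
        self.store = {}
    def set(self, key, value):
        self.store[key] = value
    def scan_iter(self, match="*"):
        return [k for k in list(self.store) if fnmatch.fnmatch(k, match)]
    def delete(self, key):
        self.store.pop(key, None)
```

With real redis-py the call is identical: `flush_namespace(redis_client, "price:*")` clears every price entry and leaves `stock:` and `product:` keys untouched.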
Layer 3: Origin / Database
Never cache at this layer — it is your source of truth. Any caching here should be handled by your database’s own query cache or read replicas, not by application logic.
Step 3: Implement Event-Driven Cache Invalidation
TTL-based expiry is a safety net, not a strategy. For price and stock data, you need event-driven invalidation — the cache is cleared the moment the source data changes.
The pattern: publish on write, invalidate on consume
```
Price update in ERP / PIM
        │
        ▼
Message broker
(Kafka / SQS / Redis Pub/Sub)
        │
        ▼
Cache invalidation service
        │
        ├── Delete Redis key:        price:{product_id}:*
        └── Purge CDN surrogate key: product-{product_id}
```
This approach means your cache never serves a stale price more than a few seconds after a price change is committed — regardless of your TTL settings.
What to publish
Your price/inventory update events should include enough context to invalidate precisely:
```json
{
  "event": "price_updated",
  "product_id": "SKU-12345",
  "affected_customer_groups": ["retail", "wholesale"],
  "effective_at": "2026-03-08T10:00:00Z"
}
```
Precise events = precise invalidation. Avoid "nuke everything" cache clears during peak traffic — they cause thundering herd problems where every cache miss hits your database simultaneously.
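Given an event like the one above, the invalidation service can compute exactly which entries to touch. A sketch, assuming the `price:{product_id}:{customer_group}` and `product-{product_id}` key conventions used earlier:

```python
def invalidation_targets(event: dict):
    """Map a price_updated event to the cache entries it invalidates.

    Returns (redis_keys, cdn_surrogate_keys). Precise events let us delete
    only the affected customer groups instead of flushing a whole namespace.
    """
    if event.get("event") != "price_updated":
        return [], []
    pid = event["product_id"]
    redis_keys = [f"price:{pid}:{group}"
                  for group in event["affected_customer_groups"]]
    cdn_keys = [f"product-{pid}"]
    return redis_keys, cdn_keys
```

The service then feeds `redis_keys` to a pipelined DELETE and `cdn_keys` to the CDN purge endpoint.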
Step 4: Prevent Thundering Herd on Cache Miss
When a popular cache key expires, hundreds of concurrent requests can hit your database at the same time. This is the thundering herd problem, and it can take down a database during a flash sale.
Solution A: Probabilistic early expiry (jitter)
Add random jitter to your TTLs so cache entries for similar items don’t all expire simultaneously:
```python
import random

BASE_TTL = 60  # seconds
jitter = random.randint(0, 10)
redis.setex(cache_key, BASE_TTL + jitter, data)  # setex takes the TTL positionally
```
Solution B: Request coalescing (single-flight)
Ensure only one request recomputes a cache miss while others wait:
```python
import time

# Using a distributed lock to prevent concurrent recomputation
lock_key = f"lock:{cache_key}"
if redis.set(lock_key, "1", nx=True, ex=5):  # nx = only set if not exists
    # This request won the lock — recompute and repopulate
    value = db.fetch(...)
    redis.setex(cache_key, 60, value)
    redis.delete(lock_key)
else:
    # Another request is recomputing — wait briefly and retry
    time.sleep(0.05)
    value = redis.get(cache_key)
```
Solution C: Stale-while-revalidate
Serve the stale cached value immediately while recomputing asynchronously in the background. This is ideal for product descriptions and navigation — not for prices where accuracy matters.
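A minimal stale-while-revalidate sketch, using an in-process dict and a background thread as stand-ins for Redis and a proper task queue; `soft_ttl` marks the age at which a value is stale enough to refresh:

```python
import threading
import time

def get_with_swr(cache: dict, key: str, fetch, soft_ttl: float = 300.0):
    """Stale-while-revalidate for tolerant data (descriptions, navigation).

    cache maps key -> (value, stored_at). Past soft_ttl the stale value is
    still returned immediately while a background thread refreshes it.
    Do not use this for prices or stock, where staleness is unacceptable.
    """
    entry = cache.get(key)
    if entry is None:
        # Cold miss: fetch synchronously, there is nothing stale to serve.
        value = fetch()
        cache[key] = (value, time.monotonic())
        return value
    value, stored_at = entry
    if time.monotonic() - stored_at > soft_ttl:
        # Serve stale now, refresh asynchronously.
        def refresh():
            cache[key] = (fetch(), time.monotonic())
        threading.Thread(target=refresh, daemon=True).start()
    return value
```

In production you would also want single-flight semantics around the refresh (Solution B) so a hot key doesn't spawn many refresh threads at once.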
Step 5: Design the Checkout Path to Always Use Live Data
No matter how well your product pages are cached, the checkout flow must use live data for every price and stock calculation.
Non-negotiable live checks at checkout:
- Re-price the cart on checkout initiation — fetch current prices, reapply promotions, recalculate tax
- Reserve stock at order confirmation, not at "Add to Cart" — use a short reservation window (e.g., 15 minutes) and release it if the order isn’t completed
- Validate discount codes live at the point of application — check expiry, usage limits, and eligibility in real time
- Re-validate shipping costs at the final step — carrier rates can change during a long checkout session
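The first of these checks, re-pricing the cart, can be as simple as recomputing the total from live lookups. This sketch omits promotions and tax, and assumes `live_price_lookup` hits the source of truth rather than any cache:

```python
from decimal import Decimal

def reprice_cart(items, live_price_lookup) -> Decimal:
    """Recompute the cart total from live prices at checkout initiation.

    items is a list of (product_id, qty) pairs. live_price_lookup must read
    from the database, never from Redis or the CDN. A real checkout would
    also reapply promotions and recalculate tax here.
    """
    total = Decimal("0")
    for product_id, qty in items:
        total += live_price_lookup(product_id) * qty
    return total
```

If the recomputed total differs from what the cart page showed, surface the change to the customer before taking payment rather than silently charging the new amount.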
A common pattern for stock reservation:
```
Customer clicks "Place Order"
            │
            ▼
Attempt to reserve stock (decrement with floor check)
            │
      ┌─────┴──────┐
      │            │
   Success      Failure (stock = 0)
      │            │
      ▼            ▼
   Process      Return "Item no longer available"
   payment      before payment is attempted
```
Never attempt to charge a customer for an item you haven’t already confirmed is available.
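A sketch of the reservation flow with a 15-minute window, using in-memory dicts for clarity; in production the same steps would run as atomic Redis or database operations:

```python
import time

def reserve_stock(stock: dict, reservations: dict, product_id: str,
                  qty: int = 1, window_seconds: int = 900) -> bool:
    """Reserve stock at order confirmation; reject before payment if unavailable.

    stock maps product_id -> available units; reservations maps a
    reservation id -> (product_id, qty, expires_at).
    """
    now = time.monotonic()
    # Release reservations whose window has lapsed (abandoned checkouts).
    for rid, (pid, rqty, expires_at) in list(reservations.items()):
        if expires_at <= now:
            stock[pid] = stock.get(pid, 0) + rqty
            del reservations[rid]
    if stock.get(product_id, 0) < qty:
        return False  # reject before any payment is attempted
    stock[product_id] -= qty
    reservations[f"{product_id}:{now}"] = (product_id, qty, now + window_seconds)
    return True
```

Payment is only initiated on a `True` result, which keeps the flow consistent with the diagram above: availability is confirmed first, money moves second.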
Step 6: Monitor Cache Health in Production
Cache problems are silent failures. Add monitoring to catch them before customers do.
Metrics to track:
| Metric | What It Tells You | Alert Threshold |
|---|---|---|
| Cache hit rate | Overall cache effectiveness | < 80% warrants investigation |
| Stale read rate | How often TTL-expired data is served | Track trend, alert on spikes |
| Invalidation lag | Time between data change and cache clear | > 30s for price data is a problem |
| Cache eviction rate | Cache is too small or keys too large | High eviction = resize or prune |
| Price discrepancy events | Cart price ≠ confirmed order price | Any occurrence needs investigation |
Set up a canary check: Every 60 seconds, fetch a known product’s price from cache and from the database. If they differ by more than your acceptable tolerance, fire an alert. This gives you an early warning before customers see the inconsistency.
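The comparison at the heart of the canary is one line; the scheduling and alerting around it are where the real work is. A sketch, with `tolerance` expressing how much drift you accept (zero for exact-match stores):

```python
from decimal import Decimal

def price_canary(cached_price, live_price,
                 tolerance: Decimal = Decimal("0.00")) -> bool:
    """Return True when the cached price is within tolerance of the live price.

    Run on a 60-second timer against a known product; a False result should
    fire an alert before customers see the inconsistency.
    """
    drift = abs(Decimal(str(cached_price)) - Decimal(str(live_price)))
    return drift <= tolerance
```

Using `Decimal` rather than floats avoids false alerts from binary rounding on values like 49.99.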
Step 7: Handle Flash Sales and Peak Events Differently
Flash sales break standard caching assumptions. Prices change on a timer, stock depletes in seconds, and traffic spikes 10–100x.
Pre-warm your cache before the event starts — populate price and product data into Redis in the minutes before go-live, so the first wave of traffic doesn’t hit cold cache.
Use a dedicated inventory service for high-velocity stock updates. A Redis counter with atomic decrement (DECR) is far more performant than a database row lock under concurrent load:
```python
# Atomic stock decrement — returns remaining stock after decrement
remaining = redis.decr(f"stock:{product_id}")
if remaining < 0:
    # Oversold — revert and reject
    redis.incr(f"stock:{product_id}")
    raise OutOfStockError()
```
Degrade gracefully under load. If your cache layer becomes unavailable, your fallback should be a simplified response ("Check availability at checkout") rather than hammering your database with uncached requests.
Production Checklist
- [ ] Data classification table complete — every data type has an assigned TTL and invalidation strategy
- [ ] Layered cache architecture in place (CDN + application cache + origin)
- [ ] Namespaced Redis keys for targeted invalidation
- [ ] Event-driven invalidation for price and stock changes
- [ ] TTL jitter applied to prevent thundering herd
- [ ] Checkout path verified to use live data only
- [ ] Stock reservation logic implemented at order confirmation
- [ ] Discount code validation is always live
- [ ] Cache hit rate, invalidation lag, and price discrepancy monitoring active
- [ ] Flash sale runbook written and tested
Summary
Fast and accurate ecommerce caching is not about choosing one or the other — it’s about applying the right strategy to the right data. Cache your static content aggressively. Apply short TTLs and event-driven invalidation for prices and stock. Never cache cart totals or discount code validity. And build your checkout path to treat live data as non-negotiable.
The teams that get this right don’t just have faster stores — they have stores customers trust.
Want a review of your ecommerce caching architecture? Book a free consultation with Simplico.