How to Cache Ecommerce Data Without Serving Stale Prices or Stock
Caching is one of the fastest ways to improve ecommerce performance — and one of the easiest ways to destroy customer trust. A customer who adds an item to their cart at $49, only to be charged $79 at checkout, will not come back. An "Add to Cart" button on a product that's been out of stock for three hours is a support ticket waiting to happen.
This guide covers how to cache the right data, with the right TTLs, and the right invalidation strategies — so you get the speed benefits without the accuracy failures.
Why Ecommerce Caching Is Different
Most caching guides treat staleness as an acceptable tradeoff. For ecommerce, certain data cannot be stale:
- Prices — subject to promotions, tax rules, and currency changes
- Stock levels — especially for limited inventory or flash sales
- Discount codes — can expire or hit redemption limits mid-session
- Shipping costs — depend on carrier rates and destination rules
Other data can tolerate more staleness:
- Product descriptions and images
- Category hierarchies and navigation
- Reviews and ratings
- Recommended products
The core discipline is knowing which bucket each piece of data belongs to — and applying a different caching strategy to each.
Step 1: Classify Your Data by Staleness Tolerance
Before writing any cache logic, build a data classification table for your store. Here’s a starting template:
| Data Type | Staleness Tolerance | Recommended TTL | Invalidation Trigger |
|---|---|---|---|
| Product images | High | 24h–7 days | Asset republish |
| Product description | Medium | 1–4 hours | Content update |
| Category/navigation | Medium | 1–2 hours | Catalogue change |
| Price (standard) | Low | 5–15 minutes | Price rule change |
| Price (promotional) | Very low | 1–2 minutes | Promotion event |
| Stock level | Very low | 30–60 seconds | Order placed / stock update |
| Discount code validity | None | Do not cache | Always live check |
| Cart totals | None | Do not cache | Always compute live |
Rule of thumb: If a stale value can cause a financial discrepancy or a broken promise to the customer, do not cache it — or use a write-through strategy with immediate invalidation.
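One way to make the classification enforceable is a single policy map that every cache write must consult. A minimal sketch; the type names and exact TTLs below are illustrative choices within the ranges in the table, not fixed values:

```python
from datetime import timedelta
from typing import Optional

# Illustrative policy derived from the classification table.
# None means the value must never be cached: always read live.
TTL_POLICY = {
    "product_image": timedelta(days=1),
    "product_description": timedelta(hours=2),
    "category_nav": timedelta(hours=1),
    "price_standard": timedelta(minutes=10),
    "price_promotional": timedelta(minutes=1),
    "stock_level": timedelta(seconds=45),
    "discount_code": None,  # always live check
    "cart_total": None,     # always compute live
}

def cache_ttl_seconds(data_type: str) -> Optional[int]:
    """Return the TTL in seconds, or None when the value must not be cached."""
    ttl = TTL_POLICY[data_type]
    return int(ttl.total_seconds()) if ttl is not None else None
```

Centralising the policy this way means a TTL change is one edit, and "do not cache" is a value the cache layer can refuse to write, not a convention developers have to remember.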
Step 2: Use a Layered Cache Architecture
A single cache layer creates a single point of failure and a single TTL for everything. Instead, use three layers with different responsibilities.
Request
│
▼
┌─────────────────────┐
│ CDN / Edge Cache │ ← Static assets, rendered category pages
└─────────────────────┘
│ miss
▼
┌─────────────────────┐
│ Application Cache │ ← Product data, price lists, inventory snapshots
│ (Redis / Memcached)│
└─────────────────────┘
│ miss
▼
┌─────────────────────┐
│ Origin / Database │ ← Ground truth: live prices, real stock counts
└─────────────────────┘
Layer 1: CDN / Edge Cache
Cache fully-rendered HTML for category pages and product pages with no personalisation. Use surrogate keys (supported by Fastly, Cloudflare, and AWS CloudFront) so you can invalidate all pages that reference a specific product when that product changes.
Do not cache at CDN level:
- Any page that shows a logged-in user’s price (B2B tiered pricing)
- Any page that reflects cart state
- Any page that displays real-time stock ("Only 2 left!")
Use Vary headers and cache fragments carefully. A misconfigured Vary: Cookie header can effectively disable your CDN cache for any visitor carrying a cookie — including anonymous visitors with analytics cookies — not just logged-in users.
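For pages that are safe to cache, surrogate keys can be attached as plain response headers. A sketch; the header names follow Fastly's `Surrogate-Key` convention (Cloudflare uses `Cache-Tag` instead), and the function name is illustrative:

```python
# Build cache headers for a rendered, non-personalised product or category page.
# Listing every referenced product as a surrogate key lets the CDN purge all
# pages that mention a product the moment that product changes.
def cacheable_page_headers(product_ids, max_age=3600):
    return {
        "Cache-Control": f"public, max-age={max_age}",
        "Surrogate-Key": " ".join(f"product-{pid}" for pid in product_ids),
        # Deliberately no "Vary: Cookie" here: it would fragment or bypass
        # the edge cache for every visitor carrying any cookie.
    }
```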
Layer 2: Application Cache (Redis)
This is your workhorse layer. Cache resolved price lists, inventory snapshots, product attribute sets, and anything that requires joining multiple database tables.
# Example: price lookup with short TTL and cache-aside pattern
# (assumes a redis-py client created with decode_responses=True)
def get_price(product_id: str, customer_group: str) -> Decimal:
    cache_key = f"price:{product_id}:{customer_group}"
    cached = redis.get(cache_key)
    if cached is not None:
        return Decimal(cached)
    price = db.query_price(product_id, customer_group)
    redis.setex(cache_key, 60, str(price))  # 60-second TTL (args: key, ttl, value)
    return price
Use namespaced keys (price:, stock:, product:) so you can flush entire categories during bulk updates without wiping unrelated cache entries.
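Bulk flushes of a namespace should use SCAN rather than KEYS, so the flush doesn't block Redis under load. A sketch using redis-py's `scan_iter`; the function name and batch size are illustrative:

```python
# Delete every key under one namespace (e.g. "price") in batches,
# leaving keys in other namespaces (e.g. "stock") untouched.
def flush_namespace(redis_client, namespace: str, batch_size: int = 500) -> int:
    deleted = 0
    batch = []
    for key in redis_client.scan_iter(match=f"{namespace}:*", count=batch_size):
        batch.append(key)
        if len(batch) >= batch_size:
            deleted += redis_client.delete(*batch)
            batch.clear()
    if batch:
        deleted += redis_client.delete(*batch)
    return deleted
```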
Layer 3: Origin / Database
Don't add application-level caching at this layer — it is your source of truth. Any caching here belongs to the database itself (its query cache or read replicas), not to application logic.
Step 3: Implement Event-Driven Cache Invalidation
TTL-based expiry is a safety net, not a strategy. For price and stock data, you need event-driven invalidation — the cache is cleared the moment the source data changes.
The pattern: publish on write, invalidate on consume
Price update in ERP / PIM
│
▼
Message broker
(Kafka / SQS / Redis Pub/Sub)
│
▼
Cache invalidation service
│
├── Delete Redis key: price:{product_id}:*
└── Purge CDN surrogate key: product-{product_id}
This approach means your cache never serves a stale price more than a few seconds after a price change is committed — regardless of your TTL settings.
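A minimal sketch of the invalidation service's consumer, assuming the event payload shown later in this step. `delete_keys` and `purge_surrogate_key` are hypothetical adapters for your Redis client and CDN purge API:

```python
import json

# One handler that fans a price_updated event out to both cache layers.
def handle_invalidation_event(raw_message, delete_keys, purge_surrogate_key):
    event = json.loads(raw_message)
    if event.get("event") != "price_updated":
        return []  # not ours; another consumer handles it
    product_id = event["product_id"]
    redis_keys = [
        f"price:{product_id}:{group}"
        for group in event.get("affected_customer_groups", [])
    ]
    delete_keys(redis_keys)                       # application-cache layer
    purge_surrogate_key(f"product-{product_id}")  # CDN layer
    return redis_keys
```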
What to publish
Your price/inventory update events should include enough context to invalidate precisely:
{
  "event": "price_updated",
  "product_id": "SKU-12345",
  "affected_customer_groups": ["retail", "wholesale"],
  "effective_at": "2026-03-08T10:00:00Z"
}
Precise events = precise invalidation. Avoid "nuke everything" cache clears during peak traffic — they cause thundering herd problems where every cache miss hits your database simultaneously.
Step 4: Prevent Thundering Herd on Cache Miss
When a popular cache key expires, hundreds of concurrent requests can hit your database at the same time. This is the thundering herd problem, and it can take down a database during a flash sale.
Solution A: Probabilistic early expiry (jitter)
Add random jitter to your TTLs so cache entries for similar items don’t all expire simultaneously:
import random

BASE_TTL = 60  # seconds
jitter = random.randint(0, 10)
redis.setex(cache_key, BASE_TTL + jitter, data)  # effective TTL: 60-70 seconds
Solution B: Request coalescing (single-flight)
Ensure only one request recomputes a cache miss while others wait:
import time

# Using a distributed lock to prevent concurrent recomputation
lock_key = f"lock:{cache_key}"
if redis.set(lock_key, "1", nx=True, ex=5):  # nx = only set if not exists
    # This request won the lock — recompute and repopulate
    value = db.fetch(...)
    redis.setex(cache_key, 60, value)
    redis.delete(lock_key)
else:
    # Another request is recomputing — wait briefly and retry
    time.sleep(0.05)
    value = redis.get(cache_key)
Solution C: Stale-while-revalidate
Serve the stale cached value immediately while recomputing asynchronously in the background. This is ideal for product descriptions and navigation — not for prices where accuracy matters.
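A toy stale-while-revalidate wrapper, using an in-memory dict and a background thread as stand-ins for Redis and a job queue; all names here are illustrative:

```python
import threading
import time

_store = {}        # key -> (value, fresh_until)
_refreshing = set()
_lock = threading.Lock()

def get_swr(key, compute, ttl=300):
    """Serve whatever is cached immediately; refresh stale entries in the background."""
    now = time.time()
    entry = _store.get(key)
    if entry is None:
        value = compute()  # cold miss: must compute synchronously
        _store[key] = (value, now + ttl)
        return value
    value, fresh_until = entry
    if now >= fresh_until:
        with _lock:
            needs_refresh = key not in _refreshing
            if needs_refresh:
                _refreshing.add(key)
        if needs_refresh:
            def _refresh():
                try:
                    _store[key] = (compute(), time.time() + ttl)
                finally:
                    _refreshing.discard(key)
            threading.Thread(target=_refresh, daemon=True).start()
    return value  # possibly stale, but served instantly
```

The lock-guarded `_refreshing` set doubles as single-flight protection, so a popular stale key triggers exactly one background recompute.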
Step 5: Design the Checkout Path to Always Use Live Data
No matter how well your product pages are cached, the checkout flow must use live data for every price and stock calculation.
Non-negotiable live checks at checkout:
- Re-price the cart on checkout initiation — fetch current prices, reapply promotions, recalculate tax
- Reserve stock at order confirmation, not at "Add to Cart" — use a short reservation window (e.g., 15 minutes) and release it if the order isn’t completed
- Validate discount codes live at the point of application — check expiry, usage limits, and eligibility in real time
- Re-validate shipping costs at the final step — carrier rates can change during a long checkout session
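The first of these checks, cart re-pricing, can be sketched as a pure function. `fetch_live_price` is a hypothetical adapter that always hits the pricing service, never the cache:

```python
from decimal import Decimal

def reprice_cart(cart_items, fetch_live_price):
    """Return (live_total, changed_items) using live prices only."""
    changed = []
    total = Decimal("0")
    for item in cart_items:
        live = fetch_live_price(item["product_id"])
        if live != item["displayed_price"]:
            # Surface the change to the customer before taking payment.
            changed.append((item["product_id"], item["displayed_price"], live))
        total += live * item["qty"]
    return total, changed
```

Returning the changed items, not just the new total, lets the UI explain the $49-to-$79 case instead of silently charging the higher price.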
A common pattern for stock reservation:
Customer clicks "Place Order"
            │
            ▼
Attempt to reserve stock (decrement with floor check)
            │
      ┌─────┴──────┐
      │            │
   Success     Failure (stock = 0)
      │            │
      ▼            ▼
  Process      Return "Item no longer
  payment      available" before payment
               is attempted
Never attempt to charge a customer for an item you haven’t already confirmed is available.
Step 6: Monitor Cache Health in Production
Cache problems are silent failures. Add monitoring to catch them before customers do.
Metrics to track:
| Metric | What It Tells You | Alert Threshold |
|---|---|---|
| Cache hit rate | Overall cache effectiveness | < 80% warrants investigation |
| Stale read rate | How often TTL-expired data is served | Track trend, alert on spikes |
| Invalidation lag | Time between data change and cache clear | > 30s for price data is a problem |
| Cache eviction rate | Cache is too small or keys too large | High eviction = resize or prune |
| Price discrepancy events | Cart price ≠ confirmed order price | Any occurrence needs investigation |
Set up a canary check: Every 60 seconds, fetch a known product’s price from cache and from the database. If they differ by more than your acceptable tolerance, fire an alert. This gives you an early warning before customers see the inconsistency.
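The canary itself is only a few lines of scheduled code. `get_cached_price`, `get_db_price`, and `fire_alert` below are hypothetical adapters for your cache, database, and alerting system:

```python
from decimal import Decimal

def price_canary(product_id, get_cached_price, get_db_price, fire_alert,
                 tolerance=Decimal("0.00")):
    """Compare cached vs. live price for a sentinel product; alert on divergence."""
    cached = get_cached_price(product_id)
    live = get_db_price(product_id)
    if cached is None:
        return True  # a cache miss is fine; only a mismatch is an incident
    if abs(cached - live) > tolerance:
        fire_alert(f"price mismatch for {product_id}: cache={cached} db={live}")
        return False
    return True
```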
Step 7: Handle Flash Sales and Peak Events Differently
Flash sales break standard caching assumptions. Prices change on a timer, stock depletes in seconds, and traffic spikes 10–100x.
Pre-warm your cache before the event starts — populate price and product data into Redis in the minutes before go-live, so the first wave of traffic doesn’t hit cold cache.
Use a dedicated inventory service for high-velocity stock updates. A Redis counter with atomic decrement (DECR) is far more performant than a database row lock under concurrent load:
# Atomic stock decrement — returns remaining stock after decrement
remaining = redis.decr(f"stock:{product_id}")
if remaining < 0:
    # Oversold — revert and reject
    redis.incr(f"stock:{product_id}")
    raise OutOfStockError()
Degrade gracefully under load. If your cache layer becomes unavailable, your fallback should be a simplified response ("Check availability at checkout") rather than hammering your database with uncached requests.
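A minimal sketch of that fallback, assuming a hypothetical `cache_get` adapter that raises `ConnectionError` when the cache layer is down and returns None on a miss:

```python
FALLBACK = "Check availability at checkout"

def get_stock_badge(product_id, cache_get):
    """Return a stock badge for a product page, degrading safely on cache failure."""
    try:
        remaining = cache_get(f"stock:{product_id}")
    except ConnectionError:
        return FALLBACK  # cache down: degrade, don't hammer the origin
    if remaining is None:
        return FALLBACK  # miss: defer to the live check at checkout
    return "In stock" if int(remaining) > 0 else "Out of stock"
```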
Production Checklist
- [ ] Data classification table complete — every data type has an assigned TTL and invalidation strategy
- [ ] Layered cache architecture in place (CDN + application cache + origin)
- [ ] Namespaced Redis keys for targeted invalidation
- [ ] Event-driven invalidation for price and stock changes
- [ ] TTL jitter applied to prevent thundering herd
- [ ] Checkout path verified to use live data only
- [ ] Stock reservation logic implemented at order confirmation
- [ ] Discount code validation is always live
- [ ] Cache hit rate, invalidation lag, and price discrepancy monitoring active
- [ ] Flash sale runbook written and tested
Summary
Fast and accurate ecommerce caching is not about choosing one or the other — it’s about applying the right strategy to the right data. Cache your static content aggressively. Apply short TTLs and event-driven invalidation for prices and stock. Never cache cart totals or discount code validity. And build your checkout path to treat live data as non-negotiable.
The teams that get this right don’t just have faster stores — they have stores customers trust.
Want a review of your ecommerce caching architecture? Book a free consultation with Simplico.