Server-Side Caching vs Client-Side Caching
Server-Side Caching
Definition: Server-side caching involves storing frequently accessed data on the server. When a client requests data, the server first checks its cache. If the data is present (cache hit), it is served from the cache; otherwise, the server processes the request and may cache the result for future requests.
It caches dynamic content generated on the server (e.g., full HTML pages, database query results, computed API responses).
Characteristics
Location: Cache resides on the server infrastructure (application server, database layer, or distributed cache).
Control: Fully managed by the server—client is unaware of caching details.
Types:
Database query caching – Stores results of common queries.
Page caching – Stores pre-rendered HTML pages.
Object caching – Stores computed objects (e.g., user sessions, product details) in memory (e.g., Redis, Memcached).
Examples
Database query results:
A product catalog request hits the server cache instead of running a DB query repeatedly.
Full HTML page caching:
A news site stores rendered article pages, returning cached pages instantly for repeated views.
Pros
✅ Reduced latency: Faster response times for clients.
✅ Lower backend load: Fewer database hits, reduced computation.
✅ Scales better: Handles high traffic with fewer resources.
Cons
⚠️ Extra resource usage: Consumes RAM or disk space on the server.
⚠️ Cache invalidation complexity: Must update or invalidate cache when underlying data changes to avoid stale data.
⚠️ Potential inconsistency: Clients may receive outdated results if cache is not properly refreshed.
Client-Side Caching
Definition
Client-side caching stores data locally on the client’s device (e.g., browser cache, mobile app storage) to reduce the need for repeated requests to the server, resulting in faster load times and offline access. It stores static resources (CSS, JS, images) for faster rendering on the user’s device.
Characteristics
Location: Cache is stored on the user’s device (browser, mobile, or desktop app).
Control: The cache itself lives on and is managed by the client; the server can only influence it through policies such as Cache-Control and ETag headers.
Types:
Browser cache: Images, JavaScript, CSS, fonts, and static assets.
Application cache: User data or session data in local storage.
Examples
Browser caching:
When a user first visits a website, static resources (CSS, JS, images) are downloaded and cached locally.
On subsequent visits, the browser loads them from cache, avoiding new network requests.
Mobile app caching:
A weather app stores the latest forecast locally.
If opened offline, the cached data is displayed until the app fetches fresh data.
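The weather-app example above can be sketched as a small disk cache with an offline fallback; the file path and function names are hypothetical:

```python
import json
import os
import tempfile

# Hypothetical local cache file for a weather app (path is illustrative).
CACHE_FILE = os.path.join(tempfile.gettempdir(), "weather_cache.json")

def save_forecast(forecast):
    # Persist the latest forecast locally after a successful fetch.
    with open(CACHE_FILE, "w") as f:
        json.dump(forecast, f)

def load_forecast(fetch_fresh):
    """Try to fetch fresh data; fall back to the local cache when offline."""
    try:
        forecast = fetch_fresh()   # e.g., an HTTP call in a real app
        save_forecast(forecast)
        return forecast
    except OSError:                # network unavailable
        with open(CACHE_FILE) as f:
            return json.load(f)
```

When the network call fails, the user still sees the last cached forecast, which is the offline-access benefit listed below.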
Pros
✅ Reduced network traffic: Fewer server round-trips improve response times.
✅ Offline access: Users can view cached data even when disconnected.
✅ Improved user experience: Faster page and app loading times.
Cons
⚠️ Storage limitations: Restricted by browser or device storage quota.
⚠️ Stale data risk: Cached data may become outdated if not properly invalidated or refreshed from the server.
Key Differences
Cache Location
Server Side: Cache is maintained on the server (RAM, disk, or distributed cache like Redis).
Client Side: Cache is stored on the user’s device (browser cache, mobile app storage).
Data Scope
Server Side: Benefits all users—cached data is shared across multiple client requests.
Client Side: Benefits only the individual user—each client maintains its own cache.
Data Freshness
Server Side: Centrally managed, allowing easier invalidation or updates to keep data fresh.
Client Side: Harder to control—can serve stale data if the client cache isn’t refreshed properly.
✅ Key Takeaway:
Server-side caching: Best for shared, dynamic data used by many users (reduces server load).
Client-side caching: Best for static or user-specific data, improving speed and offline experience.
Both can be combined for optimal performance: the server avoids recomputing pages, and the client avoids repeated downloads of unchanged files.
How do you invalidate client side and server side caching?
Client-Side:
A) TTL
B) Stale-While-Revalidate
Header example:
Cache-Control: max-age=0, stale-while-revalidate=60
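The semantics of that header can be sketched as a freshness check; the parameter defaults below mirror the header values above, and the function name is illustrative:

```python
def cache_state(age_seconds, max_age=0, swr=60):
    """Classify a cached response under max-age + stale-while-revalidate.

    Fresh while age <= max_age; servable-while-revalidating-in-the-
    background while age <= max_age + swr; expired after that window.
    """
    if age_seconds <= max_age:
        return "fresh"
    if age_seconds <= max_age + swr:
        return "stale-while-revalidate"  # serve cached copy, refetch async
    return "expired"
```

With max-age=0, every response is immediately stale, but for the next 60 seconds the client may serve it instantly while revalidating in the background.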
C) Versioned URLs: after every deployment, the version changes (the most reliable “invalidation” for static assets).
For JS/CSS/images:
/app.js?v=42 or /app.42a9c.js
When you deploy a new version, the URL changes → client is forced to fetch new content, even if it cached the old one for a year.
This is the standard method for front-end builds.
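The content-hash variant (/app.42a9c.js) can be sketched as follows; the digest length and function name are illustrative choices, not a standard:

```python
import hashlib

def versioned_name(filename, content, digest_len=5):
    """Build a content-hashed asset name like app.42a9c.js.

    Any change to the file content changes the hash, which changes the
    URL, which forces clients to fetch the new version even if the old
    one was cached for a year.
    """
    digest = hashlib.sha256(content).hexdigest()[:digest_len]
    stem, dot, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}" if dot else f"{filename}.{digest}"
```

Front-end bundlers apply this idea automatically at build time, so the HTML always references the current hashed filenames.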
Important:
Static assets (JS/CSS/images): cache aggressively + versioned URLs
Public read APIs (catalog/config): short max-age so clients re-check periodically
Server-Side:
A) TTL
B) VERSIONING:
Instead of deleting, you bump a version:
Cache key becomes: user:123:v17
When user updates, version increments to v18
Readers now automatically use new key
Effect
Old keys naturally expire by TTL
Useful when
One update invalidates thousands of derived keys (large fan-out)
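The version-bump scheme above can be sketched over a plain dict standing in for Redis (in a real setup every key would also carry a TTL so old versions expire naturally); the key format user:123:vN follows the example above:

```python
# Illustrative versioned cache keys; a dict stands in for Redis here.
cache = {}
versions = {}  # entity id -> current version number

def cache_key(user_id):
    v = versions.get(user_id, 1)
    return f"user:{user_id}:v{v}"

def read_user(user_id, load_from_db):
    key = cache_key(user_id)
    if key not in cache:
        cache[key] = load_from_db(user_id)  # miss: rebuild under current version
    return cache[key]

def invalidate_user(user_id):
    # Bump the version instead of deleting derived keys one by one;
    # all readers switch to the new key immediately.
    versions[user_id] = versions.get(user_id, 1) + 1
```

A single increment redirects every reader at once, which is why this works well when one update would otherwise have to delete thousands of derived keys.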
C) Write-through cache: the cache is updated synchronously with the database on every write, so it never serves stale data.
D) Event-driven invalidation (pub/sub)
What you do
After DB update, publish an event like UserUpdated(123)
Cache layer or services subscribe and delete/update relevant keys
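The two steps above can be sketched with a minimal in-process pub/sub; in production the bus would be Redis pub/sub, Kafka, or similar, and the topic name UserUpdated follows the example above:

```python
from collections import defaultdict

# Minimal in-process pub/sub to illustrate event-driven invalidation.
subscribers = defaultdict(list)
cache = {"user:123": {"name": "old"}}

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, payload):
    for handler in subscribers[topic]:
        handler(payload)

# The cache layer subscribes and evicts the relevant key on each event.
subscribe("UserUpdated", lambda user_id: cache.pop(f"user:{user_id}", None))

def update_user(user_id, data):
    # 1) write to the DB (omitted)  2) publish the invalidation event
    publish("UserUpdated", user_id)
```

Decoupling the write path from the cache this way lets multiple services react to the same event without the writer knowing which keys each of them holds.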
Why server-side invalidation is easier than client-side
Because you can:
Directly delete/update keys
Do it immediately after a write
Apply it consistently across your infra
For the client there is no direct access: you can’t “reach into” every browser or app cache to evict a specific entry the way you can with a centralized server-side cache.
Client-side: you can only influence behavior via policies (TTL/ETag/versioning); you cannot reliably force-delete entries.
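The ETag policy mentioned above works through conditional requests: the server tags each response with a hash of its body, and a client that already holds that version gets a 304 Not Modified instead of a re-download. A minimal sketch (function names and digest length are illustrative):

```python
import hashlib

def make_etag(body):
    # A strong ETag derived from the response body.
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def respond(body, if_none_match=None):
    """Return (status, body), honoring an If-None-Match revalidation."""
    etag = make_etag(body)
    if if_none_match == etag:
        return 304, b""   # client's cached copy is still valid
    return 200, body      # send the full body (plus the ETag header in practice)
```

The client still makes a round-trip to revalidate, but an unchanged resource costs a tiny 304 rather than a full transfer.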