Effect Caching: When to Recalculate vs. Reuse
Static shadow. Doesn't change. Regenerated 60 times per second anyway.
Profile the renderer: 40% of frame time is spent rendering unchanged shadows.
This is wasteful.
The Problem
Effects like drop shadows and blurs are expensive:
void renderShadow(const Shape& shape, const ShadowParams& params) {
    // 1. Render shape to offscreen buffer (5ms)
    // 2. Apply blur filter (10ms)
    // 3. Composite result (2ms)
    // Total: 17ms per shadow
}
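The body above is just a cost breakdown. As a rough sketch of what those steps can look like with Skia's raster backend (Shape::path(), the surface sizing, and the sk_sp<SkImage> return type are assumptions for illustration, not the renderer's actual API):

// Needs SkSurface.h, SkCanvas.h, SkPaint.h, SkImageFilters.h
sk_sp<SkImage> renderShadowSketch(const Shape& shape, const ShadowParams& params) {
    // 1. Size an offscreen buffer around the shape, padded so the blur has room to spread
    SkRect bounds = shape.path().getBounds();
    bounds.outset(3 * params.blur, 3 * params.blur);
    sk_sp<SkSurface> surface = SkSurface::MakeRasterN32Premul(
        SkScalarCeilToInt(bounds.width()), SkScalarCeilToInt(bounds.height()));
    SkCanvas* canvas = surface->getCanvas();
    canvas->translate(-bounds.left(), -bounds.top());

    // 2. Draw the shape once with a blur image filter in the shadow color
    SkPaint paint;
    paint.setColor(params.color);
    paint.setImageFilter(SkImageFilters::Blur(params.blur, params.blur, nullptr));
    canvas->drawPath(shape.path(), paint);

    // 3. Snapshot the result; compositing at params.offset happens when the caller draws this image
    return surface->makeImageSnapshot();
}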
For static shadows (shape doesn't change, shadow params don't change), we're wasting 17ms per frame recalculating the same result.
Render at 60fps: 17ms × 60 = 1020ms of CPU time per second for one static shadow.
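For context: the entire frame budget at 60fps is 1000ms / 60 ≈ 16.7ms, so a single 17ms static shadow already costs more than one frame's budget before anything else is drawn.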
First Attempt: Frame-Based Caching
Cache shadow results for one frame:
std::map<uint64_t, sk_sp<SkImage>> shadowCache;

sk_sp<SkImage> getShadow(const Shape& shape) {
    if (!shadowCache.contains(shape.id)) {
        shadowCache[shape.id] = renderShadow(shape);
    }
    return shadowCache[shape.id];
}

// Clear cache each frame
void onFrameEnd() {
    shadowCache.clear();
}
This prevented rendering the same shadow multiple times within one frame, but still recalculated it every frame.
For static content, we're still wasting 59 out of 60 renders.
Second Attempt: Persistent Cache Without Invalidation
Keep the cache across frames:
std::map<uint64_t, sk_sp<SkImage>> shadowCache;  // Persistent

sk_sp<SkImage> getShadow(const Shape& shape) {
    if (!shadowCache.contains(shape.id)) {
        shadowCache[shape.id] = renderShadow(shape);
    }
    return shadowCache[shape.id];
}
This cached results forever. But when the shape did change (animation, user edit), the shadow didn't update.
We needed cache invalidation.
The Solution: Content-Hash Based Caching
Cache based on content hash, not just ID:
struct ShadowCacheKey {
    uint64_t shapeHash;
    float blurRadius;
    SkColor color;
    SkPoint offset;

    bool operator==(const ShadowCacheKey& other) const {
        return shapeHash == other.shapeHash &&
               blurRadius == other.blurRadius &&
               color == other.color &&
               offset == other.offset;
    }

    // std::map keeps keys ordered, so the key type also needs operator< (needs <tuple>)
    bool operator<(const ShadowCacheKey& other) const {
        return std::tie(shapeHash, blurRadius, color, offset.fX, offset.fY) <
               std::tie(other.shapeHash, other.blurRadius, other.color,
                        other.offset.fX, other.offset.fY);
    }
};
std::map<ShadowCacheKey, sk_sp<SkImage>> shadowCache;

sk_sp<SkImage> getShadow(const Shape& shape, const ShadowParams& params) {
    ShadowCacheKey key = {
        shape.computeHash(),
        params.blur,
        params.color,
        params.offset
    };
    if (!shadowCache.contains(key)) {
        shadowCache[key] = renderShadow(shape, params);
    }
    return shadowCache[key];
}
Now the cache automatically invalidates when any parameter changes. If the shape geometry changes, computeHash() returns a different value, triggering a cache miss and regeneration.
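To make the invalidation behavior concrete, here is a hypothetical usage sketch. It assumes ShadowParams is a plain aggregate of blur, color, and offset, and that a Shape named shape already exists; neither is shown in the original code.

// Illustration only: ShadowParams as an aggregate {blur, color, offset} is an assumption
ShadowParams params = {4.0f, SK_ColorBLACK, SkPoint{2, 2}};

sk_sp<SkImage> a = getShadow(shape, params);  // miss: the shadow is rendered (~17ms)
sk_sp<SkImage> b = getShadow(shape, params);  // hit: identical key, cached image reused

params.blur = 8.0f;                           // any parameter change...
sk_sp<SkImage> c = getShadow(shape, params);  // ...produces a new key, so a fresh render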
The Hash Function
Computing a fast, stable hash:
uint64_t Shape::computeHash() const {
    uint64_t hash = 0;
    // Hash vertex positions
    for (const auto& vertex : vertices) {
        hash = hash * 31 + std::hash<float>()(vertex.x);
        hash = hash * 31 + std::hash<float>()(vertex.y);
    }
    // Hash segment topology
    for (const auto& segment : segments) {
        hash = hash * 31 + segment.fromVertex;
        hash = hash * 31 + segment.toVertex;
    }
    return hash;
}
Fast to compute (~microseconds for typical shapes), stable across identical geometry.
Dirty Tracking Optimization
For even better performance, track which shapes changed:
class Shape {
    bool fDirty = true;

public:
    void setVertexPos(uint32_t id, float x, float y) {
        vertices[id] = {x, y};
        fDirty = true;  // Mark dirty on modification
    }

    uint64_t getHash() {
        if (fDirty) {
            fCachedHash = computeHash();
            fDirty = false;
        }
        return fCachedHash;
    }

private:
    uint64_t fCachedHash = 0;
};
Now hash computation is lazy—only recalculated when the shape actually changes.
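A quick sketch of what that laziness buys, assuming a Shape named shape already exists (not shown in the original):

uint64_t h1 = shape.getHash();        // fDirty is true: computeHash() runs once
uint64_t h2 = shape.getHash();        // clean: returns fCachedHash, no recompute

shape.setVertexPos(0, 10.0f, 20.0f);  // mutation marks the shape dirty
uint64_t h3 = shape.getHash();        // recomputed; differs from h1 if the geometry changed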
Cache Size Management
Unbounded caches grow forever. Add an LRU (Least Recently Used) eviction policy:
struct CacheEntry {
    sk_sp<SkImage> image;
    uint64_t lastAccessTime;
};

std::map<ShadowCacheKey, CacheEntry> shadowCache;
const size_t kMaxCacheSize = 100;

void evictOldEntries() {
    if (shadowCache.size() > kMaxCacheSize) {
        // Remove 20% oldest entries
        // ... eviction logic ...
    }
}
Keeps memory usage bounded while retaining hot entries.
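The eviction body is elided above. Here is a minimal sketch of one way to fill it in, assuming getShadow stamps entry.lastAccessTime with a monotonically increasing counter (e.g. a frame number) on every access; that bookkeeping is not shown in the original.

#include <algorithm>
#include <vector>

void evictOldEntries() {
    if (shadowCache.size() <= kMaxCacheSize) {
        return;
    }
    // Copy (lastAccessTime, key) pairs so entries can be sorted by age
    std::vector<std::pair<uint64_t, ShadowCacheKey>> byAge;
    byAge.reserve(shadowCache.size());
    for (const auto& [key, entry] : shadowCache) {
        byAge.push_back({entry.lastAccessTime, key});
    }
    std::sort(byAge.begin(), byAge.end(),
              [](const auto& a, const auto& b) { return a.first < b.first; });

    // Drop the oldest 20% of entries; hot entries survive
    const size_t removeCount = shadowCache.size() / 5;
    for (size_t i = 0; i < removeCount; ++i) {
        shadowCache.erase(byAge[i].second);
    }
}

Sorting on eviction is O(n log n), but with kMaxCacheSize = 100 that cost is negligible next to a single 17ms shadow render.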
Results
Effect caching for static shadows:
Before: 17ms per shadow × 60fps = 1020ms/sec CPU time
After: 17ms once, then <0.1ms/frame for cached result
100× speedup for static effects.
For animated content, the cache naturally invalidates each frame (geometry changes → hash changes → cache miss), so there's no penalty for dynamic content; the stale entries it leaves behind are exactly what the LRU eviction above cleans up.
The caching system is ~100 lines:
- Hash-based cache keys (30 lines)
- Dirty tracking (20 lines)
- LRU eviction (50 lines)
Cache invalidation is the hard problem. Content hashing solves it by making the cache key semantically meaningful—when content changes, the key changes automatically.
Read next: CanvasKit Build Flags: The Partial Compilation That Wasn't - Module dependencies and build system lies.