CacheKit Docs

High-performance cache policies and supporting data structures.

View the Project on GitHub OxidizeLabs/cachekit

Designing high-performance caches in Rust is a multi-disciplinary problem: data structures, memory layout, concurrency, workload modeling, and systems-level performance all matter. The points below reflect what moves the needle in practice across systems, services, and libraries.

1. Workload First, Policy Second

Cache policy only matters relative to workload.

Identify access patterns:

- Skewed / Zipfian popularity (a small hot set dominates)
- Sequential scans
- Temporal bursts
- Uniform random access (caches help least here)

Measure:

- Hit ratio over time
- Key popularity distribution and working-set size
- Read/write ratio

Choose policies accordingly:

- LRU for strong temporal locality
- LFU or TinyLFU-style admission for skewed popularity
- Segmented or scan-resistant policies for mixed scan traffic

Never design a “general purpose” cache first; design for the workload you expect.
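Before picking a policy, it helps to profile the key stream the cache will actually see. The sketch below is a hypothetical helper (not part of CacheKit's API) that measures how skewed a workload is, assuming `u64` keys:

```rust
use std::collections::HashMap;

/// Minimal workload profiler: feed it the key stream your cache will see
/// and inspect skew before choosing a policy. Illustrative only.
struct WorkloadProfile {
    counts: HashMap<u64, u64>,
    total: u64,
}

impl WorkloadProfile {
    fn new() -> Self {
        Self { counts: HashMap::new(), total: 0 }
    }

    fn record(&mut self, key: u64) {
        *self.counts.entry(key).or_insert(0) += 1;
        self.total += 1;
    }

    /// Fraction of accesses going to the hottest `n` keys. High values mean
    /// a skewed (Zipf-like) workload where frequency-aware policies shine.
    fn top_n_share(&self, n: usize) -> f64 {
        let mut freqs: Vec<u64> = self.counts.values().copied().collect();
        freqs.sort_unstable_by(|a, b| b.cmp(a));
        let top: u64 = freqs.iter().take(n).sum();
        top as f64 / self.total as f64
    }
}

fn main() {
    let mut p = WorkloadProfile::new();
    // Skewed toy stream: key 1 dominates.
    for _ in 0..90 { p.record(1); }
    for k in 2..12 { p.record(k); }
    println!("top-1 share: {:.2}", p.top_n_share(1)); // 0.90
}
```

If the top few keys capture most of the traffic, an admission-based policy will usually beat plain LRU; if the distribution is flat, no policy will save you.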

2. Memory Layout Matters More Than Algorithms

In a cache, memory layout often dominates policy.

Prefer:

- Contiguous storage (Vec, slabs, arenas)
- Index-based links (u32 handles) instead of pointers
- Compact, fixed-size entries with hot metadata packed together

Avoid:

- Per-node heap allocation (a Box per entry)
- Pointer-chasing linked structures scattered across the heap
- Padding waste from careless field ordering

Techniques:

- Store nodes in a single Vec and link them by index
- Keep hot fields (frequency counters, list links) adjacent
- Check entry sizes with std::mem::size_of and keep them small

Cache misses caused by your own data structure are as bad as upstream misses.
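The index-linking idea above can be sketched as follows; names and field choices are illustrative, not CacheKit's actual layout:

```rust
/// Index-linked node pool: entries live contiguously in one Vec, and
/// "pointers" are u32 indices. The link chain stays in a few cache lines
/// instead of being scattered across separate heap allocations.
const NIL: u32 = u32::MAX;

struct Node {
    key: u64,
    value: u64,
    prev: u32,
    next: u32,
}

struct NodePool {
    nodes: Vec<Node>,
}

impl NodePool {
    fn with_capacity(cap: usize) -> Self {
        Self { nodes: Vec::with_capacity(cap) }
    }

    /// Append a node and return its index handle.
    fn push(&mut self, key: u64, value: u64) -> u32 {
        let idx = self.nodes.len() as u32;
        self.nodes.push(Node { key, value, prev: NIL, next: NIL });
        idx
    }
}

fn main() {
    let mut pool = NodePool::with_capacity(1024);
    let a = pool.push(1, 10);
    let b = pool.push(2, 20);
    // Link b after a using indices — no pointer chasing across allocations.
    pool.nodes[a as usize].next = b;
    pool.nodes[b as usize].prev = a;
    println!("node {} -> node {}", a, pool.nodes[a as usize].next);
}
```

A u32 handle is also half the size of a pointer on 64-bit targets, which shrinks the per-entry metadata footprint.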

3. Concurrency Strategy Is Core Design, Not a Wrapper

Locking strategy shapes everything.

Options:

- One global Mutex: simplest, fine at low concurrency
- Sharded locks: hash the key to one of N independent shards
- RwLock for read-heavy workloads (beware writer starvation and cache-line bouncing)
- Lock-free structures: highest ceiling, highest complexity

Rust-specific notes:

- Mutex<HashMap<K, V>> is a legitimate baseline; measure before replacing it
- parking_lot locks are lighter than std's on most platforms
- Lock-free designs usually mean unsafe code or crates like crossbeam; budget review time accordingly
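The sharded-lock option can be sketched with nothing but the standard library; shard count and key type here are arbitrary illustrative choices:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

/// Sharded map: N independent Mutex-protected maps, selected by key hash.
/// For uniformly distributed keys, contention drops roughly by the shard count.
struct ShardedCache {
    shards: Vec<Mutex<HashMap<u64, u64>>>,
}

impl ShardedCache {
    fn new(n_shards: usize) -> Self {
        Self {
            shards: (0..n_shards).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    fn shard_for(&self, key: u64) -> &Mutex<HashMap<u64, u64>> {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        &self.shards[(h.finish() as usize) % self.shards.len()]
    }

    fn insert(&self, key: u64, value: u64) {
        self.shard_for(key).lock().unwrap().insert(key, value);
    }

    fn get(&self, key: u64) -> Option<u64> {
        self.shard_for(key).lock().unwrap().get(&key).copied()
    }
}

fn main() {
    let cache = ShardedCache::new(16);
    cache.insert(42, 7);
    println!("{:?}", cache.get(42)); // Some(7)
}
```

Note that sharding complicates any policy with global state (a single LRU list, a shared frequency sketch); many designs keep policy metadata per shard for exactly this reason.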

4. Avoid Per-Operation Allocation

Allocations kill throughput.

Pre-allocate:

- Node pools and slabs at construction time
- Hash table capacity (HashMap::with_capacity)

Reuse:

- Evicted nodes via a freelist
- Scratch buffers across operations

Use:

- Slab/arena allocators with index handles
- Fixed-capacity structures sized from expected load

Avoid:

- A Box per entry
- Cloning String or Vec keys on every access
- Building temporary collections inside hot loops

If malloc shows up in your flamegraph, your cache is already slow.
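The freelist pattern mentioned above is small enough to show in full; this is a sketch, not CacheKit's slab implementation:

```rust
/// Slab with a freelist: released slots are pushed onto a free stack and
/// reused, so steady-state insert/evict cycles make no allocator calls.
struct Slab {
    slots: Vec<Option<u64>>,
    free: Vec<u32>,
}

impl Slab {
    fn with_capacity(cap: usize) -> Self {
        Self {
            slots: Vec::with_capacity(cap),
            free: Vec::with_capacity(cap),
        }
    }

    fn alloc(&mut self, value: u64) -> u32 {
        if let Some(idx) = self.free.pop() {
            // Reuse a released slot — no allocation.
            self.slots[idx as usize] = Some(value);
            idx
        } else {
            self.slots.push(Some(value));
            (self.slots.len() - 1) as u32
        }
    }

    fn release(&mut self, idx: u32) {
        self.slots[idx as usize] = None;
        self.free.push(idx);
    }
}

fn main() {
    let mut slab = Slab::with_capacity(8);
    let a = slab.alloc(1);
    slab.release(a);
    let b = slab.alloc(2); // reuses slot 0
    assert_eq!(a, b);
    println!("reused slot {}", b);
}
```

Once warm, every eviction funds the next insertion, and the allocator disappears from the steady-state profile.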

5. Eviction Must Be Predictable and Cheap

Eviction is the critical slow path.

O(1) eviction is the goal.

Avoid unbounded tree walks or scans in eviction paths.

Maintain:

- An intrusive doubly-linked list (or equivalent) so the victim is always at hand
- A freelist so evicted slots are reused without allocation

Be careful with:

- Heap-based priority queues: O(log n) per operation adds up
- Lazy expiry that lets dead entries pile up until a stall
- Amortized O(1) schemes with bad worst-case tails

Eviction cost must be comparable to lookup cost, not orders of magnitude higher.
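An intrusive LRU list over a contiguous slab makes the O(1) property concrete. This sketch keeps only the list links (the hash-map bookkeeping a real cache needs is omitted):

```rust
const NIL: u32 = u32::MAX;

/// Intrusive LRU order list over slab indices: evicting the coldest entry
/// is a constant-time tail unlink, never a scan.
struct LruList {
    prev: Vec<u32>,
    next: Vec<u32>,
    head: u32, // most recently used
    tail: u32, // least recently used
}

impl LruList {
    fn new(cap: usize) -> Self {
        Self { prev: vec![NIL; cap], next: vec![NIL; cap], head: NIL, tail: NIL }
    }

    /// Mark slot `i` most-recently-used.
    fn push_front(&mut self, i: u32) {
        self.prev[i as usize] = NIL;
        self.next[i as usize] = self.head;
        if self.head != NIL {
            self.prev[self.head as usize] = i;
        } else {
            self.tail = i;
        }
        self.head = i;
    }

    /// O(1) eviction: unlink and return the least-recently-used slot.
    fn pop_back(&mut self) -> Option<u32> {
        if self.tail == NIL {
            return None;
        }
        let i = self.tail;
        self.tail = self.prev[i as usize];
        if self.tail != NIL {
            self.next[self.tail as usize] = NIL;
        } else {
            self.head = NIL;
        }
        Some(i)
    }
}

fn main() {
    let mut lru = LruList::new(4);
    lru.push_front(0);
    lru.push_front(1);
    lru.push_front(2);
    println!("evict {:?}", lru.pop_back()); // Some(0): the oldest insert
}
```

Both operations touch a constant number of slots regardless of cache size, which is exactly the property the slow path needs.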

6. Metrics Are Not Optional

You cannot tune what you do not measure.

Track at least:

- Hits, misses, and hit ratio over time
- Evictions and eviction reasons
- Occupancy (entries and bytes)

Expose:

- Cheap always-on counters (relaxed atomics)
- A snapshot API so operators can read metrics without touching the hot path

Metrics should guide design decisions, not justify them afterward.
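Always-on counters can be cheap enough to never turn off. A minimal sketch using relaxed atomics (struct and method names are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

/// Lock-free counters: recording a hit or miss is a single relaxed atomic
/// add, cheap enough to leave enabled in production.
#[derive(Default)]
struct CacheMetrics {
    hits: AtomicU64,
    misses: AtomicU64,
    evictions: AtomicU64,
}

impl CacheMetrics {
    fn hit(&self) { self.hits.fetch_add(1, Ordering::Relaxed); }
    fn miss(&self) { self.misses.fetch_add(1, Ordering::Relaxed); }
    fn eviction(&self) { self.evictions.fetch_add(1, Ordering::Relaxed); }

    /// Point-in-time hit ratio; relaxed loads are fine for monitoring.
    fn hit_ratio(&self) -> f64 {
        let h = self.hits.load(Ordering::Relaxed) as f64;
        let m = self.misses.load(Ordering::Relaxed) as f64;
        if h + m == 0.0 { 0.0 } else { h / (h + m) }
    }
}

fn main() {
    let m = CacheMetrics::default();
    for _ in 0..9 { m.hit(); }
    m.miss();
    m.eviction();
    println!("hit ratio {:.2}", m.hit_ratio()); // 0.90
}
```

Relaxed ordering is sufficient here because the counters are monotonic and only read for monitoring; no other memory depends on their ordering.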

7. Separate Policy From Storage

Design in layers:

- Storage: hash index plus a slab of entries
- Policy: eviction metadata (LRU links, frequency counters) over opaque slot handles
- API: the public surface, kept thin

Related docs:

This makes:

- Policies swappable without touching storage
- Each layer testable and benchmarkable in isolation
- Hybrid designs (e.g. admission plus eviction) easy to compose
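One way to draw the seam is a policy trait that only ever sees opaque slot handles. The trait and type names below are illustrative, not CacheKit's actual interfaces:

```rust
use std::collections::VecDeque;

/// The policy layer never touches keys or values — only slot handles.
/// Storage stays free to change its layout without affecting policies.
trait EvictionPolicy {
    fn on_access(&mut self, slot: u32);
    fn on_insert(&mut self, slot: u32);
    fn victim(&mut self) -> Option<u32>;
}

/// FIFO as the simplest possible policy implementation.
struct Fifo {
    queue: VecDeque<u32>,
}

impl EvictionPolicy for Fifo {
    fn on_access(&mut self, _slot: u32) {} // FIFO ignores recency
    fn on_insert(&mut self, slot: u32) { self.queue.push_back(slot); }
    fn victim(&mut self) -> Option<u32> { self.queue.pop_front() }
}

fn main() {
    let mut policy = Fifo { queue: Default::default() };
    policy.on_insert(0);
    policy.on_insert(1);
    policy.on_access(0);
    println!("victim {:?}", policy.victim()); // Some(0): FIFO ignores the access
}
```

With this seam in place, a policy can be unit-tested against a scripted access sequence with no storage layer present at all.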

8. Beware of “Nice” Rust APIs in Hot Paths

Ergonomics often cost performance.

Avoid in critical loops:

- Cloning Arc handles per operation
- Trait-object dispatch where a generic would monomorphize
- Allocating owned keys (String) just to probe a map
- Long iterator-adapter chains you have not verified in the generated code

Prefer:

- Monomorphized generics over dyn Trait
- Borrowed keys (&str, &[u8]) in lookup signatures
- Plain indexed loops where the optimizer needs help

You can wrap fast internals in nice APIs at the edges.
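A small example of the borrowed-key point: because `String` implements `Borrow<str>`, a `HashMap<String, V>` can be probed with `&str`, so lookups never allocate. The wrapper type here is illustrative:

```rust
use std::collections::HashMap;

/// Hot-path-friendly lookup: the key parameter is borrowed (&str), so
/// callers never build an owned String just to probe the map.
struct StrCache {
    map: HashMap<String, u64>,
}

impl StrCache {
    fn get(&self, key: &str) -> Option<u64> {
        // HashMap<String, _> accepts &str here via the Borrow trait.
        self.map.get(key).copied()
    }
}

fn main() {
    let mut map = HashMap::new();
    map.insert("hot".to_string(), 1);
    let cache = StrCache { map };
    println!("{:?}", cache.get("hot")); // Some(1) — no allocation at lookup
}
```

An API taking `key: String` would force every caller to allocate per lookup; the borrowed signature pushes that cost to insertion only.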

9. Scans Are the Enemy of Caches

In scan-heavy workloads:

- One-touch keys flood the cache and evict the resident hot set
- Hit ratio collapses exactly when the system is under the most load

Large sequential reads destroy LRU-style caches.

Solutions:

- Scan-resistant policies: segmented LRU, LRU-K, 2Q-style designs
- Admission control: require a second touch (TinyLFU-style) before caching
- Bypass heuristics for detected sequential access

If you ignore scans, your cache will look great in microbenchmarks and terrible in production.
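The admission idea can be sketched with a "doorkeeper": a key must be seen twice before it is cached, so one-touch scan traffic never displaces residents. A real design would use a periodically reset Bloom filter rather than a HashSet; this is a toy illustration:

```rust
use std::collections::HashSet;

/// Doorkeeper admission sketch (TinyLFU-style idea): the first touch of a
/// key is remembered but not cached; only a second touch admits it.
struct Doorkeeper {
    seen_once: HashSet<u64>,
}

impl Doorkeeper {
    fn new() -> Self {
        Self { seen_once: HashSet::new() }
    }

    /// Returns true if the key should be admitted into the cache.
    fn admit(&mut self, key: u64) -> bool {
        if self.seen_once.contains(&key) {
            true // second touch: admit
        } else {
            self.seen_once.insert(key);
            false // first touch: remember, but do not cache
        }
    }
}

fn main() {
    let mut dk = Doorkeeper::new();
    assert!(!dk.admit(7)); // a scan touches the key once: rejected
    assert!(dk.admit(7));  // a hot key comes back: admitted
    println!("scan traffic filtered");
}
```

A sequential scan touches each key exactly once, so every scan key is rejected, while genuinely hot keys pass on their second access.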

10. Benchmark Like a System, Not a Library

Do not rely on random key benchmarks.

Use:

- Real production traces whenever you can get them
- Zipfian key generators, not uniform random keys
- Mixed read/write workloads with scan phases injected

Measure:

- Hit ratio over time, not just the final average
- Throughput and tail latency under contention
- Memory overhead per entry

A cache that is 5% faster on random keys but 50% worse under scans is a bad cache.
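A Zipfian generator is easy to build without external crates. The sketch below precomputes the cumulative distribution for `n` keys with exponent `s` and samples by binary search, using a small LCG so no RNG crate is needed; all constants are illustrative:

```rust
/// Zipf key generator for benchmarks: key 0 is the hottest, and popularity
/// falls off as 1 / rank^s.
struct Zipf {
    cdf: Vec<f64>,
    state: u64, // LCG state
}

impl Zipf {
    fn new(n: usize, s: f64, seed: u64) -> Self {
        let mut cdf = Vec::with_capacity(n);
        let mut acc = 0.0;
        for k in 1..=n {
            acc += 1.0 / (k as f64).powf(s);
            cdf.push(acc);
        }
        let total = acc;
        for c in cdf.iter_mut() {
            *c /= total; // normalize to a proper CDF ending at 1.0
        }
        Self { cdf, state: seed.max(1) }
    }

    fn next_key(&mut self) -> usize {
        // LCG step, then map the top bits to a uniform value in [0, 1).
        self.state = self
            .state
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        let u = (self.state >> 11) as f64 / (1u64 << 53) as f64;
        // Index of the first CDF entry >= u.
        self.cdf.partition_point(|&c| c < u)
    }
}

fn main() {
    let mut z = Zipf::new(1000, 1.0, 42);
    let mut hot = 0;
    for _ in 0..10_000 {
        if z.next_key() < 10 {
            hot += 1;
        }
    }
    println!("share of traffic to top-10 keys: {:.2}", hot as f64 / 10_000.0);
}
```

Driving the same cache with this generator and with uniform random keys, and comparing hit ratios, makes the "random keys flatter your cache" problem visible immediately.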

11. Rust-Specific Pitfalls

Arc is expensive in hot paths.

The borrow checker can push you toward indirection. Fight it with:

- Index-based arenas instead of reference-linked nodes
- Splitting structs so disjoint fields can be borrowed independently
- Small, contained uses of interior mutability (Cell, atomics)

Beware of:

- Hidden clones introduced to satisfy the borrow checker
- The default SipHash hasher when keys are trusted internal IDs
- Arc reference-count traffic bouncing cache lines between cores

Rust can be as fast as C, but only if you design like a systems programmer, not a library author.
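One concrete, low-effort win from the list above: `HashMap` defaults to SipHash, which is DoS-resistant but not fast. For internal caches keyed by trusted integer IDs, a trivial multiply-based hasher is often a meaningful speedup. This is a sketch with an arbitrary mixing constant, not a vetted hash function; do not use it on untrusted input:

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

/// Trivial multiply hasher for trusted u64 keys.
#[derive(Default)]
struct MulHasher(u64);

impl Hasher for MulHasher {
    fn write(&mut self, bytes: &[u8]) {
        // Fallback for non-integer key types; crude but functional.
        for &b in bytes {
            self.0 = self.0.rotate_left(5).wrapping_mul(0x9E3779B97F4A7C15) ^ b as u64;
        }
    }

    fn write_u64(&mut self, n: u64) {
        // Fast path: u64 keys hash with a single multiply.
        self.0 = n.wrapping_mul(0x9E3779B97F4A7C15);
    }

    fn finish(&self) -> u64 {
        self.0
    }
}

type FastMap<V> = HashMap<u64, V, BuildHasherDefault<MulHasher>>;

fn main() {
    let mut m: FastMap<&str> = FastMap::default();
    m.insert(1, "one");
    println!("{:?}", m.get(&1)); // Some("one")
}
```

Crates like rustc-hash package the same idea in audited form; the point is that the hasher is a type parameter you control, not a fixed cost.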

12. Design for Failure Modes

Ask:

- What happens when many entries expire at once?
- What happens on a cold start or after a flush?
- What happens under memory pressure or when the hit ratio collapses?

Add:

- TTL jitter so expirations do not synchronize
- Stampede protection (single-flight recomputation of a missing key)
- Hard capacity bounds and load shedding

A cache that collapses under stress is worse than no cache.
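TTL jitter is the cheapest of these mitigations to show. The sketch below derives a deterministic per-key offset so entries inserted together do not all expire together; the 20% window and mixing constant are arbitrary illustrative choices:

```rust
/// Spread TTLs over a window around the base value so a batch of entries
/// inserted at the same moment does not expire in one synchronized wave.
fn jittered_ttl(base_ttl_ms: u64, key: u64) -> u64 {
    // Deterministic per-key jitter derived from a cheap key hash.
    let h = key.wrapping_mul(0x9E3779B97F4A7C15);
    let spread = base_ttl_ms / 5; // 20% window
    let offset = h % (spread + 1); // 0..=spread
    base_ttl_ms - spread / 2 + offset // centered on base_ttl_ms
}

fn main() {
    let ttls: Vec<u64> = (0..5).map(|k| jittered_ttl(60_000, k)).collect();
    // Values land in [54_000, 66_000] ms instead of all being exactly 60_000.
    println!("{:?}", ttls);
}
```

Without jitter, a mass insert at startup becomes a mass expiry one TTL later, and every expired key turns into a simultaneous backend request — exactly the thundering herd the section warns about.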

Bottom Line

High-performance caches are not about clever algorithms. They are about:

- Workload-aware policy choice
- Cache-friendly memory layout
- Allocation discipline
- Predictable, O(1) eviction
- Honest measurement

In Rust, your main enemy is not safety—it is abstraction overhead and accidental allocation. Design from the metal upward, then wrap it in something pleasant to use.