High-performance cache policies and supporting data structures.
Feature: policy-mfu
Evicts the entry with the highest access frequency, the opposite of LFU.
MFU maintains frequency counters for all entries and evicts the one that has been accessed most often. This is counterintuitive for most caching scenarios, but it is useful in specific cases where high-frequency items are less likely to be needed next.
┌─────────────────────────────────────────────────────────────┐
│ MfuCore<K, V> │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ HashMapStore<K, V> (value storage) │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ frequencies: HashMap<K, u64> (freq tracking) │ │
│ └──────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────────────────┐ │
│ │ freq_heap: BinaryHeap<(u64, K)> (max-heap) │ │
│ │ │ │
│ │ top → (15, page_1) ← highest frequency │ │
│ │ / \ │ │
│ │ (7, page_3) (3, page_2) │ │
│ └──────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
| Operation | Complexity | Description |
|---|---|---|
| new(capacity) | O(1) | Create cache |
| insert(k, v) | O(log n) | Insert entry, may evict max-frequency item |
| get(&k) | O(log n) | Get value, increment frequency |
| contains(&k) | O(1) | Check if key exists |
| len() | O(1) | Number of entries |
| pop_mfu() | O(log n)* | Remove and return highest-frequency item |
| peek_mfu() | O(n) | View highest-frequency item |
| frequency(&k) | O(1) | Get frequency count for key |
* Amortized; may skip stale heap entries
⚠️ Warning: MFU performs poorly for typical caching workloads!
// LFU uses min-heap (with Reverse wrapper)
freq_heap: BinaryHeap<Reverse<(u64, K)>> // Min-heap
top → (1, A) // Evict lowest frequency
// MFU uses max-heap (no Reverse wrapper)
freq_heap: BinaryHeap<(u64, K)> // Max-heap
top → (99, A) // Evict highest frequency
Like HeapLfuCache, MfuCore uses lazy invalidation: on get, it increments the frequency and pushes a new (freq, key) pair onto the heap, leaving the old pair behind as a stale entry. Stale entries are skipped during pops and compacted once they accumulate:

const HEAP_REBUILD_FACTOR: usize = 3;

// When stale entries exceed 3x live entries, rebuild
if freq_heap.len() > store.len() * HEAP_REBUILD_FACTOR {
    rebuild_heap(); // Reconstruct from current frequencies
}
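A minimal sketch of how a lazy-invalidation pop can work (the function signature and field names are assumptions for illustration, not the crate's actual API): an entry popped from the heap is live only if its frequency still matches the current value in the frequency map; anything else is a leftover from an earlier get and is skipped.

```rust
use std::collections::{BinaryHeap, HashMap};

// Hypothetical sketch of a lazy pop for an MFU max-heap.
fn pop_mfu(
    freq_heap: &mut BinaryHeap<(u64, String)>,
    frequencies: &mut HashMap<String, u64>,
) -> Option<(u64, String)> {
    // Pop until a live entry is found.
    while let Some((freq, key)) = freq_heap.pop() {
        // Live only if the heap entry matches the current frequency.
        if frequencies.get(&key) == Some(&freq) {
            frequencies.remove(&key);
            return Some((freq, key));
        }
        // Otherwise: stale entry left behind by an earlier get(); skip.
    }
    None
}
```

This is why pop_mfu is O(log n) only amortized: a single call may discard several stale entries before reaching a live one.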
| Aspect | LFU (Least) | MFU (Most) |
|---|---|---|
| Eviction | Lowest frequency | Highest frequency |
| Data struct | Min-heap | Max-heap |
| Typical use | General caching (keep hot) | Burst detection (evict hot) |
| Performance | Good for most workloads | Poor for most workloads |
| Aspect | LRU | MFU |
|---|---|---|
| Tracks | Recency (time-based) | Frequency (count-based) |
| Eviction | Least recently used | Most frequently used |
| Complexity | O(1) all operations | O(log n) insert/get |
| Scan resist | No | Partially (evicts scans) |
use cachekit::policy::mfu::MfuCore;

// Create an MFU cache with capacity 3
let mut cache = MfuCore::new(3);

// Insert items (each starts at frequency 1)
cache.insert(1, 100);
cache.insert(2, 200);
cache.insert(3, 300);

// Create a burst on item 1 (temporarily high frequency)
for _ in 0..50 {
    cache.get(&1);
}

// Item 1 now has freq 51; the others have freq 1
println!("Freq 1: {:?}", cache.frequency(&1)); // 51
println!("Freq 2: {:?}", cache.frequency(&2)); // 1
println!("Freq 3: {:?}", cache.frequency(&3)); // 1

// Insert a new item - evicts item 1 (highest frequency)
cache.insert(4, 400);

assert!(!cache.contains(&1)); // Burst item evicted
assert!(cache.contains(&2)); // Low-frequency item retained
assert!(cache.contains(&3)); // Low-frequency item retained
assert!(cache.contains(&4)); // New item
Per-entry overhead:
- HashMap entry: ~32 bytes (key + Arc pointer)
- Frequency map: ~24 bytes (key + u64)
- Heap entries: ~24 bytes × 1-3 (including stale entries)

Total: ~80-104 bytes per cache entry
// Default: rebuild when stale entries exceed 3x live entries
const HEAP_REBUILD_FACTOR: usize = 3;
MFU is not a general-purpose policy. Consider it only when:

- High-frequency items are less likely to be needed next (e.g., one-shot bursts)
- You want burst detection, where temporarily hot items should be evicted first
For most workloads, use LFU, LRU, S3-FIFO, or SLRU instead.