High-performance cache policies and supporting data structures.
This directory contains automatically generated benchmark reports for the cachekit project.
```
docs/benchmarks/
├── latest/           # Latest benchmark run
│   ├── index.md      # Human-readable report
│   └── results.json  # Raw data
└── v*.*.*/           # Release snapshots (future)
    ├── index.md
    └── results.json
```
Open latest/index.md in your editor or browser; the raw data is in latest/results.json. Reports are rendered by the bench-support crate.

To run benchmarks and regenerate the docs in one step:

```
# Run benchmarks and generate docs
./scripts/update_benchmark_docs.sh
```
Or run the two steps manually:

```
# 1. Run benchmarks
cargo bench --bench runner

# 2. Render docs
cargo run --package bench-support --bin render_docs -- \
    target/benchmarks/<run-id>/results.json \
    docs/benchmarks/latest
```
To regenerate the docs without re-running benchmarks:

```
./scripts/update_benchmark_docs.sh --skip-bench
```
The generated index.md contains the human-readable report for the run.
For tagged releases (e.g., v0.2.0), create a snapshot:

```
cargo run --package bench-support --bin render_docs -- \
    target/benchmarks/<run-id>/results.json \
    docs/benchmarks/v0.2.0
```
This preserves historical performance data for comparison.
See .github/workflows/ for automated benchmark publishing (future).
```
❌ No benchmark results found in target/benchmarks/
```

Solution: run `cargo bench --bench runner` first.
Check that the JSON artifact is valid:

```
python3 -m json.tool target/benchmarks/<run-id>/results.json > /dev/null
```
The script always uses the latest results.json by timestamp. To use a specific run:

```
cargo run --package bench-support --bin render_docs -- \
    target/benchmarks/<specific-run-id>/results.json \
    docs/benchmarks/latest
```
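The latest-by-timestamp selection can be sketched in Rust; this is a minimal illustration of the idea, not the script's actual implementation (`latest_results` and the one-`results.json`-per-run-directory layout are assumptions here):

```rust
use std::fs;
use std::path::PathBuf;
use std::time::SystemTime;

/// Return the most recently modified `results.json` under `root`,
/// assuming each run directory holds one `results.json` file.
fn latest_results(root: &str) -> Option<PathBuf> {
    let mut newest: Option<(SystemTime, PathBuf)> = None;
    for entry in fs::read_dir(root).ok()?.flatten() {
        let candidate = entry.path().join("results.json");
        if let Ok(meta) = fs::metadata(&candidate) {
            if let Ok(mtime) = meta.modified() {
                // Keep whichever candidate has the newest mtime so far.
                if newest.as_ref().map_or(true, |(t, _)| mtime > *t) {
                    newest = Some((mtime, candidate));
                }
            }
        }
    }
    newest.map(|(_, path)| path)
}

fn main() {
    match latest_results("target/benchmarks") {
        Some(path) => println!("latest run: {}", path.display()),
        None => println!("no benchmark results found"),
    }
}
```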
The JSON schema is defined in bench-support/src/json_results.rs.
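As a rough sketch of the shape such a schema might take (the struct and field names below are illustrative assumptions, not the actual definitions in json_results.rs; the real types would also derive serde's `Serialize`/`Deserialize` for JSON round-tripping):

```rust
/// Hypothetical top-level shape of a results.json artifact.
/// Field names are assumptions for illustration only.
#[derive(Debug, Clone)]
struct BenchmarkResults {
    run_id: String,
    cases: Vec<BenchmarkCase>,
}

/// One benchmark case within a run (illustrative fields).
#[derive(Debug, Clone)]
struct BenchmarkCase {
    name: String,
    mean_ns: f64,
    throughput_ops_per_sec: Option<f64>,
}

fn main() {
    let sample = BenchmarkResults {
        run_id: "2024-01-01T00-00-00".into(),
        cases: vec![BenchmarkCase {
            name: "lru_get_hit".into(),
            mean_ns: 42.0,
            throughput_ops_per_sec: None,
        }],
    };
    println!("{} case(s) in run {}", sample.cases.len(), sample.run_id);
}
```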
When adding new benchmark cases:

- Add the case to the benchmark runner (benches/runner.rs)
- Update render_docs.rs if the report format changes

For more details, see Benchmark Quick Start.