sirupsen/napkin-math
Techniques and numbers for estimating a system's performance from first principles
Healthy across all four use cases:
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — clean foundation to fork and modify.
- Documented and popular — useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/sirupsen/napkin-math)

Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/sirupsen/napkin-math on X, Slack, or LinkedIn.
Onboarding: sirupsen/napkin-math
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sirupsen/napkin-math shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 7w ago
- 10 active contributors
- MIT licensed
- CI configured
- Tests present
- ⚠ Single-maintainer risk — top contributor 80% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live sirupsen/napkin-math
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/sirupsen/napkin-math.
What it runs against: a local clone of sirupsen/napkin-math — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sirupsen/napkin-math | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 78 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sirupsen/napkin-math. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/sirupsen/napkin-math.git
#   cd napkin-math
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of sirupsen/napkin-math and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sirupsen/napkin-math(\.git)?\b" \
  && ok "origin remote is sirupsen/napkin-math" \
  || miss "origin remote is not sirupsen/napkin-math (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in src/main.rs Cargo.toml benches/napkin_math.rs README.md benches/benchmarks/memory_read.rs; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 78 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~48d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sirupsen/napkin-math"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
napkin-math is a reference toolkit for estimating system performance from first principles. It provides empirical latency and throughput numbers (memory reads, network, disk I/O, crypto hashing, serialization) measured on modern hardware (Intel Xeon 6985P, GCP c4-standard-48-lssd, March 2026), plus Rust and Go benchmarks to validate those estimates locally. It is a polyglot monorepo: Rust benchmarks in benches/ (benchmarks/, napkin_math.rs) using criterion, a Go reference implementation in go/ (main.go, main_test.go), and a newsletter archive in newsletter/ (Jupyter notebooks, Ruby scripts, and Gnuplot graphs for individual problem walkthroughs). The core numbers live in the README table, not in code — the code validates them via benchmarking.
👥Who it's for
Backend engineers and systems architects who need to estimate system capacity and costs without detailed profiling—e.g., answering 'how much will logging at 100k RPS cost in storage?' or 'can we handle 1 GB/s through this network link?'
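The "logging at 100k RPS" question in that description can be answered with three multiplications. A minimal sketch, assuming ~200 bytes per log line (a hypothetical figure for illustration, not a number from the repo's table):

```rust
// Napkin estimate: storage cost of logging at 100k RPS.
// Assumed: ~200 bytes per log line (hypothetical; tune to your log format).
fn main() {
    let rps: u64 = 100_000;
    let bytes_per_line: u64 = 200;
    let seconds_per_day: u64 = 86_400;

    let bytes_per_day = rps * bytes_per_line * seconds_per_day;
    let gib_per_day = bytes_per_day as f64 / (1u64 << 30) as f64;

    // ~1.7e12 bytes ≈ 1.6 TiB/day before compression
    println!("{bytes_per_day} B/day ≈ {gib_per_day:.0} GiB/day");
}
```

At a typical ~5x compression ratio for text logs (another assumption), that drops to roughly 320 GiB/day, which is exactly the kind of one-line follow-up the napkin-math approach encourages.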
🌱Maturity & risk
Actively maintained and production-ready for reference purposes. The repo has comprehensive GitHub Actions CI (rust.yml), multiple language implementations (Rust primary with 52k LOC, Go fallback with 15k LOC), and periodic re-validation on fresh GCP instances (last noted March 2026). However, it's not a versioned library but a curated numbers table—stability is high but scope is narrow.
Low operational risk as a reference resource. A single maintainer (sirupsen) means a bus factor of one for code reviews, but the project is intentionally lightweight, and the core numbers table has no runtime dependencies. Rust dependencies are stable and well-audited (jemallocator, rand, sha2, crc32fast). The only maintenance burden is keeping hardware-measured numbers current as CPU architectures evolve.
Active areas of work
The project is in maintenance mode—actively validating numbers on fresh hardware and publishing problems via newsletter (most recent in archive: 14-syncing, 20-compound-vs-combining-indexes). Benchmarks are being refreshed (benches/ shows memory_read.rs and compressed_memory_read.rs under active test harness). No active development of new performance primitives visible, focus is on accuracy of existing estimates.
🚀Get running
Clone, build Rust benchmarks, and inspect the reference table:
```bash
git clone https://github.com/sirupsen/napkin-math.git
cd napkin-math
cargo build --release
cargo bench --bench napkin_math
```
Or run the Go reference: cd go && go test -v
Daily commands:
cargo bench --bench napkin_math runs all criterion benchmarks; individual benchmarks like cargo bench --bench memory_read show latency/throughput for sequential memory access. For Go: go run ./go/main.go. Newsletter problems include Ruby scripts (time.rb, populate.rb) paired with Jupyter notebooks for iterative estimation and validation.
🗺️Map of the codebase
- src/main.rs — Entry point for the napkin-math CLI tool; all performance estimation calculations and benchmarking logic originates here
- Cargo.toml — Defines all dependencies, including criterion benchmarking, the jemalloc allocator, and system-level I/O libraries critical to performance measurement
- benches/napkin_math.rs — Primary benchmark harness using criterion; demonstrates how to measure and validate napkin math estimates against real hardware
- README.md — Explains the project's goal of estimating system performance from first principles and references the canonical numbers table
- benches/benchmarks/memory_read.rs — Core benchmark measuring memory read latency; a foundational performance number used throughout the project
🧩Components & responsibilities
- CLI Driver (main.rs) (Rust, clap) — Parses command-line arguments, orchestrates benchmark execution, and displays results
- Failure mode: Malformed arguments or missing hardware capabilities cause graceful error messages; does not validate system prerequisites
- Criterion Benchmark Runner (criterion crate, jemalloc, core_affinity) — Executes individual benchmarks with statistical rigor, manages warm-up phases, collects samples, and computes confidence intervals
- Failure mode: High measurement variance causes larger confidence intervals; criterion may recommend more samples or longer run times
- Memory Read Benchmark (Rust unsafe, page_size crate) — Directly measures memory latency by allocating buffers and sequential access patterns
- Failure mode: Compiler optimization may eliminate loop body (prevented by black_box); CPU caches may skew results if working set doesn't overflow L3
- Linux I/O Benchmarks (rio crate, libc, Linux only) — Measures disk and async I/O latencies using kernel-level io_uring APIs
- Failure mode: Fails gracefully on non-Linux systems (conditional compilation); requires kernel support for io_uring
- Educational Newsletter Problems (Jupyter notebooks, Ruby, Gnuplot) — Provides interactive scenarios for practicing napkin math estimation (syncing, database indexing, caching)
- Failure mode: CSV data may be outdated if hardware assumptions change; solutions assume specific database/cache implementations
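The black_box guard mentioned for the memory-read benchmark can be illustrated without criterion. A simplified, standard-library-only stand-in (the 64 MiB buffer size is an assumed value chosen to exceed typical L3 caches; the real harness in benches/benchmarks/memory_read.rs uses criterion):

```rust
use std::hint::black_box;
use std::time::Instant;

fn main() {
    // Working set larger than most L3 caches, so we measure DRAM, not cache.
    let buf = vec![1u8; 64 * 1024 * 1024];

    let start = Instant::now();
    let mut sum: u64 = 0;
    // black_box on the input stops the compiler from constant-folding the
    // all-ones buffer; black_box on the output keeps the loop from being
    // eliminated as dead code.
    for &b in black_box(&buf).iter() {
        sum = sum.wrapping_add(b as u64);
    }
    let elapsed = start.elapsed();
    black_box(sum);

    let gib_s = buf.len() as f64 / (1u64 << 30) as f64 / elapsed.as_secs_f64();
    println!("sequential read: {gib_s:.1} GiB/s (checksum {sum})");
}
```

Unlike criterion, this single-shot timing has no warm-up phase or statistical sampling, so expect noisy numbers; it only illustrates the black_box pattern.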
🔀Data flow
- Developer → CLI (cargo bench) — Invokes the benchmark suite with optional filters
- CLI → Criterion benchmark runner — Initializes the benchmark group and registers individual benchmarks
- Criterion → Hardware (CPU, memory, I/O) — Executes measurement code with CPU affinity and warm-up; collects timing samples
🛠️How to make changes
Add a new performance benchmark
- Create a new benchmark file in benches/benchmarks/ (e.g., benches/benchmarks/disk_read.rs)
- Add a module declaration to benches/benchmarks/mod.rs
- Register the benchmark in benches/napkin_math.rs using criterion's benchmark-group macro
- Follow the pattern in benches/benchmarks/memory_read.rs: wrap measured values in criterion's black_box() to prevent compiler optimization
Add a new napkin math problem to the newsletter
- Create a new directory under newsletter/ with the problem number and title (e.g., newsletter/21-caching-strategy/)
- Add a Jupyter notebook (e.g., newsletter/21-caching-strategy/21.ipynb) with the problem statement, hints, and solution walkthrough
- Include CSV data or Ruby test files (e.g., newsletter/21-caching-strategy/test.rb) demonstrating the calculation
- Generate graphs using gnuplot if visualizing results (see newsletter/14-syncing/ for examples)
Validate a napkin math estimate with a benchmark
- Write the performance measurement code in a new benchmark file, following the pattern in benches/benchmarks/memory_read.rs
- Configure the release profile in Cargo.toml for accurate measurements (ensure overflow-checks = false and debug = true)
- Run the benchmark with cargo bench --bench napkin_math
- Compare measured results to the first-principles estimates documented in README.md
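A hypothetical helper (not from the repo) for the final compare-measured-to-estimated step: napkin numbers are order-of-magnitude tools, so tolerances are deliberately loose.

```rust
/// True when `measured` is within `tol` (expressed as a fraction of the estimate).
fn within_tolerance(estimate: f64, measured: f64, tol: f64) -> bool {
    ((measured - estimate) / estimate).abs() <= tol
}

fn main() {
    // README-style estimate: ~50 μs to read 1 MiB sequentially.
    // Suppose the benchmark measured 55 μs: well within a 2x band.
    assert!(within_tolerance(50.0, 55.0, 1.0));
    // A 10x miss signals a wrong model (or wrong hardware class), not noise.
    assert!(!within_tolerance(50.0, 500.0, 1.0));
    println!("estimates validated");
}
```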
🔧Why these technologies
- Rust + criterion crate — Rust provides zero-cost abstractions and minimal runtime overhead; criterion gives statistically rigorous benchmarking with variance tracking to validate napkin math estimates against hardware reality
- jemalloc allocator — Provides consistent memory allocation behavior across different systems, reducing variance in benchmarks caused by default allocator differences
- core_affinity + libc — Allows pinning benchmarks to specific CPU cores, eliminating scheduler noise and ensuring reproducible single-threaded performance measurements
- rio (async I/O on Linux) — Enables low-level I/O benchmarking on Linux using io_uring for accurate disk/network performance characterization
- Jupyter notebooks + Ruby test harness — Provides interactive, reproducible problem-solving environments for educational content; Ruby scripts allow realistic database query optimization experiments
⚖️Trade-offs already made
- Benchmark in release mode with debug=true and overflow-checks=false
  - Why: Provides compiler optimizations matching production while retaining debug symbols for profiling; disabling overflow checks prevents artificial slowdowns in tight loops
  - Consequence: Measurements are more realistic, but undetected integer overflows become possible; developers must validate the math manually and review code carefully
- Separate Go implementation alongside Rust
  - Why: Demonstrates that napkin math estimates are language-agnostic and validates the approach across ecosystems
  - Consequence: Doubles the maintenance burden; the Go version may lag behind the Rust version if not actively synchronized
- CPU affinity and warm-up phases required for benchmarks
  - Why: Minimizes variance from scheduler preemption and cold-cache effects to get stable, reproducible numbers
  - Consequence: Benchmarks only represent sustained, steady-state performance — not realistic cold-start or contention scenarios
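The overflow-checks trade-off has a code-level mitigation that the Consequence hints at: making overflow behavior explicit at the call site, so it no longer depends on profile flags. A sketch:

```rust
fn main() {
    let a: u32 = u32::MAX;

    // With overflow-checks = false, `a + 1` silently wraps in release builds
    // yet panics in debug builds: profile-dependent behavior.

    // wrapping_add documents that wraparound is intended, in every profile.
    assert_eq!(a.wrapping_add(1), 0);

    // checked_add turns overflow into a recoverable None instead of a wrap.
    assert_eq!(a.checked_add(1), None);
    assert_eq!(1u32.checked_add(1), Some(2));

    println!("overflow behavior is explicit regardless of profile flags");
}
```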
🚫Non-goals (don't propose these)
- Does not provide distributed system performance modeling (single-host focus only)
- Does not account for network latency or storage I/O on non-Linux platforms (rio is Linux-only)
- Does not model concurrent workload interference or multi-tenant scenarios
- Does not provide real-time production monitoring—benchmarks are offline characterization only
- Does not handle authentication, security, or compliance concerns
🪤Traps & gotchas
Benchmarks must run on Linux for rio (async I/O) tests; the code has #[cfg(target_os = "linux")] guards. CPU affinity (core_affinity crate) is used to pin threads—results will vary on oversubscribed systems or with CPU frequency scaling enabled (disable with cpufreq-set if available). The numbers table is intentionally rounded for memorization, not precision—exact latencies will drift with microarchitectural changes (spectre patches, new microcode). Newsletter problems require Ruby 2.x and Gnuplot installed to replicate the iterative graphs.
🏗️Architecture
💡Concepts to learn
- Memory Bandwidth vs. Latency Trade-off — The README shows 20 GiB/s single-thread sequential reads (50 μs per 1 MiB) vs. 3 GiB/s random reads (300 μs per 1 MiB)—understanding this gap is critical to napkin-math estimations and drives cache-aware algorithm design.
- System Call Overhead & Context Switching — Napkin-math measures system calls at 300 ns and context switches at 1-10 μs (depending on L3 cache); these are hard limits on request batching and I/O multiplexing strategies.
- Serialization/Deserialization Throughput — The table claims 1 GiB/s for 'fast serialization' (e.g., protobuf, msgpack); knowing this is a bottleneck vs. memory I/O (20 GiB/s) informs choice between binary and text formats and compression strategies.
- L3 Cache Hit Rates & NUMA Effects — Threaded memory throughput (200 GiB/s) assumes cache-coherent access; napkin-math measurements on 48-vCPU systems with 24 physical cores directly probe NUMA and contention effects that break these numbers.
- Zone Latency vs. Cross-Region Latency — Napkin-math distinguishes same-zone (100 μs), VPC (100 μs), and outside VPC (300 ms) network latencies; this 3000x spread determines whether an operation can be synchronous or must batch/cache.
- Write Amplification & Throughput Measurement Under Load — The compressed_memory_read.rs benchmark and the 14-syncing newsletter problem both tackle the gap between theoretical throughput and achievable throughput under realistic serialization/compression overhead.
- Criterion & Stable Benchmarking on Commodity Hardware — Napkin-math uses the criterion harness with core-affinity pinning and profile overrides (jemalloc, overflow-checks disabled) to achieve reproducibility on GCP instances; essential for validating that reported numbers hold across runs.
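Several of the rounded figures in this list can be re-derived with one-line arithmetic. A sketch using the throughputs and latencies quoted above (rounded table values, not fresh measurements):

```rust
fn main() {
    // Time to move 1 MiB at the quoted throughputs.
    let mib_in_gib = 1.0 / 1024.0;
    let seq_us = mib_in_gib / 20.0 * 1e6; // 20 GiB/s sequential → ≈49 μs (table: 50 μs)
    let rand_us = mib_in_gib / 3.0 * 1e6; // 3 GiB/s random → ≈326 μs (table: 300 μs)

    // Serialization at 1 GiB/s vs memory at 20 GiB/s: a ~20x penalty, so for
    // memory-resident data the (de)serializer is usually the bottleneck.
    let serialization_penalty = 20.0 / 1.0;

    // Network spread: cross-VPC ≈ 300 ms vs in-zone ≈ 100 μs.
    let spread = (300.0 * 1000.0) / 100.0; // 3000x

    println!("1 MiB: sequential ≈ {seq_us:.0} μs, random ≈ {rand_us:.0} μs");
    println!("serialization penalty {serialization_penalty}x, network spread {spread}x");
}
```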
🔗Related repos
- google/leveldb — Foundational LSM-tree implementation; napkin-math numbers for SSD I/O and write amplification directly apply to understanding leveldb performance.
- facebook/rocksdb — Production key-value store where napkin-math estimates (sequential SSD 8 GiB/s, context switch 1 μs) are used to reason about compaction overhead and throughput tuning.
- brendangregg/perf-tools — Companion toolkit for validating napkin-math estimates on real systems; provides flamegraph and latency profiling scripts to measure the actual operations in napkin-math's table.
- twitter-archive/commons — Example codebase where these napkin-math estimates were historically applied to reason about Twitter's distributed systems architecture (queuing, caching, replication).
- sirupsen/hwacha — Sirupsen's companion project for hardware performance measurement automation; aligns with the napkin-math goal of empirical, reproducible numbers on modern hardware.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add Go integration tests for go/main.go with CI coverage
The repo has a Go implementation (go/main.go and go/main_test.go) but the GitHub Actions workflow (.github/workflows/rust.yml) only covers Rust. Go code lacks automated testing in CI, creating risk of regressions. Adding a Go test step to the workflow and expanding go/main_test.go would ensure parity between Rust and Go implementations.
- [ ] Expand go/main_test.go with additional test cases matching Rust benchmark scenarios
- [ ] Update .github/workflows/rust.yml to add a Go test job that runs 'go test ./...'
- [ ] Verify both Rust and Go produce consistent napkin-math estimates for the same scenarios
- [ ] Document any known differences between implementations in README.md
Create benchmark comparison suite between Rust and Go implementations
The repo has parallel Rust (benches/) and Go (go/) implementations but no comparative benchmarking. This makes it impossible to validate that both versions produce equivalent performance estimates. A new benchmark file comparing identical scenarios across languages would add credibility and help maintain consistency.
- [ ] Create benches/benchmarks/go_comparison.rs or a standalone script comparing Go output with Rust
- [ ] Add documentation in README.md explaining how to run comparative benchmarks
- [ ] Document performance deltas between implementations (if any) in BENCHMARKS.md
- [ ] Link results from newsletter problems to their corresponding benchmark scenarios for reproducibility
Extract and document platform-specific utilities from src/main.rs into separate modules
The src/main.rs likely contains mixed concerns including platform-specific logic (note the [target.'cfg(target_os = "linux")'.dependencies] for rio). Creating separate modules for Linux-specific I/O, memory operations, and cross-platform abstractions would improve maintainability and make it easier for contributors to extend platform support.
- [ ] Audit src/main.rs to identify Linux-specific code paths (especially around rio usage)
- [ ] Create src/platform/mod.rs, src/platform/linux.rs, and src/platform/common.rs modules
- [ ] Move platform-conditional logic from main.rs into appropriate module files
- [ ] Document platform support and missing implementations (e.g., macOS, Windows) in README.md or CONTRIBUTING.md
🌿Good first issues
- Add latency benchmarks for bitpacking operations (the bitpacking crate is imported but no benchmark exists in benches/); propose a benchmark file benches/benchmarks/bitpacking.rs that measures compression ratio and throughput for the claim in the numbers table.
- Expand the Go reference implementation in go/main.go to cover all operations in the README table (currently sparse)—add benchmarks for hashing (SHA2), serialization, and random memory access to achieve parity with the Rust suite.
- Create a Jupyter notebook in newsletter/ that walks through estimating the cost of a real-world system (e.g., data warehouse with 10M rows/sec ingestion) using the numbers from README, following the 14-syncing problem structure as a template.
⭐Top contributors
- @sirupsen — 80 commits
- @dependabot-preview[bot] — 5 commits
- @mble — 3 commits
- @pviotti — 3 commits
- @lastnameswayne — 2 commits
📝Recent commits
- aae5832 — readme: add updates for object storage latency (sirupsen)
- 1e00117 — Merge pull request #38 from lastnameswayne/lastnameswayne/fix-url (sirupsen)
- 9e8d9b9 — cloudping.co/ (Address comment) (lastnameswayne)
- b5ea941 — Fix https://www.cloudping.info/ link in README (lastnameswayne)
- 97a9256 — blob: add PUT block size (sirupsen)
- ba9322f — add CAS (sirupsen)
- a51b925 — readme: update object storage (sirupsen)
- de89e92 — readme: update random memory (sirupsen)
- 046dd31 — fresh run (sirupsen)
- f1edabb — ekm (sirupsen)
🔒Security observations
This Rust project has moderate security concerns
- High · Outdated and vulnerable dependencies — Cargo.toml. Multiple dependencies use wildcard versions ("*") or significantly outdated versions, making it impossible to ensure security patches are applied. Notably failure = "*", page_size = "*", redis = "*", clap = "*", jemallocator = "*", libc = "*", regex = "*", crc32fast = "*", mysql = "*", criterion = "*", core_affinity = "*", and bitpacking = "0.5" all use permissive version constraints that could pull in vulnerable versions. Fix: pin all dependencies to specific versions using semantic-versioning constraints (e.g., dependency = "0.4.0" instead of "*"); run cargo audit regularly to identify known vulnerabilities; replace the failure crate with modern error handling (anyhow or thiserror); use cargo update cautiously with locked dependencies.
- High · Unmaintained error-handling crate — Cargo.toml, failure = "*". The failure crate is no longer maintained and has been deprecated by the Rust ecosystem. It may contain unpatched security vulnerabilities and is not recommended for new code. Fix: replace failure with actively maintained alternatives such as anyhow or thiserror, which provide modern error handling with better security practices.
- Medium · Database connection security (MySQL) — Cargo.toml, mysql = "*"; src/main.rs (not provided). The mysql crate dependency is present, but there is no visible connection-string validation, TLS/SSL enforcement, or secure credential management in the codebase structure; connection parameters may be passed unsafely. Fix: ensure MySQL connections use TLS/SSL encryption; never hardcode database credentials; use environment variables or secure secret management; validate and sanitize all connection parameters; use parameterized queries to prevent SQL injection.
- Medium · Redis connection security — Cargo.toml, redis = "*"; src/main.rs (not provided). The Redis dependency is included without visible security configuration; connections may lack authentication, encryption, or access controls, making the system vulnerable to unauthorized access. Fix: enforce Redis authentication with strong credentials; use TLS/SSL for encrypted connections; restrict Redis network access to authorized hosts only; never expose Redis to untrusted networks; keep credentials in environment variables.
- Low · Release build keeps debug symbols — Cargo.toml, [profile.release]. The release profile has debug = true, which keeps debug symbols in production binaries. While useful for debugging, this increases binary size and could leak information about the codebase structure. Fix: set debug = false for production releases; keep debug = true only for development or profiling builds that need it.
- Low · Overflow checks disabled in release — Cargo.toml, [profile.release] overflow-checks = false. Disabling integer-overflow protection provides performance benefits but can mask overflow bugs with security implications. Fix: enable overflow checks in release builds, or, if performance is critical, test thoroughly and use explicit wrapping_ or checked_ methods where overflow is expected and handled.
- Low · Outdated Linux-only dependency — Cargo.toml, [target.'cfg(target_os = "linux")'.dependencies]. rio = "0.9.3" is specified as a Linux-only dependency; version 0.9.3 is outdated (released ~2020). Outdated dependencies may carry unpatched vulnerabilities, though the specific risk is lower for this package. Fix: update rio to the latest stable version and verify no known CVEs with cargo audit.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.