google/benchmark
A microbenchmark support library
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 2d ago
- ✓ 28+ active contributors
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Concentrated ownership — top contributor handles 60% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
`[](https://repopilot.app/r/google/benchmark)` — paste at the top of your README.md; it renders inline like a shields.io badge.
Social card preview (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/google/benchmark on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: google/benchmark
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/google/benchmark shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 2d ago
- 28+ active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 60% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live google/benchmark
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/google/benchmark.
What it runs against: a local clone of google/benchmark — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in google/benchmark | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 32 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of google/benchmark. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/google/benchmark.git
#   cd benchmark
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of google/benchmark and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "google/benchmark(\.git)?\b" \
  && ok "origin remote is google/benchmark" \
  || miss "origin remote is not google/benchmark (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Apache LICENSE files open with the
# text "Apache License" / "Version 2.0", not the SPDX identifier "Apache-2.0",
# so grep for that wording.
(grep -qi "Apache License" LICENSE 2>/dev/null \
  && grep -qi "Version 2\.0" LICENSE 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in include/benchmark/benchmark.h src/benchmark.cc \
         src/benchmark_runner.cc include/benchmark/state.h \
         src/benchmark_register.cc; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 32 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~2d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/google/benchmark"
  exit 1
fi
```
Each check prints `ok:` or `FAIL:`. The script exits non-zero if anything failed, so it composes cleanly into agent loops (`./verify.sh || regenerate-and-retry`).
⚡TL;DR
google/benchmark is a C++ microbenchmarking library that lets developers measure the performance of code snippets with statistical rigor. It provides a familiar unit-test-like API (via the BENCHMARK macro) to register and run performance tests, automatically handling iterations, warmup, statistical analysis, and cross-platform timing collection. Core capability: precise wall-clock and optional CPU-counter measurement of small code regions, with built-in statistical filtering and result reporting.

Layout: dual-build monorepo. The primary C++ library lives in src/ (inferred from the language breakdown), with CMake-first configuration (CMakeLists.txt plus a cmake/ directory of feature checks and module finders) and Bazel support via BUILD.bazel and bazel/ config. Python bindings are isolated in bindings/python/ with a dedicated benchmark.cc wrapper. Documentation sits in docs/, examples presumably in examples/, and CI workflows in .github/workflows/ cover multiple configurations.
👥Who it's for
C++ library maintainers, systems programmers, and performance engineers who need to validate algorithmic improvements or detect performance regressions in critical code paths. Also used by framework/compiler developers (e.g. Abseil, Chromium teams) to track performance across commits.
🌱Maturity & risk
Highly mature and production-ready. The project has comprehensive CI/CD coverage (Bazel, CMake, sanitizers, clang-tidy workflows), active GitHub Actions pipelines, well-organized documentation (docs/user_guide.md, docs/platform_specific_build_instructions.md), and Python bindings (bindings/python/). Official Google project with established contributor guidelines (CONTRIBUTING.md) and issue templates. Actively maintained with multiple build system support (CMake + Bazel).
Low risk for adoption. No vendored dependencies visible in the file list (uses CMake module system for optional tools like PFM, LLVM). Well-documented with clear version requirements (C++17 to build, C++11 usage compatible). Main consideration: performance measurement is environment-sensitive (CPU frequency scaling, thermal throttling, OS jitter) — results require careful interpretation and platform isolation, documented in docs/.
Active areas of work
Active development with multi-platform support expansion: recent workflows for perfcounters (build-and-test-perfcounters.yml), libcxx testing (libcxx-setup.sh), minimum CMake version compatibility (build-and-test-min-cmake.yml), and Python wheel distribution (wheels.yml). Security focus evident (OSSF scorecard, pre-commit config). Bazel module system adoption visible (MODULE.bazel, WORKSPACE.bzlmod).
🚀Get running
```bash
git clone https://github.com/google/benchmark.git
cd benchmark
cmake -E make_directory build
cd build
cmake ..
make
./bin/benchmark_test  # Run tests
```

Alternatively, with Bazel: `bazel build //:benchmark` and `bazel test //test:...`
Daily commands:
- CMake: `cd build && cmake .. && make && ./bin/benchmark_example` (example assumes an examples/ directory)
- Bazel: `bazel run //example:benchmark_example`
- Python bindings: `cd bindings/python && python -m pytest`, or `python example.py` per bindings/python/example.py
- All workflows use `--verbose` flags for CI transparency
🗺️Map of the codebase
- include/benchmark/benchmark.h — Primary public API header defining the macros (BENCHMARK, BENCHMARK_F, etc.) and core types that all benchmark code uses.
- src/benchmark.cc — Core implementation of benchmark state management, runner orchestration, and result collection — the engine driving all measurements.
- src/benchmark_runner.cc — Manages the benchmark execution loop, timing collection, and statistics computation — critical for understanding measurement accuracy.
- include/benchmark/state.h — Defines the State class passed to every benchmark function; essential for controlling iteration counts, parameters, and measurements.
- src/benchmark_register.cc — Handles benchmark registration and fixture lifecycle, enabling the declarative benchmark definition system.
- include/benchmark/reporter.h — Output-format abstraction for results (JSON, CSV, console); necessary for understanding how measurements are reported.
- CMakeLists.txt — Primary build configuration; required reading for understanding compilation, platform detection, and feature flags.
🛠️How to make changes
Add a new benchmark function
- Include the benchmark header in your source file (include/benchmark/benchmark.h).
- Define a void function taking a benchmark::State& parameter and register it with the BENCHMARK macro (include/benchmark/benchmark.h).
- Wrap timed code in a for (auto _ : state) loop; optionally use state.SkipWithError(), state.SetIterationTime(), or state.counters[] (include/benchmark/state.h).
- Build the executable with CMake or Bazel; benchmark registration happens automatically via static initialization (CMakeLists.txt).
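The loop shape in the steps above can be sketched stdlib-only. StandInState below is a hypothetical stand-in for benchmark::State, so the sketch compiles without the library; real code would include <benchmark/benchmark.h> and take benchmark::State&:

```cpp
#include <cstdint>

// Hypothetical stand-in for benchmark::State, stdlib-only. The real State
// drives iteration counts chosen by the runner; here we fix them up front.
class StandInState {
 public:
  explicit StandInState(int64_t iterations) : remaining_(iterations) {}

  // Mimics the range-for protocol: `for (auto _ : state)` runs the timed
  // region exactly `iterations` times.
  struct Iterator {
    int64_t left;
    bool operator!=(const Iterator& o) const { return left != o.left; }
    void operator++() { --left; }
    int operator*() const { return 0; }
  };
  Iterator begin() { return Iterator{remaining_}; }
  Iterator end() { return Iterator{0}; }

 private:
  int64_t remaining_;
};

// A benchmark body written against the stand-in; with the real library the
// signature would be `void BM_CountIterations(benchmark::State& state)`.
int64_t BM_CountIterations(StandInState& state) {
  int64_t executed = 0;
  for (auto _ : state) {  // the timed region
    (void)_;
    ++executed;
  }
  return executed;
}
```

The point of the range-for shape is that the library, not the benchmark author, decides how many times the timed region runs.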
Add a custom reporter format
- Subclass benchmark::BenchmarkReporter and override methods like ReportRuns(), ReportHeader(), and Finalize() (include/benchmark/reporter.h).
- Implement serialization logic in Finalize() to format results (JSON, CSV, etc.) (include/benchmark/reporter.h).
- Register the reporter in your benchmark_main() or programmatic runner by passing it to benchmark::RunSpecifiedBenchmarks() (src/benchmark_main.cc).
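The subclass-and-override shape can be sketched with stdlib-only stand-ins; MiniRun, MiniReporter, and CsvMiniReporter are hypothetical names mirroring the pattern of BenchmarkReporter, not the library's actual types:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical result record; the real library passes richer Run objects.
struct MiniRun {
  std::string name;
  double cpu_time_ns;
};

// Shape of the pluggable-reporter pattern: subclass, override, serialize.
class MiniReporter {
 public:
  virtual ~MiniReporter() = default;
  virtual void ReportRuns(const std::vector<MiniRun>& runs) = 0;
  virtual std::string Finalize() = 0;
};

class CsvMiniReporter : public MiniReporter {
 public:
  void ReportRuns(const std::vector<MiniRun>& runs) override {
    for (const auto& r : runs) out_ << r.name << "," << r.cpu_time_ns << "\n";
  }
  // Serialization happens at the end, once all runs have been collected.
  std::string Finalize() override { return "name,cpu_time_ns\n" + out_.str(); }

 private:
  std::ostringstream out_;
};
```

Because measurement code only talks to the abstract interface, swapping CSV for JSON (or a CI dashboard uploader) touches no benchmark code.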
Add a benchmark fixture with setup/teardown
- Create a class inheriting from benchmark::Fixture (include/benchmark/registration.h).
- Implement the SetUp() and TearDown() virtual methods for per-iteration or per-benchmark lifecycle (include/benchmark/registration.h).
- Use the BENCHMARK_F(FixtureName, test_name) macro instead of BENCHMARK to bind the fixture (include/benchmark/benchmark.h).
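A stdlib-only sketch of the fixture lifecycle described above; MiniFixture and RunWithFixture are hypothetical, standing in for benchmark::Fixture and the driver code BENCHMARK_F generates:

```cpp
#include <string>
#include <vector>

// Hypothetical fixture base: SetUp runs before the measured region,
// TearDown after, mirroring the benchmark::Fixture contract.
class MiniFixture {
 public:
  virtual ~MiniFixture() = default;
  virtual void SetUp() {}
  virtual void TearDown() {}
};

class VectorFixture : public MiniFixture {
 public:
  void SetUp() override { data.assign(1000, 42); }  // expensive prep
  void TearDown() override { data.clear(); }        // cleanup
  std::vector<int> data;
};

// Drives the lifecycle the way fixture-bound benchmark code would:
// only `body` corresponds to the timed region.
template <class F, class Body>
void RunWithFixture(F& fixture, Body body) {
  fixture.SetUp();
  body(fixture);
  fixture.TearDown();
}
```

Note the trade-off called out below: setup cost sits outside the body, so expensive preparation does not pollute the measurement.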
Add a custom counter or metric
- Inside your benchmark function, reference state.counters["metric_name"] (include/benchmark/state.h).
- Assign values or use Counter operations (increment, set) to track custom metrics (include/benchmark/counter.h).
- Configure counter flags (e.g., kIsRate, kInvert) to control how the counter appears in reports (include/benchmark/counter.h).
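The rate-flag idea can be sketched stdlib-only; MiniCounter is hypothetical, illustrating how a kIsRate-style flag changes what gets reported (the real flags live in include/benchmark/counter.h):

```cpp
#include <map>
#include <string>

// Hypothetical counter: a value flagged as a rate is divided by elapsed
// seconds at report time, so "bytes processed" becomes "bytes/second".
struct MiniCounter {
  double value = 0;
  bool is_rate = false;  // stands in for a kIsRate-style flag
  double Reported(double elapsed_seconds) const {
    return is_rate ? value / elapsed_seconds : value;
  }
};

// Mirrors the string-keyed counter map exposed on the benchmark state.
using MiniCounterMap = std::map<std::string, MiniCounter>;
```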
🔧Why these technologies
- CMake + Bazel dual build system — Enables compatibility with diverse C++ build environments (embedded, large monorepos, traditional projects); Bazel for remote execution and caching.
- C++11+ with feature detection — Balances modern code idioms with backward compatibility; CMAKE feature checks adapt to compiler capabilities (std::chrono, std::regex variants).
- Python bindings (pybind11) — Allows data scientists and Python tooling to leverage the same measurement engine without reimplementing timing logic.
- Platform-specific timing (chrono, cycle counters, perf counters) — Maximizes measurement accuracy on different OSes (Linux perf, Windows, macOS) and CPU architectures.
- Static registration via macros — Zero runtime overhead for benchmark discovery; compilation generates code that auto-registers all BENCHMARK() declarations at link time.
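The static-registration point can be made concrete with a stdlib-only sketch; Registry, Registrar, and MINI_BENCHMARK are hypothetical names illustrating the pattern, not the library's implementation:

```cpp
#include <functional>
#include <string>
#include <vector>

// A registered benchmark: a name plus a callable body.
struct Registered {
  std::string name;
  std::function<void()> fn;
};

// Function-local static avoids the static-initialization-order problem.
inline std::vector<Registered>& Registry() {
  static std::vector<Registered> r;
  return r;
}

// A file-scope Registrar object's constructor runs before main(),
// appending the benchmark to the registry with zero runtime lookup.
struct Registrar {
  Registrar(std::string name, std::function<void()> fn) {
    Registry().push_back({std::move(name), std::move(fn)});
  }
};

// The macro hides the registrar object, so a single declaration line
// both defines and registers a benchmark.
#define MINI_BENCHMARK(fn) static Registrar registrar_##fn(#fn, fn)

inline void BM_A() {}
inline void BM_B() {}
MINI_BENCHMARK(BM_A);
MINI_BENCHMARK(BM_B);
```

By the time main() starts, the registry already lists every declared benchmark — the same mechanism that makes BENCHMARK() declarations self-registering.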
⚖️Trade-offs already made
- Synchronous, single-threaded main benchmark loop
  - Why: Simplifies reproducibility and variance reduction; threading complexity could obscure true performance.
  - Consequence: Multi-threaded benchmarks must be explicitly coded; the library does not parallelize across benchmarks by default.
- Automatic iteration-count estimation based on wall-clock time
  - Why: Ensures consistent runtime across diverse machines; balances precision against total test time.
  - Consequence: Slower code gets fewer iterations; may miss rare variance patterns in very fast code.
- Fixture-based setup/teardown rather than parameterized factories
  - Why: Matches the C++ testing idiom (Google Test); familiar to C++ engineers.
  - Consequence: Setup cost is per-iteration; heavy setup should be moved outside the timing loop manually.
- Reporter abstraction (pluggable output format)
  - Why: Decouples measurement from reporting; enables tool integration (CI/CD, dashboards).
  - Consequence: JSON/CSV export adds small serialization overhead; custom reporters must implement the full interface.
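The iteration-estimation trade-off can be sketched stdlib-only; EstimateIterations is a hypothetical, coarse version of the idea (the real logic in src/benchmark_runner.cc scales proportionally to measured time rather than by a fixed factor):

```cpp
#include <chrono>
#include <cstdint>
#include <functional>

// Run the body; if total elapsed time is below the target, grow the
// iteration count and retry. Fast bodies get many iterations, slow
// bodies few — which is exactly the consequence noted above.
inline int64_t EstimateIterations(const std::function<void()>& body,
                                  std::chrono::nanoseconds min_time) {
  using clock = std::chrono::steady_clock;
  int64_t iters = 1;
  for (;;) {
    auto start = clock::now();
    for (int64_t i = 0; i < iters; ++i) body();
    if (clock::now() - start >= min_time) return iters;
    iters *= 10;  // coarse growth; the real runner is proportional
  }
}
```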
🚫Non-goals (don't propose these)
- Not a profiler (does not capture call graphs or memory allocation profiles).
- Not a stress tester (designed for short-running microbenchmarks, not sustained load simulation).
- Not a regression detector (does not ship built-in baseline comparison or alerting; external tools integrate results).
- Not a real-time system (timing jitter is acceptable; not suitable for hard-realtime constraints).
- Not Python-first (Python bindings are secondary; C++ is the primary and most feature-complete interface).
🪤Traps & gotchas
- Environment sensitivity: Benchmarks are non-deterministic across runs due to OS scheduling, CPU turbo, and thermal effects — results require multiple runs and statistical filtering (implemented in the library, but the user must understand variance).
- CMake feature detection: Compiler-specific flags are tested via cmake/CXXFeatureCheck.cmake; older or non-standard compilers may silently disable features.
- Python bindings version mismatch: bindings/python/ requires a compatible pybind11 version; a mismatch causes silent test failures in wheels.yml.
- Bazel module resolution: MODULE.bazel and WORKSPACE.bzlmod may conflict if both are present; Bazel 7+ prefers MODULE.bazel.
- PFM library availability: Performance-counter measurement (via FindPFM.cmake) is Linux-specific and optional; Windows/macOS builds silently degrade to wall-clock only.
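The "multiple runs and statistical filtering" point can be made concrete with a stdlib-only sketch; MedianOf and StddevOf are hypothetical helpers, not the library's API:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Median: robust against the occasional run inflated by OS jitter —
// one 100ns outlier among 1–2ns samples barely moves it.
inline double MedianOf(std::vector<double> samples) {
  std::sort(samples.begin(), samples.end());
  size_t n = samples.size();
  return n % 2 ? samples[n / 2]
               : (samples[n / 2 - 1] + samples[n / 2]) / 2.0;
}

// Standard deviation: a high value flags a noisy environment where a
// single-run comparison would be meaningless.
inline double StddevOf(const std::vector<double>& samples) {
  double mean = 0;
  for (double s : samples) mean += s;
  mean /= samples.size();
  double var = 0;
  for (double s : samples) var += (s - mean) * (s - mean);
  return std::sqrt(var / samples.size());
}
```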
🏗️Architecture
💡Concepts to learn
- Statistical filtering and outlier rejection — Microbenchmark results contain noise from OS context switches and cache effects; google/benchmark automatically removes outliers using percentile truncation and reports median/mean/stddev — understanding this prevents misinterpreting single noisy runs as regressions
- Hardware performance counters (PMU events) — Wall-clock timing alone misses CPU-level phenomena (cache misses, branch mispredictions); google/benchmark optionally integrates Linux PMU (Performance Monitoring Unit) via PFM library (cmake/Modules/FindPFM.cmake) to measure these events
- Escape analysis and compiler optimization barriers — Naive benchmarks are optimized away by compilers; google/benchmark uses DoNotOptimize() and ClobberMemory() barriers (defined in benchmark/benchmark.h, inferred) to force code execution while preventing constant folding
- Iteration auto-scaling — Code runs too fast to measure accurately (<1μs); google/benchmark automatically increases iteration count until timing reaches stable millisecond range — users see --benchmark_min_time flag controlling this behavior
- Cross-platform timer abstraction — Precise timing differs per OS (POSIX clock_gettime, Windows QueryPerformanceCounter, macOS mach_absolute_time); google/benchmark abstracts these via conditional compilation, ensuring consistent microsecond-level accuracy across Linux/Windows/macOS
- pybind11 C++/Python bindings — Python users (data scientists, scripting-focused developers) need to benchmark C++ code; bindings/python/benchmark.cc exposes the C++ registration and execution harness to Python via pybind11, enabling python -c 'import google_benchmark'
- Bazel and CMake dual-build support — Different organizations standardize on different build systems; google/benchmark supports both (BUILD.bazel for Bazel monorepos, CMakeLists.txt for traditional CMake projects) — understanding the separation prevents accidental caching and configuration issues between systems
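The optimization-barrier concept above can be sketched stdlib-only; KeepAlive is a hypothetical analogue of benchmark::DoNotOptimize (GCC/Clang inline-asm path, with a portable volatile fallback):

```cpp
#include <cstdint>

// The empty asm claims to read `value` and clobber memory, so the
// compiler cannot delete the computation that produced it.
template <class T>
inline void KeepAlive(T const& value) {
#if defined(__GNUC__) || defined(__clang__)
  asm volatile("" : : "r,m"(value) : "memory");
#else
  // Fallback: a volatile write defeats dead-code elimination portably.
  volatile T sink = value;
  (void)sink;
#endif
}

// Without the barrier, a compiler may fold this whole loop to a constant
// at compile time — and the "benchmark" would measure nothing.
inline int64_t SumTo(int64_t n) {
  int64_t total = 0;
  for (int64_t i = 1; i <= n; ++i) total += i;
  KeepAlive(total);
  return total;
}
```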
🔗Related repos
- google/googletest — Complementary Google testing framework; benchmark uses GoogleTest internally (cmake/GoogleTest.cmake) and shares similar registration-macro patterns for test/benchmark organization.
- abseil/abseil-cpp — Major consumer of google/benchmark; Abseil utilities (containers, synchronization primitives) are benchmarked heavily in production using this library.
- martinus/nanobench — Direct alternative microbenchmarking library for C++; lighter-weight and header-only, but less portable than google/benchmark across platforms.
- google/perfetto — Complementary performance profiler; while benchmark measures single-threaded code snippets, Perfetto traces multi-process/thread system behavior over time.
- google/gvisor — Consumer of benchmark for measuring container-runtime performance; demonstrates a production use case for microbenchmarking infrastructure across the Google ecosystem.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive CI workflow for Python bindings on multiple Python versions
The repo has bindings/python/google_benchmark/ with Python bindings, and while there's a test_bindings.yml workflow, there's no dedicated workflow testing against multiple Python versions (3.8-3.12) and platforms. Given numpy==2.4.4 and scipy==1.17.1 dependencies are listed, ensuring compatibility across Python versions is critical for the bindings package.
- [ ] Review existing .github/workflows/test_bindings.yml to understand current coverage
- [ ] Create a new workflow .github/workflows/test-python-bindings-matrix.yml that tests against Python 3.8, 3.9, 3.10, 3.11, 3.12
- [ ] Add tests for numpy and scipy compatibility in the matrix (since they're listed as dependencies)
- [ ] Test on ubuntu-latest, macos-latest, and windows-latest to catch platform-specific issues
- [ ] Reference the matrix workflow in main CI documentation
Add unit tests for cmake/CXXFeatureCheck.cmake feature detection logic
The cmake/ directory contains critical compiler feature detection scripts (CXXFeatureCheck.cmake, AddCXXCompilerFlag.cmake, steady_clock.cpp, std_regex.cpp, pthread_affinity.cpp, thread_safety_attributes.cpp) that determine build behavior, but there's no visible test suite validating these detection scripts work correctly across compilers. This is high-risk code that silently fails.
- [ ] Create tests/cmake/ directory for CMake feature detection tests
- [ ] Add test_feature_detection.cmake that validates CXXFeatureCheck.cmake detects supported/unsupported features
- [ ] Add integration tests in .github/workflows/ that run CMake configuration with various compiler flags (e.g., clang, gcc, MSVC) and verify correct features are detected
- [ ] Test regex detection (std_regex.cpp, posix_regex.cpp, gnu_posix_regex.cpp) across platforms
- [ ] Add test for pthread_affinity.cpp detection on Unix-like systems
Add performance regression detection workflow using historical benchmark results
As a microbenchmark library, google/benchmark should eat its own dog food by tracking performance regressions in its own codebase. The repo lacks a workflow that stores benchmark results and alerts on performance degradation between commits, which is critical for a benchmarking tool to maintain credibility.
- [ ] Create .github/workflows/benchmark-regression-check.yml that runs core benchmarks from the codebase
- [ ] Implement storage of benchmark results (using GitHub artifacts or external storage) to track historical performance
- [ ] Add comparison logic that alerts when performance degrades by >5% vs. previous runs on the same platform
- [ ] Run on ubuntu-latest and macos-latest to catch platform-specific regressions
- [ ] Include benchmarks for src/ core functionality (reference BUILD.bazel and CMakeLists.txt for benchmark targets)
- [ ] Document results in a PR comment or workflow summary for contributor visibility
🌿Good first issues
- Add C++20 comparison benchmark example: Create docs/examples/cpp20_features.cc demonstrating spaceship operator <=> performance vs traditional comparison chains, with documented results. Touches: cmake/CXXFeatureCheck.cmake (to verify C++20 support) and examples/ directory.
- Expand platform_specific_build_instructions.md for Apple Silicon: docs/platform_specific_build_instructions.md is incomplete for M1/M2 native builds. Add section covering Rosetta2 vs native arm64 builds, PFM limitations on macOS, and CMake invocation flags. Reference: cmake/Modules/FindPFM.cmake and existing .github/workflows/ macOS entries.
- Add Python binding docstring tests: bindings/python/google_benchmark/__init__.py and bindings/python/benchmark.cc export classes with minimal docstrings. Add Python doctest examples demonstrating fixture setup, custom metric registration, and result filtering. Low risk: isolated to bindings/python/ and the Python test CI (workflows/test_bindings.yml).
⭐Top contributors
- @dependabot[bot] — 60 commits
- @dmah42 — 9 commits
- @shreejaykurhade — 2 commits
- @gruenich — 2 commits
- @5kng — 2 commits
📝Recent commits
- dea73ef — docs: Add comprehensive command-line options documentation (#2187) (jlaportebot)
- ff773f8 — BENCHMARK_ENABLE_WERROR=Off should also disable -pedantic-errors (#2183) (ednolan)
- f43ed80 — Docs: JSON output format (#2181) (shreejaykurhade)
- ec0ce39 — Document naming benchmark arguments (#2180) (shreejaykurhade)
- b808118 — Replace deprecated _ReadWriteBarrier with std::atomic_signal_fence in MSVC path (#2177) (qorexdevs)
- 33e9abf — Fix thread safety attribute detection failing to link in CMake check (#2178) (anishesg)
- 96c7ddc — In PerfCountersTest.MultiThreaded, serialize worker threads (#2175) (musicinmybrain)
- 221171c — Bump actions/upload-artifact from 7.0.0 to 7.0.1 (#2168) (dependabot[bot])
- af64526 — Bump astral-sh/setup-uv from 8.0.0 to 8.1.0 (#2171) (dependabot[bot])
- 1eceef8 — Bump actions/cache from 5.0.4 to 5.0.5 (#2169) (dependabot[bot])
🔒Security observations
The google/benchmark repository demonstrates good security practices with active CI/CD pipelines (including sanitizers and OSSF scorecards), but has minor concerns around dependency management and Python bindings exposure. The codebase lacks a SECURITY.md file for responsible disclosure. Overall security posture is above average for an open-source project, with most vulnerabilities being informational or low-severity. No critical secrets, SQL injection risks, or severe misconfigurations were detected in the visible file structure.
- Medium · Outdated NumPy dependency — Dependencies/package file (numpy==2.4.4). The dependency file pins numpy==2.4.4. While this is a recent version, NumPy should be monitored for security updates; it has had historical vulnerabilities (CVE-2021-41496, CVE-2021-34141). Fix: Establish a dependency-update policy, regularly check security advisories for numpy, use tools like Dependabot (already present in .github/dependabot.yml) to track updates, and run dependency security scanning in CI/CD.
- Low · SciPy dependency with potential vulnerabilities — Dependencies/package file (scipy==1.17.1). scipy==1.17.1 is pinned in the dependencies. While recent, SciPy should be monitored for security vulnerabilities; historical issues have been reported (e.g., CVE-2023-25399). Fix: Keep SciPy updated to the latest patch version, monitor advisories from the SciPy project, and consider automated dependency scanning.
- Low · Python bindings exposure — bindings/python/google_benchmark/. The codebase includes Python bindings (bindings/python/google_benchmark/benchmark.cc) that expose C++ benchmark functionality to Python, creating a potential attack surface if the module is distributed via PyPI without proper security review. Fix: Review the bindings before distribution, validate all inputs at the Python/C++ boundary, consider signing Python wheels, and document security considerations for Python users.
- Low · Build-system configuration review — cmake/, bazel/, BUILD.bazel, CMakeLists.txt. Multiple build systems are present (CMake, Bazel) with various toolchain configurations (cmake/llvm-toolchain.cmake, cmake/AddCXXCompilerFlag.cmake); complex build configurations can introduce security misconfigurations. Fix: Regularly audit build configurations, use security-focused compiler flags (e.g., -fstack-protector-strong, -D_FORTIFY_SOURCE=2), document security-relevant build options, and test with sanitizers (already present in workflows/sanitizer.yml).
- Low · Workflow file permissions — .github/workflows/. Multiple GitHub Actions workflows execute code; if workflow files can be modified by contributors, there is potential for supply-chain attacks via workflow manipulation. Fix: Restrict write permissions for workflow files, use branch-protection rules to require reviews, pin GitHub Actions to specific commit SHAs rather than tags, and audit all workflow modifications.
- Low · Missing SECURITY.md file — Repository root. No SECURITY.md file is visible in the repository root; this file lets security researchers report vulnerabilities responsibly. Fix: Create a SECURITY.md with clear instructions for responsible disclosure, including contact information and expected response times.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.