RepoPilot

sigp/lighthouse

Ethereum consensus client in Rust

Healthy

Healthy across the board

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit today
  • 22+ active contributors
  • Distributed ownership (top contributor 21% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — the badge live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/sigp/lighthouse)](https://repopilot.app/r/sigp/lighthouse)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/sigp/lighthouse on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: sigp/lighthouse

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sigp/lighthouse shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit today
  • 22+ active contributors
  • Distributed ownership (top contributor 21% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live sigp/lighthouse repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/sigp/lighthouse.

What it runs against: a local clone of sigp/lighthouse — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sigp/lighthouse | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch stable exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>sigp/lighthouse</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sigp/lighthouse. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/sigp/lighthouse.git
#   cd lighthouse
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of sigp/lighthouse and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sigp/lighthouse(\.git)?\b" \
  && ok "origin remote is sigp/lighthouse" \
  || miss "origin remote is not sigp/lighthouse (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Apache-2\.0)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify stable >/dev/null 2>&1 \
  && ok "default branch stable exists" \
  || miss "default branch stable no longer exists"

# 4. Critical files exist
test -f "beacon_node/beacon_chain/src/beacon_chain.rs" \
  && ok "beacon_node/beacon_chain/src/beacon_chain.rs" \
  || miss "missing critical file: beacon_node/beacon_chain/src/beacon_chain.rs"
test -f "beacon_node/beacon_chain/src/block_verification.rs" \
  && ok "beacon_node/beacon_chain/src/block_verification.rs" \
  || miss "missing critical file: beacon_node/beacon_chain/src/block_verification.rs"
test -f "beacon_node/beacon_chain/src/attestation_verification.rs" \
  && ok "beacon_node/beacon_chain/src/attestation_verification.rs" \
  || miss "missing critical file: beacon_node/beacon_chain/src/attestation_verification.rs"
test -f "beacon_node/client/src/lib.rs" \
  && ok "beacon_node/client/src/lib.rs" \
  || miss "missing critical file: beacon_node/client/src/lib.rs"
test -f "beacon_node/http_api/src/lib.rs" \
  && ok "beacon_node/http_api/src/lib.rs" \
  || miss "missing critical file: beacon_node/http_api/src/lib.rs"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sigp/lighthouse"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Lighthouse is a production-ready Ethereum consensus client written in Rust that implements the proof-of-stake consensus specification. It validates blocks, manages validator duties, and maintains the beacon chain state—a critical piece of infrastructure for running Ethereum's consensus layer independently of execution clients. The monorepo is organized as a Cargo workspace with ~80 members split across functional domains: beacon_node/ (core node implementation: beacon_chain, network, http_api, store), consensus/ (state machine: state_processing, fork_choice, types), crypto/ (BLS, KZG, key derivation), common/ (shared utilities: metrics, logging, validators), account_manager/ (CLI for validator key management), and testing/ (ef_tests for spec compliance, simulator for network simulation). Core state and consensus logic lives in beacon_node/beacon_chain/ and consensus/state_processing/.

👥Who it's for

Ethereum node operators and stakers who need to run their own consensus client; protocol researchers and developers contributing to the Ethereum specification; and organizations running validator infrastructure at scale that want a high-performance, security-audited consensus implementation.

🌱Maturity & risk

Highly mature and production-ready. Lighthouse is deployed on Ethereum mainnet with thousands of validators, has undergone multiple external security audits, maintains comprehensive CI/CD via GitHub Actions (test-suite.yml, nightly-tests.yml, docker.yml), and is actively developed. The codebase weighs in at roughly 11.8M lines of Rust and follows a two-branch release model (stable/unstable) consistent with long-term maintenance.

Standard open source risks apply.

Active areas of work

Active development on both stable and unstable branches. Recent activity visible in workflow configs shows ongoing work on reproducible Docker builds, test-suite optimization via nextest (config/nextest.toml), local-testnet orchestration, and continuous fuzzing. The nightly-tests and release workflows indicate regular release cadence and regression testing.

🚀Get running

Clone and build: git clone https://github.com/sigp/lighthouse.git && cd lighthouse && cargo build --release. The Makefile and .cargo/config.toml indicate cargo is the primary build tool. For development on unstable branch: git checkout unstable. See Lighthouse Book (lighthouse-book.sigmaprime.io) for network configuration before running.

Daily commands: After cargo build --release, run the beacon node: ./target/release/lighthouse beacon_node --network mainnet (or a testnet such as sepolia). The node exposes its HTTP API on port 5052 by default. For validator operations, run ./target/release/lighthouse account_manager validator create to set up staking keys. See .github/workflows/local-testnet.yml for multi-node setup examples using the simulator.

🗺️Map of the codebase

  • beacon_node/beacon_chain/src/beacon_chain.rs — Core BeaconChain struct and orchestration logic—all consensus operations flow through here.
  • beacon_node/beacon_chain/src/block_verification.rs — Block validation and verification pipeline; critical for consensus security.
  • beacon_node/beacon_chain/src/attestation_verification.rs — Attestation validation logic; core to consensus security and finality.
  • beacon_node/client/src/lib.rs — Main client initialization and wiring; entry point for beacon node startup.
  • beacon_node/http_api/src/lib.rs — HTTP API handler; primary interface for external consumers (validators, monitoring tools).
  • beacon_node/beacon_chain/src/fork_choice_signal.rs — Fork choice integration; critical for consensus rules and chain selection.
  • beacon_node/store/src/lib.rs — Persistent state and block storage abstraction; foundation for chain state.

🛠️How to make changes

Add a new Beacon HTTP API endpoint

  1. Define the request/response types in beacon_node/http_api/src/ following existing patterns (e.g., in route handlers) (beacon_node/http_api/src/lib.rs)
  2. Add a handler function that accesses BeaconChain state via the context passed to the handler (beacon_node/http_api/src/lib.rs)
  3. Register the route in the router setup, following the existing route conventions (e.g., /eth/v1/beacon/...) (beacon_node/http_api/src/lib.rs)
  4. If querying chain state, call methods on the BeaconChain instance (e.g., chain.canonical_head, chain.get_block(...)) in your handler (beacon_node/beacon_chain/src/beacon_chain.rs)
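
A minimal, hypothetical sketch of what steps 1 through 3 can look like in the warp style this crate uses (warp itself shows up in the dependency tree; every name below, such as ExampleResponse, ChainHandle, and the /eth/v1/beacon/example path, is invented for illustration, and the real handlers go through the crate's own context type and error helpers):

```rust
use std::sync::Arc;
use serde::Serialize;
use warp::Filter;

/// Hypothetical response types for the new endpoint (step 1).
#[derive(Serialize)]
struct ExampleResponse {
    data: ExampleData,
}

#[derive(Serialize)]
struct ExampleData {
    head_slot: u64,
}

/// Stand-in for the real BeaconChain handle carried by the http_api context.
struct ChainHandle;

impl ChainHandle {
    fn head_slot(&self) -> u64 {
        0 // placeholder: a real handler would read chain.canonical_head
    }
}

/// Build a `GET /eth/v1/beacon/example` route (steps 2 and 3).
fn example_route(
    chain: Arc<ChainHandle>,
) -> impl Filter<Extract = (warp::reply::Json,), Error = warp::Rejection> + Clone {
    warp::path!("eth" / "v1" / "beacon" / "example")
        .and(warp::get())
        .map(move || {
            // Step 4: read whatever chain state the endpoint needs.
            let body = ExampleResponse {
                data: ExampleData { head_slot: chain.head_slot() },
            };
            warp::reply::json(&body)
        })
}

fn main() {
    // Building the filter is enough to type-check the route; serving it would
    // require a tokio runtime (warp::serve(...).run(...)).
    let _route = example_route(Arc::new(ChainHandle));
}
```

In the real crate the new filter would be or-chained onto the existing router in lib.rs and read chain state through the BeaconChain handle rather than a stub.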

Add a new block or attestation verification check

  1. Identify which verification pipeline to extend: block verification or attestation verification (beacon_node/beacon_chain/src/block_verification.rs)
  2. Define the check as a separate function or method in the verification module, returning a Result (beacon_node/beacon_chain/src/block_verification.rs)
  3. Add the check to the existing verification sequence in verify_block_internal() or the attestation pipeline (beacon_node/beacon_chain/src/block_verification.rs)
  4. Add corresponding error variants to the errors module to report failures (beacon_node/beacon_chain/src/errors.rs)
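
A sketch of the shape steps 2 through 4 describe, with invented names (BlockInfo, ExampleBlockError, check_example_invariant); the real pipeline defines its error enums in block_verification.rs and errors.rs and threads much richer types through each stage:

```rust
/// Hypothetical error variants; the real crate defines these in errors.rs and
/// block_verification.rs (step 4).
#[derive(Debug)]
enum ExampleBlockError {
    /// The new invariant this check enforces was violated.
    ExampleInvariantViolated { slot: u64 },
}

/// Minimal stand-in for the block data the real check would receive.
struct BlockInfo {
    slot: u64,
    proposer_index: u64,
}

/// Step 2: the check is a small, pure function returning Result so it can be
/// unit-tested in isolation and slotted into the verification sequence.
fn check_example_invariant(block: &BlockInfo) -> Result<(), ExampleBlockError> {
    // Placeholder predicate: a real check would consult chain state and spec rules.
    if block.proposer_index == u64::MAX {
        return Err(ExampleBlockError::ExampleInvariantViolated { slot: block.slot });
    }
    Ok(())
}

/// Step 3: call sites in the existing pipeline propagate the error with `?`,
/// so a failing check rejects the block before it reaches fork choice.
fn verify_block_example(block: &BlockInfo) -> Result<(), ExampleBlockError> {
    check_example_invariant(block)?;
    // ...existing checks continue here...
    Ok(())
}

fn main() {
    let block = BlockInfo { slot: 123, proposer_index: 7 };
    assert!(verify_block_example(&block).is_ok());
    println!("example block passed the extra check");
}
```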

Add a new validator operation (create, import, recover)

  1. Define the operation subcommand in account_manager/src/validator/mod.rs using clap derive macros (account_manager/src/validator/mod.rs)
  2. Create a new handler file (e.g., new_operation.rs) following the pattern of create.rs or import.rs (account_manager/src/validator/create.rs)
  3. Implement the operation logic, using ValidatorStore and key management APIs for state persistence (account_manager/src/validator/create.rs)
  4. Wire the handler in the validator mod.rs match statement to invoke your operation (account_manager/src/validator/mod.rs)
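
A hedged sketch of steps 1 and 4, assuming clap 4 with the derive feature. All names are illustrative, and the existing account_manager may declare its CLI with clap's builder API rather than derive macros, so follow whatever style mod.rs already uses:

```rust
use clap::{Parser, Subcommand};
use std::path::PathBuf;

/// Hypothetical enum mirroring how validator subcommands could be declared (step 1).
#[derive(Subcommand)]
enum ValidatorCommand {
    /// The new operation being added (names are illustrative only).
    ExampleRecover {
        /// Path to the wallet or mnemonic input.
        #[arg(long)]
        input: PathBuf,
        /// Directory to write recovered keystores into.
        #[arg(long)]
        output_dir: PathBuf,
    },
}

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: ValidatorCommand,
}

/// Steps 2 and 3 would live in their own handler module (e.g. a new file next to create.rs).
fn run_example_recover(input: &PathBuf, output_dir: &PathBuf) {
    println!("recovering from {} into {}", input.display(), output_dir.display());
}

fn main() {
    // Step 4: the match statement dispatches each subcommand to its handler.
    match Cli::parse().command {
        ValidatorCommand::ExampleRecover { input, output_dir } => {
            run_example_recover(&input, &output_dir)
        }
    }
}
```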

Extend beacon chain metrics or observability

  1. Add Prometheus metric definitions (counters, gauges, histograms) in beacon_node/beacon_chain/src/metrics.rs (beacon_node/beacon_chain/src/metrics.rs)
  2. Emit the metric at relevant points in the beacon chain (e.g., on block processing in beacon_chain.rs) (beacon_node/beacon_chain/src/beacon_chain.rs)
  3. Verify metrics are exported via the HTTP metrics endpoint (beacon_node/http_metrics/src/lib.rs)
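
A self-contained sketch of the define-then-emit pattern using the plain prometheus crate; the real metrics.rs registers its metrics through the project's own helper macros and a shared static registry, so adapt the registration step to match:

```rust
use prometheus::{Histogram, HistogramOpts, IntCounter, Registry};

/// Step 1: define metrics once. The metric names here are hypothetical; plain
/// prometheus types are used only to keep the sketch self-contained.
struct ExampleMetrics {
    blocks_processed: IntCounter,
    block_processing_seconds: Histogram,
}

impl ExampleMetrics {
    fn new(registry: &Registry) -> prometheus::Result<Self> {
        let blocks_processed = IntCounter::new(
            "example_blocks_processed_total",
            "Hypothetical counter: blocks fully processed",
        )?;
        let block_processing_seconds = Histogram::with_opts(HistogramOpts::new(
            "example_block_processing_seconds",
            "Hypothetical histogram: wall-clock time spent processing a block",
        ))?;
        registry.register(Box::new(blocks_processed.clone()))?;
        registry.register(Box::new(block_processing_seconds.clone()))?;
        Ok(Self { blocks_processed, block_processing_seconds })
    }
}

fn main() -> prometheus::Result<()> {
    let registry = Registry::new();
    let metrics = ExampleMetrics::new(&registry)?;

    // Step 2: emit at the relevant point in block processing.
    let timer = metrics.block_processing_seconds.start_timer();
    // ... process the block ...
    timer.observe_duration();
    metrics.blocks_processed.inc();

    // Step 3: the http_metrics endpoint gathers and encodes the registry;
    // here we just confirm the metric families exist.
    println!("{} metric families registered", registry.gather().len());
    Ok(())
}
```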

🔧Why these technologies

  • Rust — Memory safety without garbage collection; essential for consensus-critical client where crashes or memory corruption could cause chain finality loss.
  • Libp2p (via lighthouse_network) — Battle-tested multi-protocol P2P stack supporting Ethereum's gossip and request-response requirements; handles peer management and reputation.
  • RocksDB (via store module) — High-performance embedded key-value store suitable for persistent chain state and block history at scale.
  • Tokio async runtime — Efficient async I/O for concurrent block processing, networking, and HTTP server without thread overhead.
  • Ethereum JSON-RPC (via execution_layer) — Standard interface to execution clients (Geth, Erigon, etc.) via Engine API for payload execution and state root validation.

⚖️Trade-offs already made

  • Single-threaded tokio runtime per client instance

    • Why: Simplifies state synchronization and avoids complex locking patterns in consensus-critical paths.
    • Consequence: Cannot leverage multi-core parallelism for I/O; horizontal scaling requires multiple beacon node instances.
  • Gossip-based block and attestation dissemination (no persistent mempool)

    • Why: Reduces DoS surface and matches Ethereum consensus layer design; validators batch attestations locally.
    • Consequence: Less sophisticated transaction prioritization; execution layer (EL) handles mempool separately.
  • Full validation before fork choice application

    • Why: Ensures only valid blocks affect consensus; prevents fork choice poisoning attacks.
    • Consequence: Increased latency in block processing; network delays can impact liveness under high load.
  • Slashing protection as optional off-chain database (not consensus-enforced)

    • Why: Validators own slashing protection; lighthouse can't prevent slashing if used incorrectly or with multiple validators per key.
    • Consequence: User error risk; requires careful key management and database migration strategies.

🚫Non-goals (don't propose these)

  • Does not validate Ethereum execution transactions—delegates to execution client via Engine API.
  • Does not implement full Danksharding beyond EIP-4844 blob handling (blob sidecar and KZG commitment verification, via the c-kzg dependency).
  • Does not

🪤Traps & gotchas

  1. Network config is critical: Running on the wrong network (mainnet vs. sepolia vs. custom testnet) causes desynchronization; see common/eth2_network_config/ for the required chain specs.
  2. Slashing protection is mandatory: Running the same validator key on two nodes simultaneously triggers slashing; account_manager/src/validator/slashing_protection.rs manages the slashing database—it must be backed up.
  3. Execution client requirement: Lighthouse is consensus-only; you must run a paired execution client (Geth, Erigon, etc.) on the same machine; beacon_node/execution_layer/ handles the Engine API connection.
  4. Database migration risk: Upgrading Lighthouse versions can trigger store schema migrations; always back up ~/.lighthouse/ before upgrading.
  5. Time sync is critical: The beacon chain assumes tight system clock synchronization (NTP); clock drift > 10 seconds causes missed duties.

🏗️Architecture

💡Concepts to learn

  • Beacon Chain — The core consensus layer Lighthouse implements; understanding slots, epochs, validators, and justification/finalization is essential to navigate the codebase's abstractions
  • Fork Choice (LMD-GHOST + Proto-Array) — Lighthouse's fork_choice crate implements this algorithm to select the canonical chain from competing blocks; it's non-obvious and critical to get right for consensus safety (a toy sketch follows this list)
  • BLS Signature Aggregation — Validators sign attestations with BLS; multiple signatures are aggregated to reduce gossip bandwidth; the crypto/bls crate handles this, essential for understanding network scalability
  • State Transition Function (STF) — Lighthouse's consensus/state_processing/ implements the deterministic STF that transforms state when a new block is processed; any divergence from spec breaks consensus
  • Slashing Protection — Prevents double-signing (Byzantine attack); Lighthouse maintains a slashing database in account_manager; misunderstanding this causes validator penalties and stake loss
  • Finality (Casper FFG) — Consensus nodes must track justified and finalized epochs; these determine when blocks are irreversible; the beacon_chain crate's finality logic underpins chain safety
  • RocksDB Persistent State — Lighthouse's beacon_node/store crate persists canonical state and blocks in RocksDB; understanding key encoding and state snapshots is vital for debugging sync issues and state corruption
  • ethereum/consensus-specs — The source-of-truth Ethereum 2.0 specification (Python) that consensus/state_processing/ and consensus/types/ implement; Lighthouse tracks spec versions precisely
  • sigp/milagro_bls — BLS signature library (Rust fork of Milagro) used by crypto/bls/ for validator signing—maintained by same Sigma Prime team
  • prysmaticlabs/prysm — Alternative Ethereum consensus client (Go-based); shares identical consensus rules but different architecture; comparison point for protocol correctness
  • status-im/nimbus-eth2 — Another mature consensus client (Nim-based) with similar functionality; interoperability testing across three clients (Lighthouse, Prysm, Nimbus) ensures Ethereum robustness
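
To make the fork-choice entry above concrete, here is a toy LMD-GHOST head selection over a pre-computed subtree-weight map; the real proto-array implementation stores nodes in a flat array and applies attestation-weight deltas incrementally instead of recomputing weights:

```rust
use std::collections::HashMap;

/// Toy block-tree node; real proto-array stores indices into a flat Vec
/// and updates weights incrementally via deltas.
struct Node {
    parent: Option<u64>,
    weight: u64, // sum of attesting balances in this node's subtree
}

/// Greedy heaviest-observed-subtree walk: from the justified root, repeatedly
/// descend into the heaviest child until reaching a leaf (the chain head).
fn lmd_ghost_head(tree: &HashMap<u64, Node>, justified_root: u64) -> u64 {
    let mut head = justified_root;
    loop {
        let mut best: Option<(u64, u64)> = None; // (block id, weight)
        for (&block, node) in tree {
            if node.parent == Some(head) {
                let candidate = (block, node.weight);
                // Tie-break on block id here; the spec tie-breaks on block root.
                if best.map_or(true, |(b, w)| candidate.1 > w || (candidate.1 == w && candidate.0 > b)) {
                    best = Some(candidate);
                }
            }
        }
        match best {
            Some((block, _)) => head = block,
            None => return head, // no children: this is the head
        }
    }
}

fn main() {
    // Tiny fork: root 0 has children 1 and 2; block 2's subtree is heavier.
    let tree = HashMap::from([
        (0, Node { parent: None, weight: 96 }),
        (1, Node { parent: Some(0), weight: 32 }),
        (2, Node { parent: Some(0), weight: 64 }),
        (3, Node { parent: Some(2), weight: 64 }),
    ]);
    assert_eq!(lmd_ghost_head(&tree, 0), 3);
    println!("head = {}", lmd_ghost_head(&tree, 0));
}
```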

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive integration tests for beacon_node/http_api endpoint security

The http_api module handles critical REST endpoints for the consensus client. There are no visible test files in the file structure for the http_api crate. Adding integration tests covering authentication, rate limiting, and input validation for endpoints would catch security regressions early and improve robustness.

  • [ ] Create beacon_node/http_api/tests/ directory with integration test structure
  • [ ] Add tests for common attack vectors (malformed JSON, oversized payloads, missing required fields)
  • [ ] Add tests for all major endpoints (beacon blocks, attestations, validators) with valid/invalid inputs
  • [ ] Reference existing test patterns from beacon_node/beacon_chain/benches and consensus/state_processing tests
  • [ ] Ensure tests run in the test-suite.yml GitHub Actions workflow
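
As a sketch of what such a test can look like, warp's built-in test helpers drive a route without binding a socket. The route and status expectations below are illustrative (the real tests would build the full http_api router against a harnessed BeaconChain), and the snippet assumes warp plus tokio with the macros feature as dev-dependencies:

```rust
use warp::Filter;

/// Hypothetical route under test; real tests would exercise the actual router.
fn health_route() -> impl Filter<Extract = (&'static str,), Error = warp::Rejection> + Clone {
    warp::path!("eth" / "v1" / "node" / "health").and(warp::get()).map(|| "OK")
}

#[tokio::test]
async fn rejects_wrong_method() {
    // Malformed-request style check: POST against a GET-only route must not succeed.
    let resp = warp::test::request()
        .method("POST")
        .path("/eth/v1/node/health")
        .reply(&health_route())
        .await;
    assert_eq!(resp.status(), 405);
}

#[tokio::test]
async fn accepts_valid_get() {
    let resp = warp::test::request()
        .method("GET")
        .path("/eth/v1/node/health")
        .reply(&health_route())
        .await;
    assert_eq!(resp.status(), 200);
    assert_eq!(resp.body().as_ref(), b"OK");
}
```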

Add missing unit tests for crypto/eth2_keystore KeyStore encryption/decryption edge cases

The eth2_keystore module handles sensitive validator keys and keystore operations critical to security. The file structure shows the module exists but no visible test subdirectory. Adding edge case tests for password handling, encryption formats, and key derivation would prevent security regressions.

  • [ ] Create crypto/eth2_keystore/tests/ directory for integration tests
  • [ ] Add tests for invalid/weak passwords, empty keystores, corrupted JSON structures
  • [ ] Add tests for EIP-2335 compliance (various keystore versions and formats)
  • [ ] Add tests for key recovery from corrupted/partially-readable keystores
  • [ ] Document test vectors used and reference against official eth2 keystore test vectors

Create missing documentation for slasher service configuration and operation

The slasher directory exists as a critical component but there is no visible README or configuration documentation. The slasher/service subdirectory suggests complex operational requirements. A comprehensive guide covering setup, configuration tuning, database maintenance, and monitoring would significantly improve contributor/operator experience.

  • [ ] Create slasher/README.md with overview of slasher responsibilities and architecture
  • [ ] Create slasher/service/CONFIGURATION.md documenting all configuration flags and tuning parameters
  • [ ] Add examples for common deployment scenarios (mainnet vs testnet, resource constraints)
  • [ ] Document the slasher database schema and migration procedures
  • [ ] Link from main README.md and CONTRIBUTING.md to slasher documentation

🌿Good first issues

  • Add typed error wrappers in beacon_node/beacon_chain/src/ for specific failure modes (currently errors are mostly string-based); improves debuggability and metrics for operators; a sketch follows this list.
  • Expand HTTP API test coverage in beacon_node/http_api/src/ for edge cases (state before finalization, invalid validator indices); the test-suite.yml shows tests exist but gaps remain in error path validation.
  • Document the proto-array fork choice invariants in consensus/fork_choice/ with inline code comments and a design doc; the algorithm is correct but non-obvious to new contributors.
  • Add metrics to common/metrics/ for validator balance distribution and attestation miss rates; operators currently lack visibility into health beyond proposal failures.
  • Extract generic database trait from beacon_node/store/ to enable in-memory test backend; currently all tests use RocksDB, slowing test suite and making store logic harder to unit-test.
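
For the typed-error item above, a minimal sketch of what such wrappers could look like, assuming the thiserror crate is acceptable as a dependency; the names and variants are invented, and the real change would extend the existing error types in beacon_node/beacon_chain/src/errors.rs:

```rust
use thiserror::Error;

/// Hypothetical typed wrapper for failure modes currently reported as strings;
/// each variant carries the context operators need for logs and metrics.
#[derive(Debug, Error)]
enum ExampleChainError {
    #[error("block {root} not found in the store")]
    BlockNotFound { root: String },

    #[error("state at slot {slot} is pre-finalization and was pruned")]
    StatePruned { slot: u64 },

    #[error("database error: {0}")]
    Database(#[from] std::io::Error),
}

fn load_block(root: &str) -> Result<Vec<u8>, ExampleChainError> {
    // Placeholder lookup; the real code would query beacon_node/store.
    Err(ExampleChainError::BlockNotFound { root: root.to_string() })
}

fn main() {
    // Typed variants let callers match on the failure mode instead of parsing
    // strings, and make per-variant metrics straightforward.
    match load_block("0xdeadbeef") {
        Ok(_) => println!("found"),
        Err(ExampleChainError::BlockNotFound { root }) => println!("missing block {root}"),
        Err(other) => println!("other failure: {other}"),
    }
}
```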


📝Recent commits

  • 176cce5 — Release v8.1.3 (michaelsproul)
  • 7ca91b8 — Bump c-kzg (michaelsproul)
  • dfd715b — Bump libp2p (michaelsproul)
  • e5facc2 — Bump yamux (michaelsproul)
  • 5d6f787 — Bump quinn (michaelsproul)
  • 91c2579 — Schedule Fulu fork for Gnosis mainnet (#9007) (dapplion)
  • c451ae7 — Use BTreeMap for state.validators pending updates (#9017) (michaelsproul)
  • e210533 — Scrap redundant docker builds on releases (#8999) (antondlr)
  • 8f9c1ca — Bump rustls and ignore unpatched version due to Warp (#9010) (michaelsproul)
  • 3deab9b — Release v8.1.2 (michaelsproul)

🔒Security observations

The Lighthouse Ethereum consensus client demonstrates good security practices: a security-focused design, open-source licensing, and documented vulnerability reporting procedures. No hardcoded credentials and no obvious SQL injection or XSS vectors were detected in the visible codebase, the workspace structure manages dependencies cleanly, and modern Rust practices inherently provide memory-safety benefits. The Docker infrastructure, however, has moderate concerns related to image pinning and container hardening. Main recommendations:

  • Pin Docker base images to specific digests for reproducibility.
  • Implement container hardening (non-root user execution).
  • Strengthen git dependency verification mechanisms.
  • Regularly security-scan the extensive dependency tree.

The project's active security policy and external audits suggest a strong security posture overall.

  • Medium · Base Image Uses Ubuntu 22.04 with Potential Unpatched Vulnerabilities — Dockerfile, line: FROM ubuntu:22.04. The final Docker image uses 'ubuntu:22.04' without a specific digest or latest patch version. While the Dockerfile runs 'apt-get upgrade', using floating tags can introduce non-deterministic builds and potential exposure to newly discovered vulnerabilities if the base image is updated. Fix: Use a specific Ubuntu digest (e.g., ubuntu:22.04@sha256:...) to ensure reproducible builds and explicit control over base image updates. Regularly rebuild images with the latest patches.
  • Medium · Rust Builder Image Without Digest Pinning — Dockerfile, line: FROM rust:1.88.0-bullseye AS builder. The builder stage uses 'rust:1.88.0-bullseye' without a digest pin. While the version is specified, using a mutable tag allows the underlying image to change, potentially introducing different dependencies or vulnerabilities. Fix: Pin the Rust image to a specific digest: rust:1.88.0-bullseye@sha256:... to ensure reproducible builds and full control over the build environment.
  • Medium · Cargo Git Fetch Configuration Enables Network-Based Attacks — Dockerfile, lines: ARG CARGO_USE_GIT_CLI=true, ENV CARGO_NET_GIT_FETCH_WITH_CLI. The Dockerfile sets 'CARGO_USE_GIT_CLI=true' and 'CARGO_NET_GIT_FETCH_WITH_CLI=true'. While necessary for some dependencies, using git CLI for fetching dependencies increases the attack surface for MITM attacks if not using SSH keys with proper verification. Fix: Ensure all git dependencies use SSH keys with proper host key verification. Consider using git's 'ssh://...' URLs instead of 'https://...' for sensitive dependencies. Implement SBOM (Software Bill of Materials) scanning.
  • Low · Cache Mounts May Persist Corrupted or Poisoned Artifacts — Dockerfile, lines: --mount=type=cache,target=/usr/local/cargo/registry. The Dockerfile uses '--mount=type=cache' for cargo registry and target directories. While this improves build performance, cached artifacts could potentially be poisoned or corrupted, affecting multiple builds. Fix: Implement regular cache validation and consider periodic cache invalidation. Monitor for checksum mismatches in cached packages. Use 'docker build --no-cache' periodically in CI/CD pipelines.
  • Low · No Security Headers or Container Hardening Visible — Dockerfile (general). The Dockerfile does not implement container hardening best practices such as running as a non-root user, setting resource limits, or defining security capabilities. Fix: Add 'RUN useradd -m -u 1000 lighthouse && USER lighthouse' to run the container as non-root. Consider adding security context in Kubernetes deployments with capabilities dropped and read-only root filesystem where possible.
  • Low · Minimal Runtime Image Missing Security Utilities — Dockerfile, runtime stage. The final stage removes build tools and only includes essential packages (libssl-dev, ca-certificates). While this reduces attack surface, there are no security scanning tools or audit capabilities in the final image. Fix: Consider including minimal security tools (e.g., curl for health checks, openssl for certificate verification) as needed. Implement external security monitoring and logging.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
