getlago/lago
Open Source Metering and Usage Based Billing API ⭐️ Consumption tracking, Subscription management, Pricing iterations, Payment orchestration & Revenue analytics
Healthy across the board
Weakest axis: copyleft license (AGPL-3.0) — review compatibility
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit today
- ✓ 17 active contributors
- ✓ Distributed ownership (top contributor 32% of recent commits)
- ✓ AGPL-3.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ AGPL-3.0 is copyleft — check downstream compatibility
What would change the summary?
- Use as dependency: Concerns → Mixed if relicensed under MIT/Apache-2.0 (rare for established libs)
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Onboarding doc
Onboarding: getlago/lago
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in the "Verify before trusting" section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/getlago/lago shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit today
- 17 active contributors
- Distributed ownership (top contributor 32% of recent commits)
- AGPL-3.0 licensed
- CI configured
- Tests present
- ⚠ AGPL-3.0 is copyleft — check downstream compatibility
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live getlago/lago
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/getlago/lago.
What it runs against: a local clone of getlago/lago — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in getlago/lago | Confirms the artifact applies here, not a fork |
| 2 | License is still AGPL-3.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of getlago/lago. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/getlago/lago.git
#   cd lago
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of getlago/lago and re-run."
  exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "getlago/lago(\.git)?\b" \
  && ok "origin remote is getlago/lago" \
  || miss "origin remote is not getlago/lago (artifact may be from a fork)"
# 2. License matches what RepoPilot saw. LICENSE files usually spell out
#    the full GNU title rather than the SPDX id, so accept either.
(grep -qiE "(AGPL-3\.0|GNU AFFERO GENERAL PUBLIC LICENSE)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"AGPL-3\.0\"" package.json 2>/dev/null) \
  && ok "license is AGPL-3.0" \
  || miss "license drift — was AGPL-3.0 at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical paths exist. events_processor has no extension and may be a
#    directory rather than a file, so use -e for that one.
test -f "events-processor/main.go" \
  && ok "events-processor/main.go" \
  || miss "missing critical file: events-processor/main.go"
test -e "events-processor/processors/events_processor" \
  && ok "events-processor/processors/events_processor" \
  || miss "missing critical path: events-processor/processors/events_processor"
test -f "events-processor/cache/cache.go" \
  && ok "events-processor/cache/cache.go" \
  || miss "missing critical file: events-processor/cache/cache.go"
test -f "events-processor/config/kafka/kafka.go" \
  && ok "events-processor/config/kafka/kafka.go" \
  || miss "missing critical file: events-processor/config/kafka/kafka.go"
test -f "events-processor/models/event.go" \
  && ok "events-processor/models/event.go" \
  || miss "missing critical file: events-processor/models/event.go"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/getlago/lago"
  exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).
⚡TL;DR
Lago is an open-source metering and usage-based billing API written primarily in Go (217KB codebase) that ingests real-time consumption events, aggregates them across pricing dimensions, and generates automated invoices for subscription, usage, and hybrid pricing models. It provides REST API endpoints for consumption tracking, subscription management, and payment orchestration without vendor lock-in. Dual-process architecture: main application logic lives in the root Go modules, while events-processor/ (a separate Go service with its own Dockerfile, go.mod, and .air.toml for live-reload) handles real-time event ingestion from Kafka/Kinesis/SQS (connectors/ configs). Deployment is Docker-centric with docker-compose files for local (docker-compose.dev.yml), light (deploy/docker-compose.light.yml), and production (deploy/docker-compose.production.yml) environments.
👥Who it's for
SaaS founders and billing engineers building product-led companies who need to track arbitrary usage metrics (API calls, compute hours, etc.) and automatically bill customers with per-unit or tiered pricing—without being locked into proprietary billing platforms like Stripe Billing or Zuora.
🌱Maturity & risk
Production-ready and actively developed: the project has significant GitHub stars, is SOC 2 Type II certified, and serves enterprise customers (PayPal, Mistral AI, Groq). CI/CD pipelines are comprehensive (.github/workflows/ shows Docker builds, release automation, and event-processor tests). Recent activity visible in Go module versions (go 1.25.0, gorm 1.31.1, opentelemetry 1.43.0) indicates active maintenance.
Moderate complexity: 217KB of Go code is substantial, and the events-processor depends on 20+ external packages including Kafka (franz-go), gRPC, OpenTelemetry, and GORM—any security vulnerability in these chains requires prompt updates. The architecture spans Go (main processor), Docker multi-arch builds, and Kubernetes-ready deployment (docker-compose.production.yml), increasing operational surface area. Single repo houses core billing logic and event processors, so breaking changes ripple widely.
Active areas of work
The project is actively evolving its event processing pipeline (events-processor/ has recent Dockerfile.dev and .air.toml configuration), integrating observability (OpenTelemetry, Datadog APM via dd-trace-go), and expanding connector support (connectors/kinesis.yml, connectors/sqs.yml suggest multi-queue support). CI workflows auto-build multi-architecture Docker images and release processors independently, indicating a push toward distributed deployment.
🚀Get running
Clone and run the local stack: git clone https://github.com/getlago/lago.git && cd lago && docker-compose -f docker-compose.dev.yml up. For the events-processor specifically: cd events-processor && go mod download && air (using .air.toml for live reload). Check .env.development.default for required environment variables (database, Redis, Kafka endpoints).
Daily commands:
Local dev: docker-compose -f docker-compose.dev.yml up starts all services (API, PostgreSQL, Redis, Kafka emulator). Events-processor alone: cd events-processor && go run ./cmd/... (check Procfile for entry point). Production-like: docker-compose -f deploy/docker-compose.production.yml up with environment file deploy/.env.production.example. Logs and health checks configured in docker-compose files.
🗺️Map of the codebase
- events-processor/main.go — Entry point for the events-processor service; initializes the Kafka consumer, database, cache, and tracing infrastructure
- events-processor/processors/events_processor — Core event-processing logic that handles consumption tracking and billing-metric calculations
- events-processor/cache/cache.go — Central cache abstraction managing billable metrics, charges, subscriptions, and filters for in-memory lookups
- events-processor/config/kafka/kafka.go — Kafka producer and consumer configuration; critical for the event-ingestion pipeline
- events-processor/models/event.go — Event domain model; foundation for all metering and usage-based billing calculations
- events-processor/config/database/database.go — Database connection pool and query execution; required for fetching billing configuration and persisting state
- events-processor/config/tracing/tracing.go — OpenTelemetry and Datadog tracing setup; enables observability across the event-processing pipeline
🧩Components & responsibilities
- Event Consumer (Kafka) (franz-go, Kafka) — Reads metering events from Kafka topic, deserializes, and routes to processor
- Failure mode: Consumer lag increases; events queued in Kafka; recovery via replay if processor recovers within retention window
- Cache Layer — Provides fast lookups for
🛠️How to make changes
Add a New Billable Metric Type
1. Define the metric model in events-processor/models/billable_metrics.go with parsing logic
2. Add cache-refresh logic in events-processor/cache/billable_metrics.go to load from the database
3. Extend event-processing logic in events-processor/processors/events_processor to apply metric calculations
4. Add unit tests following the pattern in events-processor/models/billable_metrics_test.go
Add a New Charge Filter or Dimension
1. Define the filter model in events-processor/models/charge_filters.go with matching logic
2. Add a cache layer in events-processor/cache/charge_filters.go to invalidate and refresh filters
3. Integrate filter evaluation into event processing within events-processor/processors/events_processor
4. Write integration tests in events-processor/models/charge_filters_test.go
Add Support for a New External Data Source
1. Create a config module in events-processor/config/ following the patterns in kafka.go and database.go
2. Add initialization in events-processor/main.go to create the client and start background refresh goroutines
3. Add cache-invalidation hooks in events-processor/cache/consumer.go to listen for refresh events
4. Create unit tests with mocked external calls, following the patterns in events-processor/config/kafka/consumer_test.go
🔧Why these technologies
- Go + Kafka — Low-latency, high-throughput event processing for metering at scale; Kafka provides durability and partitioning for distributed consumption
- PostgreSQL — Relational schema for subscription state, charges, and billing configuration; ACID guarantees for consistency
- Redis + In-Memory Cache — Sub-millisecond lookups for frequently accessed billing metadata (metrics, charges, subscriptions) to minimize DB queries under high event volume
- OpenTelemetry + DataDog — Distributed tracing and observability for event processing pipeline; critical for debugging latency and loss in a streaming system
- franz-go (Kafka client) — Pure Go client with native OpenTelemetry support and lower GC pressure than alternatives
⚖️Trade-offs already made
- Separate events-processor service vs. monolithic billing engine
  - Why: Decouples metering from invoice generation; allows independent scaling and deployment of consumption tracking
  - Consequence: Requires eventual-consistency reconciliation between the event stream and invoice state; adds operational complexity for failure recovery
- Multi-layer caching (in-memory + Redis)
  - Why: In-memory cache for lowest latency; Redis for cross-service consistency
  - Consequence: Cache-invalidation complexity; potential stale reads if the TTL is too long; requires careful monitoring of hit rates
- Event sourcing pattern via Kafka
  - Why: Provides an audit trail and allows replay of metering calculations; decouples producers from processors
  - Consequence: Requires idempotent processing logic; schema-evolution challenges; storage overhead
🚫Non-goals (don't propose these)
- Real-time invoice generation (async processing acceptable)
- OLAP analytics (OLTP PostgreSQL used; separate data warehouse expected)
- Machine learning on usage patterns (events consumed downstream by other services)
- Multi-tenancy isolation at processor level (assumed to be handled by parent billing service)
🪤Traps & gotchas
1. Environment variables: .env.development.default is not auto-sourced; load it manually or docker-compose will fail (check DATABASE_URL, REDIS_URL, KAFKA_BROKER_ADDRESSES).
2. Kafka/Redis required: docker-compose.dev.yml expects running Kafka and Redis; if you skip them, event ingestion silently fails.
3. Database migrations: the PostgreSQL schema must be initialized before the API starts; check the docker entrypoint or a migrations/ folder (not listed, but implied by the GORM setup).
4. Go 1.25 requirement: events-processor/go.mod pins Go 1.25.0 — older toolchains will fail; verify with go version.
5. Multi-arch Docker builds: release workflows use buildx (docker-build-multi-arch.yaml), which requires Docker BuildKit — local builds may differ from CI images.
🏗️Architecture
💡Concepts to learn
- Event-Driven Architecture — Lago's core pattern: users emit usage events (API calls, compute hours) that flow through Kafka→events-processor→aggregation→billing; understanding this async, decoupled model is essential to modify data flow.
- Token Bucket Rate Limiting — Usage-based billing relies on precise per-customer rate limiting (e.g., 'allow 1000 API calls/month'); Lago must implement or interface with token bucket algorithms to enforce quotas.
- Database Partitioning — docs/database_partitioning.md suggests Lago scales by horizontally partitioning billing data (e.g., by customer_id or time range); critical to understand before optimizing queries or adding new billing dimensions.
- gRPC & Protocol Buffers — Lago uses gRPC (google.golang.org/grpc v1.79.3) for inter-service communication and OpenTelemetry exporting; understanding gRPC's binary protocol and code generation is needed for adding new microservices.
- OpenTelemetry (OTel) — Lago integrates OpenTelemetry (v1.43.0) for distributed tracing and metrics collection; tracing usage events through Kafka→processor→database is essential for debugging production billing issues.
- Multi-Tenant Data Isolation — A billing system must guarantee customer A never sees customer B's usage or invoices; Lago's architecture must implement row-level security or schema partitioning per tenant.
- Idempotency in Billing — Usage events may arrive out-of-order or duplicate; idempotent processing ensures a duplicate event doesn't double-charge; critical when designing event handlers in events-processor.
🔗Related repos
- stripe/stripe-billing — Commercial billing platform that Lago directly competes with; reference for feature parity (recurring billing, proration, tax handling).
- getlago/lago-python — Official Python SDK for the Lago API; users of the Go API often need client libraries; maintained alongside this repo.
- openpgp-validator/openpgp-validator — Ecosystem validator for OpenPGP; Lago may use cryptographic signing for invoice integrity, and understanding validation patterns helps with security implementation.
- grafana/loki — Log aggregation that pairs with Lago's observability setup (OpenTelemetry exports); many Lago deployments use Loki for centralized logging.
- prometheus/prometheus — Metrics database that receives OpenTelemetry data; essential for monitoring Lago's own billing-pipeline health in production.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for events-processor Kafka consumer error handling
The events-processor uses franz-go (Kafka client) but the test coverage appears incomplete for consumer failure scenarios. Given this is a critical component for consumption tracking in a billing system, adding comprehensive integration tests for partition rebalancing, consumer lag, and message processing failures would significantly improve reliability. This aligns with the existing test files in events-processor/cache/*_test.go but extends to the consumer orchestration layer.
- [ ] Review events-processor/cache/consumer.go and consumer_test.go to understand current coverage
- [ ] Create events-processor/integration_tests/ directory for Kafka integration tests
- [ ] Add tests for: partition rebalancing scenarios, poison pill messages, consumer lag metrics, and offset commit failures
- [ ] Use miniredis (already in go.mod) and existing test patterns to mock Kafka behavior
- [ ] Add CI workflow step in .github/workflows/events-processor-tests.yml to run integration tests
Add OpenTelemetry metrics exporters validation tests for events-processor
The events-processor has OpenTelemetry dependencies (otlp exporters, SDK v1.43.0) configured but there are no visible tests validating metric/trace export configuration correctness. For a billing system tracking consumption metrics, ensuring telemetry is properly configured is critical. Add tests validating metric exporter initialization, gRPC connection health, and metric collection from the Kafka consumer pipeline.
- [ ] Create events-processor/telemetry/exporters_test.go to test OTLP metric/trace exporter initialization
- [ ] Add tests validating: gRPC connection to OTLP endpoint, metric schema correctness, trace sampling configuration
- [ ] Create events-processor/telemetry/mocks.go to mock OTLP collector responses
- [ ] Document expected OTEL environment variables in docs/monitoring.md (currently mentions monitoring but lacks exporter config details)
- [ ] Add smoke test to .github/workflows/events-processor-tests.yml that validates exporters initialize without errors
Add database partitioning strategy tests for events table in events-processor
docs/database_partitioning.md exists but there are no visible tests validating the partitioning strategy in events-processor. Given the high-volume event ingestion patterns (consumption tracking), adding tests that verify correct partition selection and query performance characteristics is essential. This ensures the billing system can scale horizontally.
- [ ] Review docs/database_partitioning.md to understand the partitioning strategy
- [ ] Create events-processor/integration_tests/partitioning_test.go with tests for: correct partition selection based on tenant/time, partition pruning in queries, performance under high cardinality
- [ ] Add test helper in events-processor to create test partitions dynamically
- [ ] Add database setup/teardown logic using jackc/pgx (already in go.mod) with partition creation
- [ ] Document expected partition key patterns and query patterns in docs/database_partitioning.md with code examples from events-processor
🌿Good first issues
- Add unit tests for events-processor/internal/aggregation/ (if it exists): Kafka event deserialization and metric aggregation logic is critical and likely under-tested; adding tests with testify and go-sqlmock would catch regressions early.
- Document the event schema in docs/: The connectors/ folder shows multiple input formats (HTTP, Kinesis, SQS) but there's no canonical event schema or examples in docs/; creating docs/event_schema.md with JSON examples would help contributors.
- Extract Badger v4 usage into a separate adapter/cache/ package: Currently Badger is likely mixed into core logic (imported in go.mod); refactoring into an interface-based cache layer would enable easier testing and swapping backends.
⭐Top contributors
Click to expand
Top contributors
- @vincent-pochet — 32 commits
- @IxDay — 16 commits
- @rsempe — 12 commits
- @jdenquin — 9 commits
- @dependabot[bot] — 6 commits
📝Recent commits
Click to expand
Recent commits
- c8b8c93 — Fix/Fix broken links (#736) (alexrudall)
- 9acd83e — fix(events-processor): Skip kafka commit when no record is commitable (#735) (toommz)
- d6aa3e6 — Merge pull request #734 from getlago/misc/release-v1-45-2 (domenicofalco)
- f61e55a — chore(release): bump version to v1.45.2 (domenicofalco)
- 731e18f — fix(events-processor): Fix producer keys limits (#733) (jdenquin)
- 2d2ba86 — chore(update): Use debian:trixie for events-processor (#691) (jdenquin)
- 1954d4c — feat(connectors): Update kinesis config (#732) (jdenquin)
- a12752f — feat(connectors): Add Kinesis connector (#727) (jdenquin)
- fff5858 — feat(cache): Add in memory cache (#639) (jdenquin)
- 86af94f — chore(deps): bump github.com/jackc/pgx/v5 in /events-processor (#730) (dependabot[bot])
🔒Security observations
- High · Hardcoded Default Credentials in Configuration — docker-compose.yml, .env.development.default, deploy/.env.*.example. The docker-compose.yml and .env.development.default files contain hardcoded default credentials, including POSTGRES_PASSWORD=changeme and references to SECRET_KEY_BASE with placeholder values. These defaults are used in development, but the presence of example files with weak credentials increases risk if not properly managed in production. Fix: Ensure all example files clearly mark credentials as REQUIRED to be changed. Use secrets-management tools (Docker Secrets, Kubernetes Secrets, HashiCorp Vault) for production. Never commit actual secrets to the repository.
- High · Incomplete/Truncated Security Configuration — docker-compose.yml (x-backend-environment section). The backend environment variables are truncated at 'LAGO_RSA_PRI', suggesting sensitive RSA private-key configuration is incomplete or cut off in the provided context. This could indicate incomplete secret management or exposure of cryptographic material. Fix: Verify that RSA private keys and other cryptographic material are stored securely via environment-variable management or a secrets-management system. Ensure keys are never logged or exposed in configuration files.
- High · Potential SQL Injection via ORM — events-processor/cache/ and the main application database layer. The codebase uses GORM (gorm.io/gorm) with the PostgreSQL driver. While GORM provides parameterized-query protection by default, the presence of raw SQL query patterns in a billing system handling sensitive financial data requires careful review. Fix: Audit all database queries to ensure parameterized queries are used exclusively. Avoid string concatenation for SQL construction. Use GORM's query-builder methods. Add a code-review step specifically for database operations.
- Medium · Dependency Version Mismatch and Inconsistency — events-processor/go.mod. Multiple OpenTelemetry dependencies are pinned to v1.38.0 while the main SDK is at v1.43.0, creating potential compatibility issues. Additionally, DataDog agent dependencies reference development versions (0.75.0-devel, 0.72.3), which may contain unpatched vulnerabilities. Fix: Align all OpenTelemetry SDK and exporter versions to the same release. Replace -devel dependencies with stable releases. Run automated dependency scanning (Dependabot, Snyk).
- Medium · Missing Input Validation Framework — events-processor/config/kafka/, events-processor/cache/. The events-processor uses franz-go for Kafka consumption and processes billing events. Without evidence of input-validation middleware or schema validation, there is a risk of processing malformed or malicious events that could corrupt billing data. Fix: Implement strict schema validation for all incoming Kafka events (protobuf or JSON Schema). Add rate limiting and anomaly detection for event processing. Log and quarantine invalid events.
- Medium · Insufficient Secrets in Environment Examples — .env.development.default, deploy/.env.light.example, deploy/.env.production.example. The example files contain incomplete secret placeholders. REDIS_PASSWORD defaults to an empty string (REDIS_PASSWORD:-), which could leave Redis instances unprotected in staging environments. Fix: Require explicit password configuration in all example files, document that it is mandatory for production, and validate required secrets at startup so the service fails fast if they are missing.
- Medium · Exposed Default Ports without Rate Limiting — docker/nginx.conf, docker-compose.yml. The docker-compose configuration exposes standard ports (3000 for the API, 5432 for PostgreSQL, 6379 for Redis) without visible rate limiting, DDoS protection, or WAF configuration in the provided nginx config. Fix: Add nginx rate limiting for API endpoints, enable DDoS protection (Cloudflare, AWS Shield), restrict direct database and cache access to internal networks, and apply network policies in Kubernetes deployments.
- Medium · Missing Security Headers Configuration — undefined. While nginx.conf is present, there is no visible evidence of security headers (CSP, X-Frame-Options, X-Content-Type-Options). Fix: undefined
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.