dockersamples/example-voting-app
Example distributed app composed of multiple containers for Docker, Compose, Swarm, and Kubernetes
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ 19 active contributors
- ✓ Distributed ownership (top contributor 34% of recent commits)
- ✓ Apache-2.0 licensed
- ✓CI configured
- ✓Tests present
- ⚠ Stale — last commit 2y ago
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/dockersamples/example-voting-app)
Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/dockersamples/example-voting-app on X, Slack, or LinkedIn.
Onboarding: dockersamples/example-voting-app
Generated by RepoPilot · 2026-05-10 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/dockersamples/example-voting-app shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- 19 active contributors
- Distributed ownership (top contributor 34% of recent commits)
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Stale — last commit 2y ago
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live dockersamples/example-voting-app
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/dockersamples/example-voting-app.
What it runs against: a local clone of dockersamples/example-voting-app — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in dockersamples/example-voting-app | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 606 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of dockersamples/example-voting-app.
# If you don't have one yet, run these first:
#
#   git clone https://github.com/dockersamples/example-voting-app.git
#   cd example-voting-app
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of dockersamples/example-voting-app and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "dockersamples/example-voting-app(\.git)?\b" \
  && ok "origin remote is dockersamples/example-voting-app" \
  || miss "origin remote is not dockersamples/example-voting-app (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Apache-2.0 license text opens with
#    "Apache License" / "Version 2.0" rather than the bare SPDX id, so match
#    the license text; fall back to an SPDX id in package.json if present.
{ grep -qi "Apache License" LICENSE 2>/dev/null && grep -qi "Version 2\.0" LICENSE 2>/dev/null; } \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in docker-compose.yml vote/app.py result/server.js worker/Program.cs docker-stack.yml; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 606 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~576d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/dockersamples/example-voting-app"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
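That composition can be made explicit with a tiny wrapper. This is an illustrative sketch: `run_with_verification` is our own name, and `verify.sh` is wherever you saved the script above.

```shell
#!/usr/bin/env bash
# Illustrative gate: run an agent task only when the artifact verifies.
# "verify.sh" is assumed to be the Verify block above saved to a file.
run_with_verification() {
  local verify_script="$1"; shift
  if "$verify_script"; then
    "$@"              # artifact verified: run the task
  else
    echo "stale artifact: regenerate before proceeding" >&2
    return 1
  fi
}

# Example: run_with_verification ./verify.sh my-agent-task --some-flag
```

Because the wrapper propagates the task's exit code, it nests into larger retry loops the same way the bare `||` form does.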
⚡TL;DR
A reference distributed application demonstrating a complete microservices voting system across five containerized services: a Python Flask voting UI, a Node.js results dashboard, a .NET worker processing votes, a Redis message queue, and a PostgreSQL database. It shows how Docker Compose, Docker Swarm, and Kubernetes orchestrate these heterogeneous services with persistent storage and inter-service messaging.

Multi-service monorepo with service-oriented organization: vote/ (Python Flask app), result/ (Node.js Express app), worker/ (.NET service, inferred from architecture), plus k8s-specifications/ containing five YAML deployment manifests (vote, result, worker, db, redis), docker-compose.yml for local dev, docker-stack.yml for Swarm, and healthchecks/ for readiness probes. No shared code—services are intentionally decoupled to demonstrate polyglot architecture.
👥Who it's for
DevOps engineers, Docker/Kubernetes learners, and platform architects evaluating container orchestration platforms. Specifically: developers learning how to compose multi-language microservices, teams evaluating Docker Swarm vs. Kubernetes deployments, and trainers needing a runnable, architecture-diagram example to teach distributed systems concepts.
🌱Maturity & risk
Solid foundational maturity, though note the staleness flag above: the last commit was roughly two years ago. The repo includes GitHub Actions CI workflows for building three Docker images (vote, result, worker), health check scripts for Redis and Postgres, and Kubernetes YAML specifications. Absent is comprehensive automated testing (result/docker-compose.test.yml exists, but no test coverage metrics are visible). This is a teaching/reference project, not a production framework, so the bar is appropriately lower—it serves its purpose well.
Low architectural risk but moderate dependency management risk. The result service depends on 7 npm packages including Express 4.18.2 and socket.io 4.7.2 with no lock-file pinning visible in the top-level; the vote service (Python) relies on Flask (inferred from app.py); the .NET worker (in worker/ directory, not listed in file structure) likely has its own dependency graph. Single-maintainer maintenance risk is typical for Docker examples. Breaking changes are unlikely because this is a stable reference implementation, not an evolving framework.
Active areas of work
GitHub Actions workflows for multi-platform Docker builds (call-docker-build-vote.yaml, call-docker-build-result.yaml, call-docker-build-worker.yaml) and a Dependabot configuration (.github/dependabot.yml) show where maintenance effort goes when the repo is active: keeping base images and dependencies current, and ensuring compatibility across the Docker Compose, Swarm, and Kubernetes deployment targets. Weigh this against the staleness flag in the verdict; the last commit was about two years ago.
🚀Get running
Check README for instructions.
Daily commands:
- Local dev: docker compose up, then vote on http://localhost:8080 and results on http://localhost:8081
- Docker Swarm: docker swarm init, then docker stack deploy --compose-file docker-stack.yml vote
- Kubernetes: kubectl create -f k8s-specifications/, then vote on port 31000 and results on port 31001 of any cluster node
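Services can take a few seconds to come up after you start them. A small helper (our own sketch, not from the repo) waits for the published ports before you open the browser; it uses bash's /dev/tcp so it needs no curl:

```shell
#!/usr/bin/env bash
# wait_for_port HOST PORT [TRIES]: poll once per second until the port
# accepts a TCP connection, or give up after TRIES attempts.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-30}" i=0
  while [ "$i" -lt "$tries" ]; do
    # Opening /dev/tcp in a subshell succeeds only if something is listening.
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# After `docker compose up -d`:
# wait_for_port localhost 8080 && echo "vote UI is up"
# wait_for_port localhost 8081 && echo "results UI is up"
```

The same helper works for the Swarm and Kubernetes targets if you substitute the NodePort (31000/31001) and a cluster node's address.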
🗺️Map of the codebase
- docker-compose.yml — Primary orchestration file defining all services (vote, result, worker, redis, db) and their networking; essential for understanding how the app runs locally
- vote/app.py — Entry point for the voting UI service; implements the Flask web interface that users interact with
- result/server.js — Entry point for the results display service; Node.js Express server that reads from PostgreSQL and serves real-time updates via Socket.IO
- worker/Program.cs — Worker service written in C# that consumes votes from Redis and persists them to PostgreSQL; critical for the data pipeline
- docker-stack.yml — Docker Swarm deployment manifest; required for understanding production-grade orchestration beyond Compose
- k8s-specifications/ — Kubernetes YAML specs directory showing how to deploy each service (vote, result, worker, db, redis) as separate Kubernetes resources
- README.md — Documents architecture, setup instructions, and deployment options across Docker Compose, Swarm, and Kubernetes
🛠️How to make changes
Add a new voting option
- Update the vote UI form with the new option button/radio (vote/templates/index.html)
- Ensure the POST handler captures the new option value and pushes it to Redis (vote/app.py)
- The worker already reads any vote from Redis generically, so no code change is needed there (worker/Program.cs)
- The results service already queries all rows from the votes table and streams via Socket.IO, so it auto-displays new options (result/server.js)
- Update the Angular chart config to render the new vote category if needed (result/views/app.js)
Add a new deployment target (e.g., Docker Compose variant)
- Create a new docker-compose.*.yml file (e.g., docker-compose.prod.yml) based on docker-compose.yml
- Override environment variables, resource limits, volume mount paths, or service replicas as needed in the new file
- Run with: docker compose -f docker-compose.yml -f docker-compose.prod.yml up
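A minimal sketch of such an override file. All values here are illustrative assumptions, including the OPTION_A/OPTION_B option-label variables; confirm the env vars the vote app actually reads against vote/app.py before relying on them:

```yaml
# docker-compose.prod.yml (hypothetical): only the keys you want to
# override; everything else is inherited from docker-compose.yml when
# both files are passed with -f.
services:
  vote:
    environment:
      OPTION_A: "Tabs"     # illustrative option labels
      OPTION_B: "Spaces"
    deploy:
      replicas: 2          # illustrative scale override
```

Later `-f` files win on conflicting keys, so the override file stays small and readable.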
Migrate vote storage to a different database
- Modify worker/Program.cs to connect to the new database instead of PostgreSQL and adapt the INSERT query
- Modify result/server.js to query the new database (change the pg connection string and query)
- Update docker-compose.yml to replace or supplement the postgres service definition
- Update k8s-specifications/db-deployment.yaml and db-service.yaml if deploying to Kubernetes
Add health checks and monitoring
- Create or update health check scripts for the vote and result services, mirroring the existing healthchecks/redis.sh and healthchecks/postgres.sh
- Add healthcheck directives to each Dockerfile and to the service definitions in docker-compose.yml
- Add liveness and readiness probes to each Kubernetes deployment spec (e.g., k8s-specifications/vote-deployment.yaml)
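The compose side of the second step might look like the fragment below. This is a hypothetical snippet, not taken from the repo: it assumes the vote container serves HTTP on its internal port 80 and has curl available inside the image; if it does not, substitute wget or a script alongside the existing healthchecks/ ones.

```yaml
# Hypothetical healthcheck block for the vote service in
# docker-compose.yml (interval/timeout/retries values are illustrative).
services:
  vote:
    healthcheck:
      test: ["CMD-SHELL", "curl -fsS http://localhost:80/ || exit 1"]
      interval: 15s
      timeout: 5s
      retries: 3
```

With this in place, `docker compose ps` reports the service as healthy/unhealthy, and dependent services can gate startup on it via `depends_on: condition: service_healthy`.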
🔧Why these technologies
- Python Flask (vote service) — Simple, lightweight web framework ideal for a basic form submission interface; fast to develop and deploy in a container
- Redis — In-memory queue for fast, reliable decoupling between vote submissions and persistence; enables the worker to process asynchronously
- PostgreSQL — Durable, relational database for persisting vote data with ACID guarantees; supports complex queries for results aggregation
- C# .NET Worker — Demonstrates multi-language support; robust background job processor with strong typing for consuming and inserting votes
- Node.js + Express + Socket.IO (result service) — Event-driven, non-blocking I/O for real-time WebSocket updates; Socket.IO broadcasts results to multiple browser clients efficiently
- Docker Compose, Swarm, and Kubernetes — Shows the same app deployable across three orchestration paradigms; demonstrates portability and vendor-neutral containerization
⚖️Trade-offs already made
- Redis queue instead of direct database writes from vote service
  - Why: Decouples the HTTP request (vote submission) from the slower persistence operation; enables independent scaling of vote and worker services
  - Consequence: Adds complexity (extra service) and potential for vote loss if Redis crashes before the worker persists; requires monitoring queue depth
- Polling the database in the result service (~5s intervals) instead of change-data-capture
  - Why: Simpler to implement and understand; works with any database; no CDC infrastructure required
  - Consequence: Results lag by up to 5 seconds; higher database query load; not true real-time
- Three separate services with distinct languages/frameworks
  - Why: Demonstrates heterogeneous microservice architectures; showcases Docker's ability to compose disparate tech stacks
  - Consequence: Increased operational complexity; more containers to monitor and debug; requires understanding Python, Node.js, and .NET
- Kubernetes specs alongside Docker Compose and Swarm
  - Why: Provides deployment flexibility; users can choose their orchestrator
  - Consequence: Three separate manifests to keep in sync; risk of drift between deployment modes
🚫Non-goals (don't propose these)
- Authentication and authorization—no user login or vote rate-limiting
- Real-time vote counting (results lag 5+ seconds due to polling)
- Horizontal scaling of PostgreSQL (single database instance)
- Multi-region or geo-distributed deployment
🪤Traps & gotchas
- Redis connectivity: vote and worker must reach Redis on the default port (6379, defined in docker-compose.yml); if you rename the redis service in compose, the vote and worker code won't find it.
- PostgreSQL initialization: the first startup creates the schema; the worker expects a votes table with id and vote columns. If you alter the schema, worker inserts will fail silently.
- Client IP tracking: the vote app (app.py) uses the client IP to enforce one vote per browser; this breaks behind a reverse proxy that doesn't forward X-Forwarded-For headers.
- Kubernetes NodePorts: the vote service uses port 31000 and result uses 31001, hardcoded in the k8s-specifications/ YAML; if those ports are taken on your cluster nodes, the deployments will fail.
- No persistent seed data: the seed-data/ service (Dockerfile and make-data.py) exists but is not wired into docker-compose.yml by default; to get test votes on startup, add it to the compose file.
🏗️Architecture
💡Concepts to learn
- Message Queue (Redis pub/sub) — The vote service publishes votes to Redis, and the worker subscribes asynchronously; this decoupling is central to why the app survives worker restarts and scales horizontally
- Docker service discovery and DNS — Services reference each other by name (e.g., vote talks to 'redis' not 127.0.0.1); Docker's embedded DNS resolver enables this; understanding this is critical to modifying hostname references across docker-compose.yml, docker-stack.yml, and k8s-specifications/
- Container health checks and readiness probes — healthchecks/ scripts and Kubernetes readiness probes prevent the app from attempting to vote or process votes before Redis/Postgres are actually ready; without these, race conditions cause silent failures
- Docker volume mounts for data persistence — PostgreSQL uses a Docker volume (db-data) so votes aren't lost when the container restarts; understanding volume lifecycle is essential to adding new persistent services or debugging 'votes disappeared' bugs
- Overlay networks and service-to-service encryption (Docker Swarm) — docker-stack.yml deploys services to a Swarm overlay network; traffic between services is encrypted and routable across multiple hosts, which compose (local bridge network) does not provide
- Kubernetes Service and Deployment abstractions — k8s-specifications/ separates Deployment (pod template and replicas) from Service (stable DNS and load balancing); this separation allows scaling and rolling updates without breaking inter-service communication
- Polyglot microservices and language-agnostic inter-process communication — The app runs Python, Node.js, and .NET in the same cluster, communicating only via Redis and HTTP(S); this pattern forces loose coupling and is why adding new services requires no changes to existing ones
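The queue decoupling in the first concept can be illustrated with a toy producer/consumer in pure shell. This is our own sketch, not repo code: a temp file stands in for the Redis list, `publish_vote` plays the vote service, and `consume_vote` plays the worker.

```shell
#!/usr/bin/env bash
# Toy illustration of vote -> queue -> worker decoupling. The producer
# only appends; the consumer drains independently, so either side can
# restart without losing queued votes.
queue="$(mktemp)"

publish_vote() {            # vote service analogue (Redis LPUSH)
  echo "$1" >> "$queue"
}

consume_vote() {            # worker analogue (Redis RPOP, FIFO order)
  local v
  v="$(head -n 1 "$queue")"
  [ -n "$v" ] || return 1   # queue empty
  tail -n +2 "$queue" > "$queue.tmp" && mv "$queue.tmp" "$queue"
  echo "$v"
}

publish_vote cats
publish_vote dogs
consume_vote                # prints "cats"; "dogs" waits in the queue
```

The point mirrors the real app: the producer never waits on persistence, and a slow or restarting consumer just leaves entries in the queue.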
🔗Related repos
- docker/awesome-compose — Curated collection of Compose examples covering similar multi-service patterns (databases, caches, web apps); this voting app is a flagship example within it
- kubernetes/examples — Official Kubernetes examples repository; contains similar multi-tier app deployments and can be used to compare Kubernetes idioms against the Swarm/Compose patterns shown here
- compose-spec/compose-spec — Reference specification for docker-compose and docker-stack YAML syntax; essential for understanding the docker-compose.yml and docker-stack.yml files in this repo
- moby/moby — Docker Engine source code; needed only if debugging container networking, image builds, or health check behavior across the Compose/Swarm/Kubernetes targets
- kelseyhightower/kubernetes-the-hard-way — Tutorial that teaches Kubernetes fundamentals from scratch; provides deeper context for why k8s-specifications/ is structured as it is, compared to a Compose file
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration test workflows for vote and worker services
The repo has test infrastructure (result/tests/Dockerfile, result/tests/tests.sh) but no GitHub Actions workflows for vote (Python) and worker (.NET) services. The result service has call-docker-build-result.yaml but vote and worker lack corresponding test execution workflows. This ensures code quality across all three main services before merge.
- [ ] Create .github/workflows/call-docker-test-vote.yaml that builds and runs vote/tests (or add test framework if missing)
- [ ] Create .github/workflows/call-docker-test-worker.yaml that runs .NET unit tests from worker/Worker.csproj
- [ ] Add test script similar to result/tests/tests.sh for vote service (Python pytest or unittest)
- [ ] Add test script for worker service (.NET test runner)
- [ ] Reference these workflows in the existing call-docker-build-vote.yaml and call-docker-build-worker.yaml
Add health check scripts for vote and worker services
The repo includes healthchecks/postgres.sh and healthchecks/redis.sh but is missing health checks for the three application services (vote, worker, result). These are critical for Docker Compose HEALTHCHECK directives and Kubernetes liveness/readiness probes defined in k8s-specifications/. Adding them will improve production reliability.
- [ ] Create healthchecks/vote.sh that checks if the Flask app responds on port 8080 (e.g., curl http://localhost:8080)
- [ ] Create healthchecks/worker.sh that verifies the .NET worker can connect to Redis/Postgres
- [ ] Create healthchecks/result.sh that checks if Node.js app responds on port 8081
- [ ] Update docker-compose.yml to reference these scripts in HEALTHCHECK instructions for vote, result, and worker services
- [ ] Update k8s-specifications deployments to use livenessProbe/readinessProbe based on the health check logic
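The final checklist item might look like the fragment below inside the container spec of k8s-specifications/vote-deployment.yaml. All values are illustrative assumptions (path /, internal port 80, timing), not taken from the repo:

```yaml
# Hypothetical probes for the vote container; tune delays/periods to
# how long the Flask app actually takes to start.
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
```

Readiness gates traffic from the Service; liveness restarts a wedged container. Keeping both avoids routing requests to a pod that is up but not yet serving.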
Add Docker image security scanning and SBOM generation to CI/CD
The repo has basic docker build workflows (.github/workflows/call-docker-build-*.yaml) but lacks vulnerability scanning (Trivy, Snyk) and SBOM (Software Bill of Materials) generation. Given this is a sample app used for learning Docker/Kubernetes, adding security scanning will educate users on best practices and catch dependency vulnerabilities in result/package.json, vote/requirements.txt, and worker/Worker.csproj.
- [ ] Add Trivy scanning step to each call-docker-build-*.yaml workflow after docker build (scan for HIGH/CRITICAL vulnerabilities)
- [ ] Add SBOM generation using syft or cyclonedx in the build workflows, storing artifacts in GitHub releases
- [ ] Create a security policy documenting how vulnerabilities are reported (.github/SECURITY.md if missing)
- [ ] Add badges to README.md showing latest security scan status
- [ ] Document the scanning results in PR comments for transparency
🌿Good first issues
- Add JavaScript unit tests to result/tests/ directory. The test Dockerfile and tests.sh exist but no actual .js test cases are present; add tests for Socket.io vote broadcasting using Jest or Mocha, which would improve reliability of real-time updates.
- Document the environment variables consumed by each service. vote/app.py, result/server.js, and the .NET worker each read env vars for Redis/Postgres hosts; create a ENV_VARS.md file listing each service's required and optional variables (e.g., REDIS_HOST, DB_PASSWORD) to reduce deployment friction.
- Add health check probes to Kubernetes deployments. k8s-specifications/ contains Deployments but no livenessProbe or readinessProbe specs; wire the existing healthchecks/postgres.sh and healthchecks/redis.sh scripts into db-deployment.yaml and redis-deployment.yaml YAML as exec probes to catch service failures faster.
⭐Top contributors
- @BretFisher — 34 commits
- @mikesir87 — 19 commits
- @sixeyed — 11 commits
- Mano Marks — 8 commits
- @jrburcio — 6 commits
📝Recent commits
- 63e9150 — Merge pull request #380 from dockersamples/change-ports (mikesir87)
- 620c76c — Change ports of vote/result ports from 5000/5001 to 8080/8081 (mikesir87)
- 1df1971 — Merge pull request #379 from dvdksn/example-voting-app-fixups (mikesir87)
- a4901e4 — lint: fix InvalidDefinitionDescription (dvdksn)
- 1940973 — lint: fix JSONArgsRecommended (dvdksn)
- 0d75901 — lint: fix FromAsCasing (dvdksn)
- 2932a6f — lint: fix LegacyKeyValueFormat (dvdksn)
- 8cd0cad — Merge pull request #292 from julienadam/main (mikesir87)
- 2dede80 — Adjust name of debug launch config (mikesir87)
- 8b0f5ad — Merge pull request #311 from dockersamples/fix-python-reloading (mikesir87)
🔒Security observations
- High · Debugger Exposed in Production Configuration — docker-compose.yml, result service entrypoint. The compose file runs the result service with 'nodemon --inspect=0.0.0.0', exposing the Node.js debugger on port 9229; binding the inspector to 0.0.0.0 accepts debugger connections from any interface. While the port is mapped to 127.0.0.1 in compose, this pattern enables remote code execution if the container is ever exposed. Fix: remove the --inspect flag from production builds; use it only in development with proper network isolation, and for local debugging prefer --inspect=localhost:9229 over 0.0.0.0.
- High · Missing Database Credentials Management — docker-compose.yml, result and worker services. No environment variables or secrets management are visible for the PostgreSQL connection; the result service connects to the db service without explicit credential configuration, suggesting hardcoded or default credentials may be in use. Fix: use Docker secrets (for Swarm) or properly handled .env files; never commit credentials; for production, use Kubernetes Secrets or an external secret manager.
- High · Insecure Dependency Versions (socket.io) — result/package.json, dependencies.socket.io. socket.io ^4.7.2 uses a caret constraint that accepts any 4.x release below 5.0.0; socket.io has had multiple security vulnerabilities, and the broad range increases exposure. Fix: pin to an exact patch version ('socket.io': '4.7.2' instead of '^4.7.2'), audit regularly with 'npm audit', and update with caution after security testing.
- High · Unencrypted Database Communication — docker-compose.yml db service; result/server.js (implied). The PostgreSQL connection (pg ^8.8.0) has no explicit SSL/TLS requirement visible in the compose file, so credentials may transit the network unencrypted. Fix: enforce TLS for database connections (e.g., 'sslmode=require' or 'sslmode=verify-full' in the pg connection string) and restrict database access with network policies.
- Medium · Missing Container Security Context — docker-compose.yml, all services. The compose configuration lacks read-only filesystems, non-root user execution, dropped Linux capabilities, and memory/CPU limits, so containers may run as root with excessive privileges. Fix: add user, read_only, cap_drop, and security_opt settings (e.g., 'user: 1000', 'read_only: true', 'cap_drop: [ALL]', 'security_opt: [no-new-privileges:true]') and set resource limits with 'deploy.resources.limits'.
- Medium · Unrestricted Port Exposure — docker-compose.yml, vote (port 8080) and result (port 8081) services. Both ports are published without host IP restrictions, making them reachable from any interface and widening the attack surface on shared networks. Fix: bind to specific IPs ('127.0.0.1:8080:80' for local-only access) or front the services with an authenticated reverse proxy; in production, add firewall rules or network policies.
- Medium · Deprecated/Outdated Dependencies — result/package.json (async, method-override). async ^3.1.0 and method-override ^3.0.0 are old packages (async last updated 2020, method-override 2017) and may contain unpatched vulnerabilities. Fix: review whether these packages are still required, consider modern alternatives, run 'npm audit' to identify known vulnerabilities, and update after compatibility testing.
- Medium · Missing HTTPS/TLS Configuration — docker-compose.yml vote and result services; README.md example URLs. Health checks and exposed services use plain HTTP, with no HTTPS enforcement or certificate management for the web services, so data is transmitted in clear text.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.