sogou/workflow
C++ Parallel Computing and Asynchronous Networking Framework
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit today
- ✓ 4 active contributors
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Small team — 4 contributors active in recent commits
- ⚠ Single-maintainer risk — top contributor 92% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/sogou/workflow)
Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/sogou/workflow on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: sogou/workflow
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sogou/workflow shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit today
- 4 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Small team — 4 contributors active in recent commits
- ⚠ Single-maintainer risk — top contributor 92% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live sogou/workflow
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/sogou/workflow.
What it runs against: a local clone of sogou/workflow — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sogou/workflow | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sogou/workflow. If you don't
# have one yet, run these first:
#
# git clone https://github.com/sogou/workflow.git
# cd workflow
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of sogou/workflow and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sogou/workflow(\.git)?$" \
  && ok "origin remote is sogou/workflow" \
  || miss "origin remote is not sogou/workflow (artifact may be from a fork)"
# 2. License matches what RepoPilot saw. The Apache LICENSE file opens with
# "Apache License" / "Version 2.0", not the SPDX id "Apache-2.0".
(grep -qi "apache license" LICENSE 2>/dev/null \
  && grep -qiE "version 2\.0" LICENSE 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Critical files exist
for f in \
  src/factory/HttpTaskImpl.cc \
  src/client/WFHttpChunkedClient.h \
  src/factory/KafkaTaskImpl.cc \
  src/client/WFMySQLConnection.h \
  docs/en/about-exit.md
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sogou/workflow"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Sogou C++ Workflow is an enterprise-grade asynchronous networking and parallel computing framework that powers Sogou's 10+ billion daily requests. It provides unified task scheduling for HTTP/Redis/MySQL/Kafka protocol clients, custom RPC servers, and compute tasks—allowing developers to compose complex workflows (series, parallel, DAG) where networking and computation are first-class primitives executed on a shared event-driven engine.

Monolithic framework organized by concern: src/core/ contains the event loop and task scheduler; src/protocol/ implements HTTP/Redis/MySQL/Kafka parsers; src/client/ and src/server/ expose high-level APIs (WFHttpServer, WFHttpTask); docs/ has feature deep-dives (about-dns.md, about-service-governance.md); benchmark/ contains production load tests.

Build system supports CMake (CMakeLists.txt), Xmake (xmake.lua), Bazel (BUILD, WORKSPACE), and GNU Make.
👥Who it's for
C++ backend engineers at companies like Sogou building high-concurrency microservices, search systems, and cloud platforms who need production-tested async I/O, service governance, and task orchestration without writing raw epoll/iocp code. Also used by framework builders (e.g., srpc maintainers) implementing RPC stacks on top of Workflow's protocol abstraction.
🌱Maturity & risk
Production-ready and battle-tested: handles Sogou's 10+ billion daily requests across search, ads, and cloud services. Active development visible in CI/CD setup (.github/workflows/ci.yml, .github/workflows/xmake.yml), support for multiple build systems (CMake, Xmake, Bazel), and cross-platform coverage (Linux, macOS, Windows as a separate branch). Large codebase (1M+ C++ LOC) and established Apache 2.0 license indicate mature governance.
Low risk for production use cases: Sogou's backing and 10B req/day validation removes speculation risk. Risks: Windows support is gated to a separate branch (not mainline), requiring conditional build logic; codebase size (1M+ LOC) means the learning curve is steep; the dependency management strategy is not explicit in the file list (check vcpkg/Conan usage). Sogou's stewardship mitigates single-maintainer risk, though the maintenance signals above show one contributor authoring 92% of recent commits.
Active areas of work
Active CI/CD: .github/workflows/ shows automated testing on each commit (ci.yml) and Xmake build validation (xmake.yml). Documentation being maintained in both English (docs/en/) and Chinese (docs/). Benchmark suite actively updated (benchmark-01-http_server.cc, benchmark-02-http_server_long_req.cc). No explicit PR or milestone data in file list, but multi-build-system support (CMake + Xmake + Bazel) suggests ongoing infrastructure improvements.
🚀Get running
Clone and build with CMake: git clone https://github.com/sogou/workflow.git && cd workflow && mkdir build && cd build && cmake .. && make -j$(nproc). For Xmake: run xmake from the repo root. For GNU Make: run make from the repo root (GNU make picks up GNUmakefile automatically, so -f is unnecessary). Windows users: switch to the windows branch and use the IOCP-based build. Run an example HTTP server: binaries will appear in build/output or similar (check CMakeLists.txt output paths).
Daily commands:
For HTTP server example (from README): compile any .cc file in examples/ or write your own using WFHttpServer; link against built libworkflow.a. Core server loop: WFHttpServer server([](WFHttpTask *task) { /* handler */ }); server.start(8888);. For compute workflows: use Workflow::create_series(), Workflow::create_parallel() to compose tasks. Async callback-driven—no blocking calls in handlers.
🗺️Map of the codebase
- src/factory/HttpTaskImpl.cc — Core HTTP request/response handling implementation; essential for understanding how all HTTP-based workflows are executed
- src/client/WFHttpChunkedClient.h — HTTP client interface definition; foundational for building HTTP tasks and understanding the async networking contract
- src/factory/KafkaTaskImpl.cc — Kafka protocol implementation in the factory layer; demonstrates how to add new protocol support to the framework
- src/client/WFMySQLConnection.h — Database connection abstraction; shows how the framework manages stateful connections and resource pools
- docs/en/about-exit.md — Explains task completion and callback mechanisms; critical for understanding the callback-driven programming model
- CMakeLists.txt — Build system entry point; defines core library dependencies and compilation targets across all modules
- README.md — Primary documentation and design philosophy; describes the framework's async model and when to use it
🛠️How to make changes
Add Support for a New Protocol
- Create a client connection class in src/client/ (e.g., WFMyProtocolClient.h) that inherits from the framework's connection base and implements send/receive logic (src/client/WFMyProtocolClient.h)
- Implement the protocol state machine and message parsing in WFMyProtocolClient.cc, following the pattern used by WFMySQLConnection.cc and WFKafkaClient.cc (src/client/WFMyProtocolClient.cc)
- Create a task factory implementation in src/factory/MyProtocolTaskImpl.cc that uses WF_CREATE_* macros to define task creation and callback binding (src/factory/MyProtocolTaskImpl.cc)
- Add a tutorial in docs/en/tutorial-XX-myprotocol_cli.md showing basic usage (request creation, callback handling, error management) (docs/en/tutorial-XX-myprotocol_cli.md)
- Register your factory in src/factory/CMakeLists.txt to ensure compilation and linking (src/factory/CMakeLists.txt)
Create a New HTTP Server Application
- Include WFHttpServer.h and create a WFHttpServer instance with a target port number (docs/en/tutorial-04-http_echo_server.md)
- Define a request handler lambda/function that inspects req->type, req->path, and populates resp->body following the HTTP task callback contract (docs/en/about-exit.md)
- Call server.start() after setting up routes, and implement graceful shutdown using WFFacilities or signal handlers (docs/en/tutorial-05-http_proxy.md)
- Optionally add HTTPS support by configuring an SSL context and binding to port 443, following docs/en/about-config.md patterns (docs/en/about-config.md)
Implement Parallel Task Execution with Dependencies
- Use ParallelWork or SeriesWork containers from the core API to combine multiple HTTP/database tasks (docs/en/tutorial-06-parallel_wget.md)
- Define task execution order: nest SeriesWork for sequential steps and use ParallelWork::add() to run independent requests concurrently (docs/en/tutorial-11-graph_task.md)
- Set a completion callback on the final work container to aggregate results and handle errors from any failed subtask (docs/en/about-exit.md)
- Configure timeouts at the individual task level or container level using set_timeout() to prevent cascading hangs (docs/en/about-timeout.md)
Configure and Use Connection Pooling
- Inspect docs/en/about-resource-pool.md and WFMySQLConnection.h to understand the pooling strategy and connection limits (docs/en/about-resource-pool.md)
- When creating MySQL/Redis tasks via the factory, pass a ResourcePool config object or use default pool settings (src/factory/MySQLTaskImpl.cc)
- The framework automatically returns connections to the pool after task completion; monitor pool metrics via WFFacilities counters (docs/en/about-counter.md)
🔧Why these technologies
- C++ with std::function & lambdas — Enables high-performance async/callback-driven architecture without garbage collection; zero-copy abstractions for networking
- Event loop & non-blocking I/O (epoll/kqueue/IOCP) — Handles millions of concurrent connections with minimal memory footprint; supports 10+ billion requests per day at Sogou scale
- Task factory pattern — Decouples protocol details from application logic; allows adding new protocols (HTTP, MySQL, Kafka, Redis, DNS, Consul) independently
- Connection pooling & resource management — Prevents connection exhaustion and enables reuse; critical for large-scale services managing thousands of backend connections
⚖️Trade-offs already made
- Callback-based async model instead of async/await or coroutines
  - Why: Compatible with older C++ standards (C++11+); avoids runtime overhead of coroutine state machines
  - Consequence: Requires careful callback chaining (callback hell); developers must manually manage task dependencies and error propagation
- Single-threaded event loop per process by default
  - Why: Eliminates lock contention and simplifies concurrent state management; ideal for I/O-bound workloads
  - Consequence: CPU-bound tasks (e.g., image processing) should be offloaded to thread pools; vertical scaling requires multi-process deployment
- Synchronous protocol client implementations (no streaming parsers for large payloads)
  - Why: Simpler API and easier debugging; reduces buffer management complexity
  - Consequence: Large file uploads/downloads may require buffering in memory; not ideal for streaming media without custom extensions
- Factory pattern for task creation instead of direct class instantiation
  - Why: Allows runtime selection of protocol handlers and task implementations without exposing internal details
  - Consequence: Less flexibility for advanced customization; harder to subclass or inject custom behaviors
🚫Non-goals (don't propose these)
- Real-time guarantees—framework optimizes for throughput and latency percentiles, not worst-case latency
- WebSocket or binary protocol multiplexing over HTTP/2 (HTTP/1.1 only; use external libraries for these)
- Built-in authentication/authorization—framework provides connection hooks; application must implement security policies
- Cross-platform Windows support at feature parity—Windows support is present but Linux/macOS receive more testing and optimization
- Automatic database query optimization or ORM—framework provides raw protocol access; applications must handle SQL/query building
🪤Traps & gotchas
- Windows build is on a separate branch (github.com/sogou/workflow/tree/windows), not mainline — expect conditional include guards if cross-platform.
- epoll/kqueue availability is platform-specific; some features (async file I/O) are Linux-only (check docs/).
- Task callbacks execute in a worker thread pool by default — non-thread-safe mutations require synchronization (no automatic thread safety).
- Upstream/load balancing requires explicit upstream configuration; the default behavior may not scale without setup.
- Message serialization (TLV, HTTP) is zero-copy where possible but requires understanding the buffer lifecycle — data must remain valid until task completion.
🏗️Architecture
💡Concepts to learn
- Task Graph Execution & DAG Scheduling — Workflow's core differentiator—understanding how series(), parallel(), and custom DAGs execute is essential to composing complex multi-step workflows (e.g., fetch HTTP → query DB → aggregate → respond)
- Zero-Copy Protocol Parsing (Ring Buffer + Slice) — Workflow achieves high throughput (10B requests/day) by avoiding buffer copies during HTTP/Redis parsing; understanding message framing and offset-based parsing is critical for protocol extensions
- Upstream & Load Balancing (Service Discovery) — Production microservice requirement—Workflow's Upstream abstraction handles consistent hashing, round-robin, and failover, essential for scaling beyond single-server deployments
- Resource Pool & Connection Pooling — Workflow manages HTTP/Redis/MySQL connection pools transparently; understanding pool exhaustion, TTL, and keepalive prevents subtle production bugs in high-concurrency scenarios
- Asynchronous Timeout & Timer Management — Every task supports fine-grained timeouts (connect, read, request); timers are scheduled alongside I/O events, making timeout handling a first-class async primitive
- Counter & Conditional Synchronization — Lightweight coordination primitives for cross-task signaling (e.g., fan-out 10 requests, wait for 8 replies before proceeding); replaces mutex/condition variable patterns for async code
- Custom Protocol Implementation (Message Interface) — Workflow abstracts protocol codecs via Message subclasses; building custom RPC requires implementing serialize/deserialize and integrating with WFServer/WFClient templates
🔗Related repos
- sogou/srpc — RPC framework built directly atop Workflow; implements srpc, brpc, trpc, and thrift protocols using Workflow's task and message abstraction as its foundation
- chriskohlhoff/asio — Comparable C++ async networking library used in Boost; Asio is header-only and more portable, while Workflow is more tightly coupled to task scheduling and Sogou's patterns
- libevent/libevent — Alternative event loop (epoll/kqueue abstraction); Workflow wraps a similar model but adds task graph composition and protocol codec layers
- grpc/grpc — Production RPC system with protocol buffers; Workflow is lower-level and language-native (no IDL), allowing custom serialization and tighter control
- facebook/folly — Facebook's async utilities (Future, Promise, thread pools); Workflow integrates similar concepts into a unified task-driven framework optimized for protocol work
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for DNS resolution module (docs/about-dns.md)
The repo has detailed DNS documentation but no visible dedicated test suite. Given that DNS is critical for async networking operations and the framework handles 10B+ requests daily, robust DNS tests are essential. This would cover cache behavior, timeout handling, IPv4/IPv6, and edge cases.
- [ ] Create test/dns_test.cc with unit tests for WFDnsClient
- [ ] Add tests for DNS caching mechanisms and TTL handling
- [ ] Test timeout scenarios and fallback logic
- [ ] Test IPv4 and IPv6 resolution
- [ ] Integrate tests into CMakeLists.txt and GNUmakefile
Add Windows-specific CI workflow and fix platform-specific code paths
README shows Windows support, but .github/workflows only contains ci.yml and xmake.yml which likely target Linux/macOS. The codebase likely has #ifdef WIN32 blocks that aren't being validated. A dedicated Windows CI job would catch platform-specific regressions early.
- [ ] Create .github/workflows/windows-ci.yml targeting Windows Server runners
- [ ] Configure workflow to build with MSVC and test core modules (HTTP, Redis, connection pooling)
- [ ] Add CMake flags for Windows-specific features (e.g., IOCP)
- [ ] Document Windows build prerequisites in README or CONTRIBUTING.md
- [ ] Test against both static and dynamic runtime libraries
Create missing tutorial documentation for advanced features (MySQL, Kafka)
docs/en/ has tutorials 1-13 listed in README but only tutorials 1-4 appear in the file structure. Tutorials 12 (MySQL) and 13 (Kafka) are documented as existing but missing from the actual repo. These are valuable for users learning service integration patterns.
- [ ] Create docs/en/tutorial-12-mysql_cli.md with Sogou MySQL API examples
- [ ] Create docs/en/tutorial-13-kafka_cli.md with producer/consumer patterns
- [ ] Add Chinese versions: docs/tutorial-12-mysql_cli.md and docs/tutorial-13-kafka_cli.md
- [ ] Include runnable code examples matching docs/en/tutorial-04-http_echo_server.md style
- [ ] Add references in README and README_cn.md
🌿Good first issues
- Add comprehensive unit tests for src/protocol/redis_parser.c (likely only integration tested); see docs/about-redis.md for protocol coverage gaps and write tests mirroring benchmark-01 style.
- Extend docs/en/ with a worked example of custom RPC protocol implementation using WFServer<CustomMessage> template—currently only HTTP/Redis examples exist, blocking users from understanding Message interface.
- Add CMake package config (find_package Workflow) and install targets to CMakeLists.txt for easier downstream consumption via package managers (currently build-only, no pkg-config or CMakeFindModule).
⭐Top contributors
- @Barenboim — 92 commits
- @kedixa — 4 commits
- @holmes1412 — 3 commits
- @bkmgit — 1 commit
📝Recent commits
- 2401805 — Remove 'alloca()' from kafka parser. (#1812) (Barenboim)
- 08b7258 — Poller max open files limited to 1M. (#1811) (Barenboim)
- 935f6d3 — Fix KafkaMessage boundary checks (#1810) (kedixa)
- 8dc6502 — Fix UpstreamPolicie: use size_t when counting for cur_idx. (#1809) (holmes1412)
- c46d6b8 — Fix WFConsultClient::init() bug when specifying an SSL_CTX. (Barenboim)
- fa48b35 — 0.11.11 -> 1.0.0 (Barenboim)
- 078f618 — Use C++17 to be compatible with latest GTest 1.17 (#1805) (bkmgit)
- 5cde50c — Fix URI parser warning. (Barenboim)
- fa51c22 — Disable getting connection after server task callback. (#1803) (Barenboim)
- 46b129f — Include errno.h in ExecRequest.h (#1797) (kedixa)
🔒Security observations
Sogou Workflow is an enterprise-grade C++ framework with generally sound architecture. The main security concerns are: (1) Incomplete dependency management visibility preventing vulnerability assessment, (2) Potential input validation gaps in custom protocol handlers typical of low-level networking frameworks, (3) Network service exposure requiring explicit security configuration verification, and (4) CI/CD pipeline configuration not visible for review. The framework handles billions of requests daily, suggesting maturity, but security hardening documentation and automated vulnerability scanning should be emphasized. The codebase lacks visible hardcoded secrets, but comprehensive static analysis of actual implementation code is recommended.
- Medium · Potential Network Service Exposure — src/client/WFConsulClient.cc, benchmark/benchmark-01-http_server.cc, docs/tutorial-19-dns_server.md. The codebase includes HTTP server implementations (benchmark-01-http_server.cc, benchmark-02-http_server_long_req.cc) and network services (WFConsulClient, DNS server). Without visible security-header configuration or TLS enforcement documentation, these services may be vulnerable to man-in-the-middle attacks or unencrypted data transmission. Fix: Ensure all network services enforce TLS/SSL encryption by default. Document security requirements for HTTP server configuration, including mandatory security headers (HSTS, X-Frame-Options, etc.). Implement certificate pinning for external service communication.
- Medium · Missing Dependency Vulnerability Tracking — Root directory (no dependency manifest files). The provided 'Dependencies/Package file content' is empty. Without a clear package manifest (package.json, requirements.txt, conanfile.txt, etc.), it is impossible to verify that all dependencies are up-to-date and free from known vulnerabilities. Fix: Create and maintain explicit dependency files (e.g., conanfile.txt for C++ packages). Implement automated vulnerability scanning using tools like OWASP Dependency-Check, Snyk, or Conan security scanning. Include dependency updates in the CI/CD pipeline.
- Medium · Potential Input Validation Gaps in Protocol Handlers — src/ (protocol handling), docs/tutorial-10-user_defined_protocol.md. The framework implements custom protocol handlers (tutorial-10-user_defined_protocol.md) and message parsing (about-tlv-message.md). Without a visible input-validation code review, custom protocol implementations may be vulnerable to buffer overflows, format string attacks, or injection attacks. Fix: Implement strict input validation for all custom protocol handlers. Use fuzzing tools to test protocol parsing. Enforce bounds checking and use safe APIs for buffer operations. Document security requirements for protocol implementations.
- Low · CI/CD Configuration Visibility — .github/workflows/ci.yml, .github/workflows/xmake.yml. GitHub Actions workflows are present but their content was not provided for review. They could potentially contain exposed secrets, insecure build steps, or missing security checks. Fix: Review CI/CD pipelines to ensure: (1) no secrets hardcoded in workflows, (2) GitHub Secrets used for sensitive data, (3) code scanning enabled (CodeQL, SAST tools), (4) dependency checking implemented, (5) container images and releases signed.
- Low · Dual License Complexity — LICENSE, LICENSE_GPLV2. The project includes both Apache 2.0 and GPLv2 licenses, which may create license compatibility issues and unclear terms for downstream users. Fix: Clarify the licensing strategy: choose one primary license, or explicitly document which files fall under which license. Update documentation to make license selection clear to users.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.