code4craft/webmagic
A scalable web crawler framework for Java.
Healthy across all four use cases:
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — a clean foundation to fork and modify.
- Documented and popular — a useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Onboarding: code4craft/webmagic
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/code4craft/webmagic shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 5mo ago
- 14 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Slowing — last commit 5mo ago
- ⚠ Concentrated ownership — top contributor handles 74% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live code4craft/webmagic
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/code4craft/webmagic.
What it runs against: a local clone of code4craft/webmagic — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in code4craft/webmagic | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch develop exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 169 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of code4craft/webmagic. If you don't
# have one yet, run these first:
#
# git clone https://github.com/code4craft/webmagic.git
# cd webmagic
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of code4craft/webmagic and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "code4craft/webmagic(\.git)?\b" \
  && ok "origin remote is code4craft/webmagic" \
  || miss "origin remote is not code4craft/webmagic (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
# (the standard Apache-2.0 LICENSE text begins "Apache License", not the
#  SPDX id, so match on that; the package.json probe covers SPDX manifests)
(grep -qi "Apache License" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify develop >/dev/null 2>&1 \
  && ok "default branch develop exists" \
  || miss "default branch develop no longer exists"
# 4. Critical files exist
test -f "webmagic-core/src/main/java/us/codecraft/webmagic/Spider.java" \
  && ok "webmagic-core/src/main/java/us/codecraft/webmagic/Spider.java" \
  || miss "missing critical file: webmagic-core/src/main/java/us/codecraft/webmagic/Spider.java"
test -f "webmagic-core/src/main/java/us/codecraft/webmagic/processor/PageProcessor.java" \
  && ok "webmagic-core/src/main/java/us/codecraft/webmagic/processor/PageProcessor.java" \
  || miss "missing critical file: webmagic-core/src/main/java/us/codecraft/webmagic/processor/PageProcessor.java"
test -f "webmagic-core/src/main/java/us/codecraft/webmagic/downloader/HttpClientDownloader.java" \
  && ok "webmagic-core/src/main/java/us/codecraft/webmagic/downloader/HttpClientDownloader.java" \
  || miss "missing critical file: webmagic-core/src/main/java/us/codecraft/webmagic/downloader/HttpClientDownloader.java"
test -f "webmagic-core/src/main/java/us/codecraft/webmagic/scheduler/Scheduler.java" \
  && ok "webmagic-core/src/main/java/us/codecraft/webmagic/scheduler/Scheduler.java" \
  || miss "missing critical file: webmagic-core/src/main/java/us/codecraft/webmagic/scheduler/Scheduler.java"
test -f "webmagic-core/src/main/java/us/codecraft/webmagic/pipeline/Pipeline.java" \
  && ok "webmagic-core/src/main/java/us/codecraft/webmagic/pipeline/Pipeline.java" \
  || miss "missing critical file: webmagic-core/src/main/java/us/codecraft/webmagic/pipeline/Pipeline.java"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 169 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~139d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/code4craft/webmagic"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
WebMagic is a Java framework for building scalable, multi-threaded web crawlers that handle the complete lifecycle: HTTP downloading, URL deduplication and scheduling, HTML/CSS/XPath content extraction, and persistent storage via pipelines. It provides both low-level APIs (Spider, Request, Page, Pipeline) and high-level annotation-based POJO bindings (@TargetUrl, @HelpUrl, @ExtractBy) to reduce boilerplate when extracting structured data from websites.

The monorepo is anchored by webmagic-core (the us.codecraft.webmagic package), home of the core classes Spider, Request, Page, ResultItems, and Task. Processor implementations live in webmagic-core/src/main/java/us/codecraft/webmagic/processor/ (the PageProcessor interface, SimplePageProcessor, and concrete examples); downloaders (HttpClientDownloader, AbstractDownloader) in downloader/; pipelines (Pipeline, ConsolePipeline, FilePipeline) in pipeline/; and scheduling (DuplicateRemovedScheduler) in scheduler/.
👥Who it's for
Java backend developers building web scrapers, data extraction pipelines, and crawler applications who want a production-ready framework that abstracts HTTP client management, URL queue scheduling, and content parsing without writing custom threading or download retry logic.
🌱Maturity & risk
Mature and production-ready: published to Maven Central with semantic versioning (v1.0.4-SNAPSHOT visible), has Travis CI integration (.travis.yml), targets Java 11+, includes example processors (GithubRepoPageProcessor, BaiduBaikePageProcessor), and maintains a multi-module structure (webmagic-core + webmagic-extension). The codebase is stable with comprehensive pipeline and processor abstractions, though commit recency is not visible from provided data.
Moderate risk: effectively a single-maintainer project (the code4craft org is one person) with a moderately heavy dependency chain (HttpClient 4.5.13, HtmlCleaner, Guava 32.0, Jedis for Redis). The Java 11+ requirement may cause friction in older environments. No evidence of an active security-patching cadence was provided, and no dependency-vulnerability scanning tool is visible in the CI config.
Active areas of work
No recent-activity timeline is visible from the provided data, but pom.xml pins dependencies to recent versions (Log4j2 2.23.1, JUnit 5.10.2, Guava 32.0.0), suggesting ongoing maintenance. The -SNAPSHOT suffix on v1.0.4 indicates an active development branch, but specific PRs, issues, or milestones are not visible in the file list.
🚀Get running
git clone https://github.com/code4craft/webmagic.git
cd webmagic
mvn clean install
# Or use Maven directly in webmagic-core submodule
cd webmagic-core
mvn package
Daily commands:
# Build the framework
mvn clean package
# Run an example crawler (from GithubRepoPageProcessor main method)
cd webmagic-core
mvn exec:java -Dexec.mainClass="us.codecraft.webmagic.processor.example.GithubRepoPageProcessor"
# Or instantiate in your own code:
# Spider.create(new MyPageProcessor()).addUrl("http://example.com").thread(5).run();
🗺️Map of the codebase
- webmagic-core/src/main/java/us/codecraft/webmagic/Spider.java — Main orchestrator class that coordinates the entire crawling lifecycle; every contributor must understand how requests flow through the spider.
- webmagic-core/src/main/java/us/codecraft/webmagic/processor/PageProcessor.java — Core abstraction for page-processing logic; the essential interface users implement to define extraction and URL-routing behavior.
- webmagic-core/src/main/java/us/codecraft/webmagic/downloader/HttpClientDownloader.java — HTTP fetching implementation using Apache HttpClient; critical for request execution and network concerns.
- webmagic-core/src/main/java/us/codecraft/webmagic/scheduler/Scheduler.java — Abstract interface for URL queue management and duplicate removal; foundational for controlling crawl scope and preventing infinite loops.
- webmagic-core/src/main/java/us/codecraft/webmagic/pipeline/Pipeline.java — Abstract interface for result persistence; the critical extension point where extracted data is stored or processed.
- webmagic-core/src/main/java/us/codecraft/webmagic/selector/Selectable.java — Fluent API for HTML/JSON extraction supporting CSS, XPath, regex, and JsonPath; the core data-extraction abstraction.
- webmagic-core/pom.xml — Maven configuration defining module structure, dependencies (HttpClient, jsoup, xpath), and build properties for the entire framework.
🛠️How to make changes
Add a Custom PageProcessor
- Create a new class implementing the PageProcessor interface in your own package (webmagic-core/src/main/java/us/codecraft/webmagic/processor/PageProcessor.java).
- Implement process(Page page) to extract data via page.getHtml().css()/xpath()/regex() and record results with page.putField(key, value) (webmagic-core/src/main/java/us/codecraft/webmagic/processor/example/GithubRepoPageProcessor.java).
- Implement getSite() to return a Site configured with headers, cookies, retry, and sleep settings; seed URLs are added on the Spider via addUrl() (webmagic-core/src/main/java/us/codecraft/webmagic/Site.java).
- Generate new crawl requests via page.addTargetRequest(url) or page.addTargetRequests(urls) to discover more pages, as shown in the sketch below (webmagic-core/src/main/java/us/codecraft/webmagic/Request.java).
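For concreteness, here is a minimal runnable sketch of a custom processor wired into a Spider, modeled on the bundled GithubRepoPageProcessor. The class name, seed URL, and XPath expressions are illustrative placeholders, not part of the repo:

```java
import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.processor.PageProcessor;

public class MyPageProcessor implements PageProcessor {

    // Per-processor crawl settings: retries, politeness delay, user agent.
    private final Site site = Site.me()
            .setRetryTimes(3)
            .setSleepTime(1000)
            .setUserAgent("Mozilla/5.0 (compatible; MyCrawler/1.0)");

    @Override
    public void process(Page page) {
        // Discover more pages matching a (hypothetical) article URL pattern.
        page.addTargetRequests(page.getHtml().links()
                .regex("https://example\\.com/articles/\\d+").all());
        // Extracted fields flow into ResultItems and then into the pipelines.
        page.putField("title", page.getHtml().xpath("//h1/text()").toString());
        page.putField("url", page.getUrl().toString());
    }

    @Override
    public Site getSite() {
        return site;
    }

    public static void main(String[] args) {
        Spider.create(new MyPageProcessor())
                .addUrl("https://example.com/articles") // seed URL
                .thread(5)  // five worker threads pulling from the shared queue
                .run();     // blocks until the request queue drains
    }
}
```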
Add a Custom Pipeline for Result Storage
- Create a class implementing the Pipeline interface (webmagic-core/src/main/java/us/codecraft/webmagic/pipeline/Pipeline.java).
- Implement process(ResultItems resultItems, Task task), reading extracted fields via resultItems.get(key) and storing them in your database or service (webmagic-core/src/main/java/us/codecraft/webmagic/ResultItems.java).
- Register the pipeline on the spider: spider.addPipeline(myPipeline), as in the sketch below (webmagic-core/src/main/java/us/codecraft/webmagic/Spider.java).
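A sketch of that contract (hypothetical class name; ResultItems.isSkip() and getAll() match the core sources as far as this editor knows, but verify the signatures before building on them):

```java
import java.util.Map;

import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

public class MyStoragePipeline implements Pipeline {

    @Override
    public void process(ResultItems resultItems, Task task) {
        // Honor pages the processor marked as skipped.
        if (resultItems.isSkip()) {
            return;
        }
        // getAll() exposes every field recorded via page.putField().
        for (Map.Entry<String, Object> entry : resultItems.getAll().entrySet()) {
            // task.getUUID() identifies the owning crawl job; swap the
            // println for your real DAO or service client.
            System.out.printf("[%s] %s = %s%n",
                    task.getUUID(), entry.getKey(), entry.getValue());
        }
    }
}
```

Register it with spider.addPipeline(new MyStoragePipeline()); pipelines run in registration order for each processed page.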
Customize URL Scheduling with Duplicate Removal
- Instantiate a DuplicateRemovedScheduler subclass such as QueueScheduler or PriorityScheduler (webmagic-core/src/main/java/us/codecraft/webmagic/scheduler/DuplicateRemovedScheduler.java).
- Optionally inject a custom DuplicateRemover implementation to control URL-deduplication logic (webmagic-core/src/main/java/us/codecraft/webmagic/scheduler/component/DuplicateRemover.java).
- Set the scheduler on the spider: spider.setScheduler(scheduler), as in the sketch below (webmagic-core/src/main/java/us/codecraft/webmagic/Spider.java).
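Wiring sketch: QueueScheduler extends DuplicateRemovedScheduler, so the deduplication strategy can be swapped independently of the queueing behavior. MyPageProcessor is the hypothetical processor sketched earlier:

```java
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.scheduler.QueueScheduler;
import us.codecraft.webmagic.scheduler.component.HashSetDuplicateRemover;

public class SchedulerWiring {
    public static void main(String[] args) {
        Spider.create(new MyPageProcessor())
              // HashSetDuplicateRemover is the simple in-memory strategy;
              // replace it with a custom DuplicateRemover to, e.g.,
              // normalize URLs before comparison.
              .setScheduler(new QueueScheduler()
                      .setDuplicateRemover(new HashSetDuplicateRemover()))
              .addUrl("https://example.com/articles")
              .run();
    }
}
```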
Configure Proxy Support for Requests
- Create a ProxyProvider implementation, or use SimpleProxyProvider with a list of Proxy objects (webmagic-core/src/main/java/us/codecraft/webmagic/proxy/SimpleProxyProvider.java).
- Attach the proxy provider to the request path; the artifact points at the Site configuration, site.setProxyProvider(proxyProvider) (webmagic-core/src/main/java/us/codecraft/webmagic/Site.java).
- HttpClientDownloader applies available proxies automatically during request execution, as sketched below (webmagic-core/src/main/java/us/codecraft/webmagic/downloader/HttpClientDownloader.java).
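The middle step follows the artifact's site.setProxyProvider() wording; in the WebMagic sources this editor knows, the provider is attached to HttpClientDownloader instead, so the sketch below takes that route. Verify against your checkout; proxy endpoints are placeholders:

```java
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.downloader.HttpClientDownloader;
import us.codecraft.webmagic.proxy.Proxy;
import us.codecraft.webmagic.proxy.SimpleProxyProvider;

public class ProxyWiring {
    public static void main(String[] args) {
        // SimpleProxyProvider cycles through the given proxies; it performs
        // no dead-proxy detection (see Traps & gotchas below).
        HttpClientDownloader downloader = new HttpClientDownloader();
        downloader.setProxyProvider(SimpleProxyProvider.from(
                new Proxy("127.0.0.1", 1087),
                new Proxy("127.0.0.1", 1088)));
        Spider.create(new MyPageProcessor())
              .setDownloader(downloader)
              .addUrl("https://example.com/articles")
              .run();
    }
}
```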
🪤Traps & gotchas
- XPath/regex extraction: Page uses XPath by default for HTML parsing (via HtmlCleaner); be careful with tidyText() — it merges nested text and may produce unexpected results.
- Threading model: Spider.thread(N) creates N worker threads that are not shut down automatically when the rest of your application finishes; call Spider.stop() explicitly to prevent hangs (see the sketch below).
- Site configuration must match: the Site.me() returned by PageProcessor.getSite() is per-processor, not global; different processors can carry conflicting retry/sleep settings.
- Proxy configuration: SimpleProxyProvider requires manual proxy cycling; there is no built-in IP rotation or dead-proxy detection.
- HTTP client pooling: HttpClientDownloader reuses a static connection pool; concurrent-modification errors can occur if you change the proxy provider after Spider.start().
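If the threading trap bites, the fix is an explicit life-cycle: start() is the non-blocking counterpart of run(), and stop() asks the workers to wind down. A sketch, with the sleep standing in for your application's real work:

```java
import us.codecraft.webmagic.Spider;

public class GracefulShutdown {
    public static void main(String[] args) throws InterruptedException {
        Spider spider = Spider.create(new MyPageProcessor())
                .addUrl("https://example.com/articles")
                .thread(5);
        spider.start();         // non-blocking, unlike run()
        Thread.sleep(60_000);   // ... the rest of your application runs ...
        spider.stop();          // signal workers to stop; prevents hangs
    }
}
```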
🏗️Architecture
💡Concepts to learn
- URL deduplication and frontier queue — DuplicateRemovedScheduler and SpiderScheduler prevent revisiting URLs and manage crawl order; understanding this is essential for avoiding infinite loops and optimizing crawl efficiency
- Pipeline pattern for result processing — WebMagic's Pipeline interface enables chaining multiple post-extraction handlers (save to file, database, console) without modifying PageProcessor; critical for extensibility
- XPath selectors for DOM traversal — Page.getHtml().xpath() is the primary extraction API; XPath expressions are used in every example and are more powerful than CSS selectors for complex nested structures
- HTTP connection pooling and retry strategy — HttpClientDownloader manages a pool of reusable connections and implements exponential backoff via Site.setRetryTimes(); essential for handling rate limiting and network failures at scale
- Producer-consumer threading model — Spider uses thread pools where threads pull URLs from a shared queue and enqueue results; understanding thread safety and blocking queues is crucial for tuning Spider.thread(N)
- Proxy chaining and user-agent rotation — Site.setHttpProxyProvider() and custom User-Agent headers prevent blocking; ProxyProvider abstraction allows SimpleProxyProvider or custom rotation logic
- Annotation-driven configuration (reflection-based POJO extraction) — @TargetUrl, @HelpUrl, and @ExtractBy annotations reduce boilerplate; the framework uses reflection to instantiate and populate POJOs, trading performance for developer ergonomics (sketched below)
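A sketch of the annotation model, which lives in webmagic-extension (note the sources use @ExtractBy rather than a bare @Extract). The URL and XPath values mirror the project's documented GithubRepo example; verify before reuse:

```java
import us.codecraft.webmagic.model.annotation.ExtractBy;
import us.codecraft.webmagic.model.annotation.ExtractByUrl;
import us.codecraft.webmagic.model.annotation.HelpUrl;
import us.codecraft.webmagic.model.annotation.TargetUrl;

// Pages matching @HelpUrl are crawled only to discover @TargetUrl pages;
// fields are populated reflectively when a @TargetUrl page is processed.
@HelpUrl("https://github.com/\\w+")
@TargetUrl("https://github.com/\\w+/\\w+")
public class GithubRepo {

    @ExtractBy("//h1[@class='entry-title public']/strong/a/text()")
    private String name;

    @ExtractByUrl("https://github\\.com/(\\w+)/.*")
    private String author;
}
```

OOSpider (webmagic-extension) drives this model; the usage pattern documented by the project is OOSpider.create(Site.me().setSleepTime(1000), new ConsolePageModelPipeline(), GithubRepo.class).addUrl("https://github.com/code4craft").thread(5).run().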
🔗Related repos
- HalfMoon-io/spider — Alternative Java web-scraping framework; simpler API but fewer production features such as distributed scheduling
- javalanche/crawler4j — Java crawler library focused on large-scale distributed crawling with a Frontier, though older and less actively maintained than WebMagic
- scrapy/scrapy — Python equivalent with a similar pipeline/spider/scheduler architecture; useful for comparing design patterns if migrating from Python
- code4craft/xsoup — Companion library by the same author; provides the XPath/CSS selector wrappers (Selectable interface) used throughout WebMagic for content extraction
- redis/jedis — Optional dependency used by webmagic-extension for distributed URL deduplication and scheduling via Redis instead of in-memory queues
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive unit tests for selector module (Html, Json, XPath, Regex selectors)
The selector package (webmagic-core/src/main/java/us/codecraft/webmagic/selector/) is core to content extraction but lacks visible test coverage. Given the complexity of HTML/JSON parsing and the multiple selector types (CssSelector, JsonPathSelector, etc.), comprehensive tests would prevent regressions and document expected behavior for HTML edge cases, malformed JSON, and selector chaining. A starter test shape is sketched after the checklist below.
- [ ] Create webmagic-core/src/test/java/us/codecraft/webmagic/selector/ directory structure
- [ ] Add HtmlSelectorTest.java testing Html, CssSelector, RegexSelector with sample HTML documents
- [ ] Add JsonSelectorTest.java testing JsonPathSelector with complex JSON structures and edge cases
- [ ] Add AbstractSelectableTest.java testing selector chaining and composition patterns
- [ ] Add integration test for BaseElementSelector with real DOM parsing scenarios
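As a starting shape for the first checklist item — JUnit 5 is already pinned in the pom; the Html(String) constructor and the css(selector, attribute) overload are assumptions to verify against the selector sources:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

import us.codecraft.webmagic.selector.Html;

class HtmlSelectorSketchTest {

    @Test
    void cssAndXPathExtractTheSameTitle() {
        Html html = new Html(
                "<html><body><h1 class='t'>WebMagic</h1></body></html>");
        // The same value should be reachable through both selector families.
        assertEquals("WebMagic", html.xpath("//h1/text()").toString());
        assertEquals("WebMagic", html.css("h1.t", "text").toString());
    }
}
```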
Add unit tests for scheduler component (DuplicateRemover, DuplicateRemovedScheduler, PriorityScheduler)
The scheduler package handles critical crawler logic for URL deduplication and prioritization (files: DuplicateRemovedScheduler.java, PriorityScheduler.java, HashSetDuplicateRemover.java). Without visible tests, it's unclear how duplicate detection handles edge cases, URL normalization, or priority ordering. This is essential for a distributed crawler framework.
- [ ] Create webmagic-core/src/test/java/us/codecraft/webmagic/scheduler/ directory
- [ ] Add DuplicateRemovedSchedulerTest.java testing duplicate URL detection with various URL formats and normalizations
- [ ] Add PrioritySchedulerTest.java testing request prioritization and ordering logic
- [ ] Add HashSetDuplicateRemoverTest.java testing concurrent access and memory efficiency
- [ ] Add MonitorableSchedulerTest.java testing queue statistics and monitoring functionality
Add unit tests for downloader module (HttpClientDownloader, CustomRedirectStrategy, HttpClientGenerator)
The downloader package (HttpClientDownloader.java, CustomRedirectStrategy.java, HttpClientGenerator.java) handles HTTP requests and redirect logic, which are critical for reliability and security. Tests should cover redirect handling, proxy support, SSL/TLS scenarios, and connection pooling behavior to prevent production issues.
- [ ] Create webmagic-core/src/test/java/us/codecraft/webmagic/downloader/ directory
- [ ] Add HttpClientDownloaderTest.java with Moco (already in dependencies) to mock HTTP servers and test redirects, status codes, and headers
- [ ] Add CustomRedirectStrategyTest.java testing 301/302/303/307 redirect chains and circular redirect detection
- [ ] Add HttpClientGeneratorTest.java testing SSL configuration, proxy setup, and connection timeout behavior
- [ ] Add AbstractDownloaderTest.java testing cookie handling, user-agent rotation, and request header injection
🌿Good first issues
- Add unit tests for webmagic-core/src/main/java/us/codecraft/webmagic/model/HttpRequestBody.java — it's a data model with no visible test coverage; contribute MockRequest/MockResponse builders to webmagic-core/src/test/java/.
- Implement a JsonPipeline class in webmagic-core/src/main/java/us/codecraft/webmagic/pipeline/ (similar to ConsolePipeline) that serializes ResultItems to JSON via Fastjson 2.0.19; add a test and example usage (a starting sketch follows this list).
- Refactor the BaiduBaikePageProcessor, GithubRepoPageProcessor, and ZhihuPageProcessor examples to use the annotation-based @TargetUrl/@HelpUrl/@ExtractBy approach instead of manual page.addTargetRequests() — this will demonstrate and exercise the annotation-processor module.
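A starting sketch for the JsonPipeline idea above; the class is hypothetical, and the JSON import depends on which Fastjson artifact the pom actually resolves (com.alibaba.fastjson2 for the 2.x line, com.alibaba.fastjson for the compat artifact):

```java
import com.alibaba.fastjson2.JSON; // or com.alibaba.fastjson.JSON — see note above

import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

// Emits one JSON object per processed page — easy to pipe into jq or a
// log collector. Modeled on ConsolePipeline's structure.
public class JsonPipeline implements Pipeline {

    @Override
    public void process(ResultItems resultItems, Task task) {
        if (resultItems.isSkip()) {
            return;
        }
        // Fields come from page.putField() in the processor.
        System.out.println(JSON.toJSONString(resultItems.getAll()));
    }
}
```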
⭐Top contributors
- @sutra — 74 commits
- @code4craft — 5 commits
- @dependabot[bot] — 3 commits
- @FrancoisGib — 3 commits
- @ayushi250317 — 3 commits
📝Recent commits
- 67816a1 — Bump org.jruby:jruby from 9.3.9.0 to 9.4.12.1 (#1192) (dependabot[bot])
- eed2b44 — Upgrade parent from 2.2.1 to 2.3.1. (sutra)
- d877fa5 — Fix us.codecraft.webmagic.scheduler.FileCacheQueueScheduler#readCursorFile, refs #1175. (sutra)
- 36533c7 — Lookup parent from repository. (sutra)
- e2cc740 — Update for next development version (sutra)
- c580acb — Merge tag 'WebMagic-1.0.3' into develop (sutra)
- cfae008 — Merge branch 'release/1.0.3' (sutra)
- f4a8825 — Update versions for release (sutra)
- 7efc987 — Update LICENSE, fix license year (#1186) (JasonnnW3000)
- 42a1727 — Remove useless modifier. (sutra)
🔒Security observations
- High · Outdated and vulnerable HttpClient dependency — pom.xml: httpclient.version=4.5.13, httpcore.version=4.4.15. The project uses Apache HttpClient 4.5.13 and HttpCore 4.4.15, old versions with known security vulnerabilities; HttpClient 4.5.x has reached end-of-life and contains multiple CVEs, including SSL/TLS issues and request-smuggling vulnerabilities. Fix: upgrade to the HttpClient 5.x series (e.g., 5.2.x) and HttpCore 5.x. This is critical for a web crawler framework that handles HTTP requests.
- High · Vulnerable FastJSON dependency — pom.xml: fastjson.version=2.0.19.graal. FastJSON 2.0.19.graal has known deserialization vulnerabilities; FastJSON is notoriously vulnerable to arbitrary code execution through gadget chains during JSON deserialization. Fix: upgrade to the latest stable FastJSON version and consider safer alternatives such as Jackson or Gson; implement strict input validation and disable dangerous features like autotype.
- High · Log4j2 dependency needs verification — pom.xml: log4j2.version=2.23.1. Log4j2 2.23.1 is relatively recent, but the project should verify it addresses all known CVEs; earlier versions had critical vulnerabilities (CVE-2021-44228, CVE-2021-45046). Fix: verify the Log4j2 version is up to date with all patches applied, and monitor security advisories.
- Medium · Outdated Groovy dependency — pom.xml: groovy-all.version=3.0.13. Groovy 3.0.13 is outdated, and Groovy is often used for dynamic code execution/scripting; older versions may contain vulnerabilities. Fix: upgrade to the latest Groovy 3.0.x patch, or 4.x if compatible, and verify no remote-code-execution vectors exist through Groovy script execution.
- Medium · JRuby dependency — pom.xml: jruby.version=9.4.12.1, webmagic-core/src/main/java/us/codecraft/webmagic. JRuby is a scripting engine that can execute arbitrary code; if user input reaches the interpreter without sanitization, it could lead to remote code execution. Fix: evaluate whether JRuby is necessary; if used, enforce strict sandboxing and never pass user input directly to the interpreter. Consider removing it if not essential.
- Medium · Vulnerable Jython dependency — pom.xml: jython.version=2.7.3. Jython 2.7.3 is an outdated Python implementation for Java; it can execute arbitrary Python code, has known security vulnerabilities, and its development has stalled. Fix: remove the Jython dependency if not actively used; if required, move to a maintained Python execution framework or use subprocess isolation.
- Medium · Outdated Jedis Redis client — pom.xml: jedis.version=3.7.1. Jedis 3.7.1 is outdated; for a crawler that may persist data to Redis, an old client could expose the application to vulnerabilities. Fix: upgrade Jedis to the latest stable version (4.x or 5.x) and ensure Redis connections use authentication and encryption.
- Medium · Potential SSRF/URL injection in a web crawler — webmagic-core/src/main/java/us/codecraft/webmagic/downloader/HttpClientDownloader.java, webmagic-core/src. As a web crawler framework, the HttpClientDownloader, Request, and Site classes likely handle user-supplied URLs without sufficient validation, which could enable Server-Side Request Forgery (SSRF), allowing attackers to scan internal networks or access restricted resources.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.