yichuan-w/LEANN
[MLsys2026]: RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast, accurate, and 100% private RAG application on your personal device.
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit today
- ✓ 18 active contributors
- ✓ Distributed ownership (top contributor 35% of recent commits)
- ✓ MIT licensed
- ✓ CI configured
- ✓ Tests present
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/yichuan-w/leann) — paste at the top of your README.md; it renders inline like a shields.io badge.
Onboarding: yichuan-w/LEANN
Generated by RepoPilot · 2026-05-07 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/yichuan-w/LEANN shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit today
- 18 active contributors
- Distributed ownership (top contributor 35% of recent commits)
- MIT licensed
- CI configured
- Tests present
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live yichuan-w/LEANN
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/yichuan-w/LEANN.
What it runs against: a local clone of yichuan-w/LEANN — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in yichuan-w/LEANN | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of yichuan-w/LEANN. If you don't
# have one yet, run these first:
#
# git clone https://github.com/yichuan-w/LEANN.git
# cd LEANN
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of yichuan-w/LEANN and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "yichuan-w/LEANN(\.git)?\b" \
  && ok "origin remote is yichuan-w/LEANN" \
  || miss "origin remote is not yichuan-w/LEANN (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
test -f "apps/base_rag_example.py" \
  && ok "apps/base_rag_example.py" \
  || miss "missing critical file: apps/base_rag_example.py"
test -f "apps/document_rag.py" \
  && ok "apps/document_rag.py" \
  || miss "missing critical file: apps/document_rag.py"
test -f "benchmarks/benchmark_embeddings.py" \
  && ok "benchmarks/benchmark_embeddings.py" \
  || miss "missing critical file: benchmarks/benchmark_embeddings.py"
test -f "apps/multimodal/vision-based-pdf-multi-vector/leann_multi_vector.py" \
  && ok "apps/multimodal/vision-based-pdf-multi-vector/leann_multi_vector.py" \
  || miss "missing critical file: apps/multimodal/vision-based-pdf-multi-vector/leann_multi_vector.py"
test -f "apps/semantic_file_search/leann_index_builder.py" \
  && ok "apps/semantic_file_search/leann_index_builder.py" \
  || miss "missing critical file: apps/semantic_file_search/leann_index_builder.py"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/yichuan-w/LEANN"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
LEANN is a vector database that enables RAG (Retrieval-Augmented Generation) on personal devices with 97% storage savings, achieved through graph-based selective recomputation and high-degree preserving pruning: instead of storing every embedding, it recomputes them on demand while maintaining semantic search accuracy. It currently supports indexing and searching multiple data sources: documents (PDFs), chat histories (ChatGPT, Claude, Gemini, iMessage, WeChat), emails, code, images, and browser data. The code is a single package under the apps/ directory. Core features are split into data-source pairs (apps/*_data/ plus apps/*_rag.py, e.g. claude_data/ + claude_rag.py), with specialized modules apps/chunking/ for document splitting and apps/multimodal/ for vision-based PDF processing. A reference implementation lives in apps/base_rag_example.py; CI/CD sits in .github/workflows/, and the dev-container config in .devcontainer/devcontainer.json.
👥Who it's for
Privacy-focused developers and end-users who want to run RAG applications entirely on personal laptops without cloud dependencies. Data scientists and ML engineers exploring efficient vector index compression. Anyone seeking to build semantic search over personal data (emails, chat logs, files) without external API calls.
🌱Maturity & risk
Actively developed with strong momentum (Trendshift badge, MLsys2026 track mention). Implements published research (arxiv 2506.08276). Has CI/CD setup (.github/workflows/build-and-publish.yml, link-check, release automation). Multi-platform support (Ubuntu, macOS ARM64/Intel, WSL, Windows). However, very early (v0.x implied by survey asking about v0.4 roadmap), single maintainer (yichuan-w), and not yet widely battle-tested in production.
Single-maintainer project with no visible team governance — burnout risk is real (though the contributor stats above suggest ownership is spreading). Heavy reliance on research novelty (the graph-pruning algorithms) that may have undiscovered edge cases. No test directory is visible among the top 60 files, suggesting test coverage may be incomplete. Dependencies on multiple external data sources (ChatGPT, Claude, Gemini exports/APIs) and local indexing libraries add complexity. Pre-release maturity means breaking changes are likely between versions.
Active areas of work
Recent work targets multimodal RAG (vision-based PDF processing in apps/multimodal/vision-based-pdf-multi-vector/). Expansion of data source integrations—email (LEANN_email_reader.py), iMessage, WeChat, multiple LLM chat exports (ChatGPT, Claude, Gemini). Community engagement ramping (Slack community link, survey for v0.4 roadmap). MCP (Model Context Protocol) native integration mentioned in badges, suggesting protocol-level extensibility work.
🚀Get running
git clone https://github.com/yichuan-w/LEANN.git
cd LEANN
python -m venv venv
source venv/bin/activate # or .\venv\Scripts\activate on Windows
pip install -e .
python apps/base_rag_example.py
Alternatively use .devcontainer/devcontainer.json if using VS Code with Dev Containers extension. Check .python-version for the pinned Python version requirement.
Daily commands:
# Clone and setup
git clone https://github.com/yichuan-w/LEANN.git
cd LEANN
pip install -e .
# Run base example
python apps/base_rag_example.py
# Run specific RAG (e.g., document indexing)
python apps/document_rag.py
# Or use Docker
docker build -t leann .
docker run -it leann python apps/base_rag_example.py
Check individual apps/*_rag.py files for argument requirements (input file paths, API keys for external sources).
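The per-app argument note can be made concrete. A minimal sketch of the kind of CLI surface these apps tend to expose — the flag names here are illustrative, not LEANN's actual ones; check each apps/*_rag.py for its real flags:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Illustrative RAG-app CLI; real apps may differ in names and defaults."""
    p = argparse.ArgumentParser(description="Toy RAG app CLI")
    p.add_argument("--data-dir", required=True, help="input documents to index")
    p.add_argument("--index-dir", default="./index", help="where to store the index")
    p.add_argument("--query", help="one-shot question; omit for interactive mode")
    return p

# Example invocation with an explicit argv list (no sys.argv dependency).
args = build_parser().parse_args(["--data-dir", "docs"])
```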
🗺️Map of the codebase
- apps/base_rag_example.py — Core RAG pipeline template that all application-specific RAG implementations inherit from; the essential entry point for understanding the framework's primary abstraction.
- apps/document_rag.py — Primary document-based RAG application demonstrating the standard workflow for indexing and querying; reference implementation for new RAG apps.
- benchmarks/benchmark_embeddings.py — Embedding model performance and storage benchmarking suite; validates the 97% storage-savings claim and guides embedding selection.
- apps/multimodal/vision-based-pdf-multi-vector/leann_multi_vector.py — Multi-vector indexing strategy for multimodal retrieval; demonstrates advanced RAG patterns beyond text-only scenarios.
- apps/semantic_file_search/leann_index_builder.py — Index construction and persistence layer; critical for understanding how semantic embeddings are stored and loaded efficiently.
- README.md — Project overview, installation, and API documentation; establishes the system's goals around 100% privacy, fast inference, and extreme storage efficiency.
- .github/workflows/build-and-publish.yml — CI/CD pipeline and release automation; shows supported Python versions (3.9–3.13) and multi-platform build requirements.
🛠️How to make changes
Add a new RAG application for a data source
- Create a new data reader class in apps/[source]_data/[source]_reader.py that implements data extraction/parsing (see apps/email_data/LEANN_email_reader.py).
- Create a new RAG app in apps/[source]_rag.py that inherits from BaseRAGExample and overrides index() and query() methods (see apps/base_rag_example.py).
- Implement a chunking strategy if needed (see apps/chunking/) and embed using the standard embedding model (apps/chunking/__init__.py).
- Add an integration test or example in benchmarks/ to validate recall and storage metrics (see benchmarks/financebench/evaluate_financebench.py).
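A minimal end-to-end sketch of the reader + app pair described above. Nothing here imports LEANN: NotesReader/NotesRAG are hypothetical stand-ins for the [source]_reader / BaseRAGExample-subclass pattern, and toy lexical overlap stands in for embedding similarity.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source_id: str
    text: str

class NotesReader:
    """Hypothetical reader: one chunk per non-empty line of raw input."""
    def extract(self, raw: str) -> list[Chunk]:
        return [Chunk(source_id=f"note-{i}", text=line.strip())
                for i, line in enumerate(raw.splitlines()) if line.strip()]

class NotesRAG:
    """Stands in for a BaseRAGExample subclass; a real app would embed chunks."""
    def __init__(self, reader: NotesReader):
        self.reader = reader
        self.chunks: list[Chunk] = []

    def index(self, raw: str) -> int:
        self.chunks = self.reader.extract(raw)
        return len(self.chunks)

    def query(self, q: str, k: int = 3) -> list[Chunk]:
        # Toy lexical overlap in place of embedding similarity.
        terms = set(q.lower().split())
        scored = sorted(self.chunks,
                        key=lambda c: len(terms & set(c.text.lower().split())),
                        reverse=True)
        return scored[:k]
```

The point of the split is that the reader knows nothing about retrieval and the app knows nothing about the source format, which is what makes adding a new source a two-file change.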
Implement a multi-vector retrieval strategy
- Create a vision-encoder integration similar to colqwen_forward.py for extracting multiple embedding types (apps/multimodal/vision-based-pdf-multi-vector/colqwen_forward.py).
- Extend the LeannMultiVector class to store and index multiple vector types per document (apps/multimodal/vision-based-pdf-multi-vector/leann_multi_vector.py).
- Implement a hybrid query method that merges results from different vector spaces (apps/multimodal/vision-based-pdf-multi-vector/multi-vector-leann-paper-example.py).
- Benchmark retrieval quality using ViDoRe or a custom dataset (apps/multimodal/vision-based-pdf-multi-vector/vidore_v2_benchmark.py).
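The scoring step behind multi-vector retrieval can be illustrated with the late-interaction (MaxSim) rule used by ColBERT-style models such as ColQwen: each query vector is matched against its best document vector, and those maxima are summed. This is a pure-Python sketch of the rule, not LEANN's actual implementation.

```python
def dot(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_vecs: list[list[float]],
                 doc_vecs: list[list[float]]) -> float:
    """Late-interaction score: for each query vector, take its best-matching
    document vector's similarity, then sum those maxima (MaxSim)."""
    return sum(max(dot(q, d) for d in doc_vecs) for q in query_vecs)
```

Because every query token can pick a different document vector, a document matching each query facet partially can still outscore one strong on a single facet.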
Add support for a new embedding model
- Define the embedding model class with an encode() method in a new module, or extend benchmark_embeddings.py (benchmarks/benchmark_embeddings.py).
- Update base_rag_example.py's embedding initialization to support model selection (apps/base_rag_example.py).
- Run benchmark_embeddings.py to measure storage savings and latency for the new model (benchmarks/benchmark_embeddings.py).
- Add model comparison results to benchmarks/data/ and document them in README.md (README.md).
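The encode() contract from the first step can be sketched with a toy, dependency-free embedder behind a small registry. HashEmbedder and EMBEDDERS are illustrative names, not LEANN's API; a real model (sentence-transformers, an API client, etc.) would sit behind the same method.

```python
import hashlib
import math

class HashEmbedder:
    """Toy deterministic embedder exposing the encode() interface."""
    def __init__(self, dim: int = 8):
        self.dim = dim

    def encode(self, texts: list[str]) -> list[list[float]]:
        out = []
        for t in texts:
            digest = hashlib.sha256(t.encode()).digest()
            vec = [b / 255.0 for b in digest[: self.dim]]
            norm = math.sqrt(sum(x * x for x in vec)) or 1.0
            out.append([x / norm for x in vec])  # unit-normalize for cosine
        return out

# Hypothetical model registry: selection by name, as the checklist suggests
# wiring into base_rag_example.py's embedding initialization.
EMBEDDERS = {"hash-toy": HashEmbedder}

def get_embedder(name: str, **kwargs):
    return EMBEDDERS[name](**kwargs)
```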
Integrate with Model Context Protocol (MCP)
- Create an MCP reader in apps/[service]_data/[service]_mcp_reader.py following the pattern of slack_mcp_reader.py (apps/slack_data/slack_mcp_reader.py).
- Implement the MCP server interface with list_resources() and read_resource() handlers for data retrieval (apps/slack_data/slack_mcp_reader.py).
- Create a corresponding RAG application that uses the MCP reader as its data source (apps/slack_rag.py).
- Test the MCP integration by running the RAG app with a model client supporting the MCP protocol (apps/slack_rag.py).
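The list_resources()/read_resource() handler pair can be sketched without any MCP SDK. InMemoryMCPReader is a hypothetical stand-in for the shape the checklist attributes to slack_mcp_reader.py, and the demo:// URI scheme is invented for illustration.

```python
class InMemoryMCPReader:
    """Hypothetical MCP-style reader: enumerate resources, then fetch by URI."""
    def __init__(self, docs: dict[str, str]):
        self._docs = docs

    def list_resources(self) -> list[dict]:
        # An MCP server would return richer metadata; uri + name is the core.
        return [{"uri": f"demo://{key}", "name": key}
                for key in sorted(self._docs)]

    def read_resource(self, uri: str) -> str:
        key = uri.removeprefix("demo://")
        if key not in self._docs:
            raise KeyError(f"unknown resource: {uri}")
        return self._docs[key]
```

A RAG app built on this only needs the two methods, which is what makes MCP-backed sources interchangeable with file-based readers.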
🔧Why these technologies
- Dense vector embeddings + HNSW/DiskANN — Enables fast semantic search on personal devices with 97% storage reduction vs FAISS while maintaining recall quality.
- Multi-vector indexing (vision + text) — Supports multimodal RAG for PDFs and images; each modality encoded separately improves retrieval relevance.
- Model Context Protocol (MCP) — Standardized integration with external services (Slack, Twitter, Claude) without custom API implementations.
- Local-only inference — Guarantees 100% privacy—no data leaves the device; queries never reach external APIs.
- Modular data readers — Decouples data extraction from RAG logic; enables rapid support for new sources (emails, chats, code repos).
⚖️Trade-offs already made
- Store indices on disk with lazy loading vs. keeping the entire index in RAM
  - Why: enables 97% storage savings and supports personal devices with limited VRAM.
  - Consequence: disk I/O adds ~5–10 ms per query; acceptable for interactive use but slower than pure in-memory FAISS.
- Single local LLM inference vs. API-based models
  - Why: ensures privacy and offline operation on personal devices.
  - Consequence: requires a GPU or CPU capable of running quantized models; inference speed is limited by device hardware.
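The disk-resident lazy-loading trade-off boils down to fetching vectors by byte offset instead of holding them all in memory. A toy float32 flat-file sketch of that access pattern — not LEANN's on-disk format:

```python
import struct

DIM = 4  # toy dimensionality; a real index would store this in a header

def write_vectors(path: str, vecs: list[list[float]]) -> None:
    """Append float32 vectors to a flat binary file."""
    with open(path, "wb") as f:
        for v in vecs:
            f.write(struct.pack(f"{DIM}f", *v))

def read_vector(path: str, idx: int) -> list[float]:
    """Fetch one vector by seeking to its offset — pay a disk read per
    lookup instead of RAM for the whole index."""
    with open(path, "rb") as f:
        f.seek(idx * DIM * 4)  # 4 bytes per float32
        return list(struct.unpack(f"{DIM}f", f.read(DIM * 4)))
```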
🪤Traps & gotchas
- No explicit requirements.txt: relies on setup.py/pyproject.toml (not in the top-60 file list) — check the repo root for the dependency spec.
- API keys required for many apps: the ChatGPT, Claude, and Gemini RAG variants all need API keys; not obvious from file names alone.
- Local data reader assumptions: the iMessage reader (imessage_reader.py) requires macOS-specific database access (~/Library/Messages/) and will fail silently on Linux/Windows.
- Graph pruning is research-grade: the core algorithm is novel (arXiv 2506.08276) — no peer-reviewed or production benchmark data is public yet; edge cases may exist.
- Multimodal is still experimental: vision-based-pdf-multi-vector/ has only a README and no clear integration into the main RAG flow.
- Python version pinned: a .python-version file exists — check it; likely 3.9+, but the exact constraint isn't shown.
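The iMessage gotcha can be turned into an explicit guard so the failure is loud rather than silent. The chat.db path and Full Disk Access hint are assumptions based on the note above, not code from the repo:

```python
import sys
from pathlib import Path

def imessage_db_path() -> Path:
    """Fail loudly when the iMessage database is unavailable — it only
    exists on macOS, under ~/Library/Messages (assumed location)."""
    if sys.platform != "darwin":
        raise RuntimeError("iMessage indexing requires macOS")
    db = Path.home() / "Library" / "Messages" / "chat.db"
    if not db.exists():
        raise FileNotFoundError(f"{db} not found — grant Full Disk Access?")
    return db
```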
🏗️Architecture
💡Concepts to learn
- Graph-based selective recomputation — Core innovation of LEANN that enables 97% storage savings; essential to understand why this project is fundamentally different from traditional vector databases.
- High-degree preserving pruning — Pruning strategy that removes low-impact edges from the embedding computation graph while preserving search accuracy; the algorithmic secret sauce enabling on-demand embedding generation.
- RAG (Retrieval-Augmented Generation) — Pattern LEANN implements: index personal documents semantically, then retrieve relevant context to augment LLM prompts; understanding this pattern is essential to use LEANN effectively.
- Semantic search via embeddings — LEANN indexes documents as embeddings and retrieves by cosine similarity; you need to understand embedding space geometry to debug search quality issues.
- On-demand computation vs. materialization — LEANN's inverse of traditional DB design—computing results at query time instead of storing them precomputed; trade-off between latency and storage that defines the system's performance envelope.
- MCP (Model Context Protocol) — LEANN integrates natively with MCP for LLM tool integration; if extending LEANN for Claude/other MCP clients, you'll need to understand protocol semantics.
- Data source adapters / ETL pattern — LEANN's plugin architecture (one reader per data source: ChatGPT, iMessage, email, etc.); adding a new source means implementing this adapter pattern correctly.
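The on-demand-vs-materialization concept in miniature: recompute a result at query time, but memoize it so repeated queries pay nothing. embed here is a toy stand-in for an expensive embedding call, not LEANN's recomputation path:

```python
from functools import lru_cache

CALLS = {"n": 0}  # counts how often the "expensive" compute actually runs

@lru_cache(maxsize=128)
def embed(text: str) -> tuple:
    """Stand-in for an expensive embedding computed on demand and cached,
    rather than materialized up front for every chunk."""
    CALLS["n"] += 1
    return tuple((hash((text, i)) % 1000) / 1000 for i in range(4))

# First query pays the compute cost; the repeat hits the cache.
v1 = embed("hello")
v2 = embed("hello")
assert v1 == v2 and CALLS["n"] == 1
```

The same latency-for-storage bargain is what the graph-based selective recomputation makes viable at index scale.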
🔗Related repos
- chroma-core/chroma — Popular modern vector database; a directly comparable alternative, though Chroma stores full embeddings vs. LEANN's on-demand recomputation approach.
- weaviate/weaviate — Enterprise vector DB with multimodal support; a heavier-weight alternative for those not constrained to local-only, offline-first requirements.
- griptape-ai/griptape — Framework for building RAG applications; LEANN can serve as the indexing layer beneath Griptape workflows.
- langchain-ai/langchain — De facto standard for RAG orchestration; LEANN is a drop-in vector-store alternative within LangChain pipelines.
- run-llama/llama_index (formerly gpt-index) — LlamaIndex, the other major RAG framework; many LEANN users evaluate it as a competitor or complementary indexing solution.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for RAG data readers across all sources
The repo has multiple data readers (chatgpt_reader.py, claude_reader.py, email_reader.py, imessage_reader.py, slack_mcp_reader.py, etc.) but no visible test suite. Adding integration tests would ensure data extraction reliability across different sources and catch regressions when updating readers. This is critical for a RAG system where data quality directly impacts retrieval accuracy.
- [ ] Create tests/ directory with __init__.py
- [ ] Add tests/data_readers/ subdirectory for reader-specific tests
- [ ] Implement test fixtures in tests/conftest.py for mock data from each source
- [ ] Write unit tests for apps/chatgpt_data/chatgpt_reader.py, apps/claude_data/claude_reader.py, apps/email_data/LEANN_email_reader.py, etc.
- [ ] Add a GitHub Actions workflow tests.yml to run pytest on pull requests
- [ ] Document testing setup in CLAUDE.md or new TESTING.md file
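One way the malformed-input tests could start, assuming readers parse JSON-lines exports. safe_parse_export is a hypothetical hardened parser, not existing LEANN code — the pattern is "skip bad records, don't abort the export":

```python
import json

def safe_parse_export(raw: str) -> list[dict]:
    """Hypothetical hardened reader core: tolerate malformed lines."""
    records = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # malformed line: skip it instead of crashing
        if isinstance(rec, dict) and "text" in rec:
            records.append(rec)
    return records

# The kind of case tests/test_readers.py should pin down:
mixed = '{"text": "hi"}\nnot json at all\n{"no_text": 1}'
assert safe_parse_export(mixed) == [{"text": "hi"}]
```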
Add comprehensive CI/CD workflow for multimodal vision-based PDF indexing
The apps/multimodal/vision-based-pdf-multi-vector/ directory contains benchmark scripts (vidore_v1_benchmark.py, vidore_v2_benchmark.py) and vision model integration (colqwen_forward.py) but no automated validation. Adding a workflow to validate vision model outputs and benchmark performance would prevent regressions in the multimodal RAG pipeline, especially critical for the 97% storage savings claim.
- [ ] Create .github/workflows/multimodal-vision-tests.yml workflow
- [ ] Add smoke tests that validate colqwen_forward.py runs without errors
- [ ] Implement benchmark comparison logic to detect performance degradation in vidore benchmarks
- [ ] Add a step to test leann_multi_vector.py with sample PDFs
- [ ] Document expected performance metrics in apps/multimodal/vision-based-pdf-multi-vector/README.md
Create CLI testing and documentation for semantic file search feature
apps/semantic_file_search/ contains leann_index_builder.py and leann-plus-temporal-search.py but lacks documentation on how to use these tools and no tests validating the temporal search functionality. With proper tests and docs, users can leverage this differentiating feature for indexed document retrieval with time-based filtering.
- [ ] Add tests/semantic_file_search/ with tests for leann_index_builder.py and temporal search logic
- [ ] Test spotlight_index_dump.py to ensure index serialization/deserialization works correctly
- [ ] Create apps/semantic_file_search/README.md with CLI usage examples and API documentation
- [ ] Add integration test in .github/workflows/build-and-publish.yml to validate index building on sample documents
- [ ] Document temporal search query syntax and performance characteristics
🌿Good first issues
- Add test coverage for data readers: create tests/test_readers.py with unit tests for apps/chatgpt_data/chatgpt_reader.py, apps/claude_data/claude_reader.py, and apps/email_data/LEANN_email_reader.py to ensure they handle malformed input gracefully.
- Document API key setup: add a .env.example file and expand the README with step-by-step sections for obtaining and configuring API keys for each RAG variant (ChatGPT, Claude, Gemini, etc.) — currently scattered or implicit.
- Create a "getting started" Jupyter notebook: port the apps/base_rag_example.py logic into examples/quickstart.ipynb with cell-by-cell explanations and inline visualization of embedding graphs and pruning impact (fits the repo's Jupyter usage pattern).
⭐Top contributors
- @ASuresh0524 — 35 commits
- @andylizf — 20 commits
- @yichuan-w — 11 commits
- @tolgakaratas — 11 commits
- @octo-patch — 5 commits
📝Recent commits
- 3031b8e — feat(cli): add --metadata-filters flag to leann search (#306) (octo-patch)
- 2de6e7e — perf: skip app-index rglob scan when CLI indexes already exist (#302) (octo-patch)
- ea12e25 — test: verify daemon isolation across different passages files (fixes #281) (#305) (octo-patch)
- d83d8a6 — fix(diskann): warn when dataset has fewer vectors than PQ centroids (#300) (octo-patch)
- fb5ed18 — fix: hash multi-document files in change detection (fixes #290) (#299) (octo-patch)
- f86beeb — fix(server, interactive): resolve pydantic ForwardRef and readline history errors (#304) (rgr4y)
- ffb0d90 — fix(packaging): ship IVF backend with default leann install (#297) (ASuresh0524)
- 651f4c5 — feat: add Novita AI as LLM provider (#294) (Alex-wuhu)
- d8fa507 — Feat/build index from arrays (#289) (sjswerdloff)
- f6d4334 — fix(packaging): add leann-core README and SPDX license metadata (ASuresh0524)
🔒Security observations
- High · Potential credential exposure in chat data readers — apps/chatgpt_data/chatgpt_reader.py, apps/claude_data/claude_reader.py, apps/gemini_data/gemini_reader.py, apps/qwen_data/qwen_reader.py, apps/slack_data/slack_mcp_reader.py, apps/twitter_data/twitter_mcp_reader.py. These reader modules access external APIs and user data; without visible API-key management or encryption, credentials risk exposure through logs, error messages, or unencrypted storage. Fix: manage credentials via environment variables or a secrets manager (e.g. python-dotenv, AWS Secrets Manager); never hardcode API keys; encrypt credentials at rest and in transit.
- High · Unvalidated email data processing — apps/email_data/LEANN_email_reader.py, apps/email_data/email.py, apps/email_rag.py. The email_data module processes email data without visible input validation or sanitization, which could allow injection attacks or processing of malicious email content. Fix: implement strict input validation and sanitization for email data; use parameterized queries if database operations are involved; validate all user-controlled inputs before processing.
- High · Potential local file traversal in document processing — apps/document_rag.py, apps/code_rag.py, apps/image_rag.py, apps/semantic_file_search/leann_index_builder.py. These file-processing modules handle file paths without visible normalization or validation, potentially allowing directory traversal. Fix: validate paths with os.path.abspath() or pathlib.Path.resolve() and ensure resolved paths remain within intended directories; whitelist allowed directories.
- Medium · Missing input validation in RAG applications — apps/browser_rag.py, apps/code_rag.py, apps/chatgpt_rag.py, apps/claude_rag.py, apps/email_rag.py, apps/imessage_rag.py, apps/wechat_rag.py. These apps accept user input without visible validation, which could allow prompt injection or other input-based attacks. Fix: validate all user-provided queries and parameters; use allowlists where possible; sanitize inputs before passing them to language models or databases.
- Medium · Dependency-management risk (no lock file visible) — root directory / dependency configuration. No requirements.txt, pyproject.toml, or poetry.lock is visible in the root, suggesting missing version pinning and exposure to supply-chain attacks or breaking changes. Fix: maintain a pyproject.toml or requirements.txt with pinned versions; use lock files (poetry.lock or pip-compile) for reproducible builds; audit dependencies regularly with tools like safety or pip-audit.
- Medium · Potential data leakage in local database/index storage — apps/semantic_file_search/leann_index_builder.py, apps/multimodal/vision-based-pdf-multi-vector/leann_multi_vector.py. Local indices and databases are created without visible encryption; sensitive data stored locally could be exposed if the device is compromised. Fix: encrypt locally stored indices and embeddings at rest (AES-256 or similar); consider access controls and proper key management.
- Medium · Incomplete CI/CD security configuration — .github/workflows/build-and-publish.yml, .github/workflows/release-. GitHub Actions workflows are present, but no secrets-management configuration or branch-protection rules are visible, which could allow unauthorized code execution or package publishing.
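The traversal fix recommended above, as a self-contained containment guard (resolve_within is an illustrative name, not a LEANN function):

```python
from pathlib import Path

def resolve_within(base: str, user_path: str) -> Path:
    """Resolve user_path under base and reject anything that escapes it —
    the standard defense against ../-style directory traversal."""
    base_dir = Path(base).resolve()
    candidate = (base_dir / user_path).resolve()
    if not candidate.is_relative_to(base_dir):  # Python 3.9+
        raise ValueError(f"path escapes {base_dir}: {user_path}")
    return candidate
```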
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.