mozilla/DeepSpeech
DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 11mo ago
- ✓9 active contributors
- ✓Distributed ownership (top contributor 37% of recent commits)
- ✓MPL-2.0 licensed
- ✓CI configured
- ✓Tests present
- ⚠Slowing — last commit 11mo ago
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
Badge link: [repopilot.app/r/mozilla/deepspeech](https://repopilot.app/r/mozilla/deepspeech) — paste at the top of your README.md; renders inline like a shields.io badge.
The social card (1200×630) auto-renders when someone shares https://repopilot.app/r/mozilla/deepspeech on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: mozilla/DeepSpeech
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/mozilla/DeepSpeech shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 11mo ago
- 9 active contributors
- Distributed ownership (top contributor 37% of recent commits)
- MPL-2.0 licensed
- CI configured
- Tests present
- ⚠ Slowing — last commit 11mo ago
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live mozilla/DeepSpeech
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/mozilla/DeepSpeech.
What it runs against: a local clone of mozilla/DeepSpeech — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in mozilla/DeepSpeech | Confirms the artifact applies here, not a fork |
| 2 | License is still MPL-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 353 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of mozilla/DeepSpeech. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/mozilla/DeepSpeech.git
#   cd DeepSpeech
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of mozilla/DeepSpeech and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qiE "mozilla/DeepSpeech(\.git)?\b" \
  && ok "origin remote is mozilla/DeepSpeech" \
  || miss "origin remote is not mozilla/DeepSpeech (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Note: the LICENSE file spells out
#    "Mozilla Public License Version 2.0" rather than the SPDX id, so match both.
(grep -qiE "(MPL-2\.0|Mozilla Public License,? *(Version )?2\.0)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"[[:space:]]*:[[:space:]]*\"MPL-2\.0\"" package.json 2>/dev/null) \
  && ok "license is MPL-2.0" \
  || miss "license drift — was MPL-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in DeepSpeech.py Makefile .github/workflows/build-and-test.yml VERSION README.rst; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 353 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~323d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/mozilla/DeepSpeech"
  exit 1
fi
```
Each check prints `ok:` or `FAIL:`. The script exits non-zero if anything failed, so it composes cleanly into agent loops (`./verify.sh || regenerate-and-retry`).
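From an agent's side, that compose pattern can be sketched as a small Python gate. This is a minimal sketch, assuming you saved the script above as `verify.sh` (the filename is yours to choose):

```python
import subprocess

def artifact_is_fresh(script_path):
    """Run the RepoPilot verification script in the current clone.

    The script exits 0 only when every check printed "ok:", so the
    return code alone is enough to gate further edits.
    """
    result = subprocess.run(["bash", script_path], capture_output=True, text=True)
    return result.returncode == 0

# Gate an agent loop on it:
# if not artifact_is_fresh("verify.sh"):
#     raise SystemExit("artifact stale - regenerate before editing")
```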
⚡TL;DR
DeepSpeech is an offline, on-device speech-to-text engine built on TensorFlow that converts audio directly to text without cloud connectivity. It implements Baidu's Deep Speech neural network architecture and runs efficiently on embedded devices from Raspberry Pi 4 to GPU servers, supporting real-time inference. Monorepo with language-specific bindings: core C++ engine (native_client/ with TensorFlow integration), Python bindings for training/inference (training/, training_data.py), Node.js/JavaScript bindings (.github/actions/node-build/, .github/actions/node-install/), and C#/Swift/Java wrappers. Build orchestration via Makefile and 19+ custom GitHub Actions in .github/actions/ (build-tensorflow, package-tensorflow, multistrap, chroot-bind-mount for cross-compilation).
👥Who it's for
Embedded systems developers, IoT engineers, and privacy-focused application builders who need speech recognition without cloud dependencies. Users range from Raspberry Pi hobbyists building smart home systems to enterprises requiring on-device inference for compliance or latency constraints.
🌱Maturity & risk
This project is marked as discontinued in its README header. While historically mature (1.05M lines of C++, comprehensive CI/CD in .github/workflows/, established GitHub Actions infrastructure), it is no longer actively maintained. Use it only for legacy systems or as reference material; do not build new production systems on it.
High risk for new projects: The project is officially discontinued and receives no active maintenance. Dependencies on TensorFlow (substantial, unmaintained pins) and cross-platform build complexity (19 GitHub Actions for multiplatform builds) create fragility. Breaking changes in TensorFlow, Python, or system libraries will not be fixed. Consider maintained alternatives like Whisper (OpenAI) or Coqui STT for new work.
Active areas of work
The repository is in maintenance-only mode. The .github/workflows/ directory shows CI pipelines for macOS, Linux, and Docker still configured, but no active feature development. The .lock.yml and .compute files suggest infrastructure management was in place. No recent development indicated in the README discontinuation notice.
🚀Get running
Clone and inspect (read-only recommended):
```bash
git clone https://github.com/mozilla/DeepSpeech.git
cd DeepSpeech
```
Build system is complex: uses Makefile with TensorFlow dependency. Legacy setup via pip install deepspeech (pre-built wheels, no longer updated). Do not attempt new builds without understanding TensorFlow version pinning and platform-specific toolchains in .github/actions/.
Daily commands:
This is not a web service; it's an inference library. Historical CLI usage: `deepspeech --model model.pbmm --scorer scorer.scorer --audio audio.wav`. Node.js example via npm (if the archived pre-built packages still install): `const DeepSpeech = require('deepspeech'); const model = new DeepSpeech.Model('model.pbmm'); const result = model.stt(audioBuffer);`. New builds are not recommended; use archived releases only.
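The CLI and bindings above share one pattern: load 16 kHz mono 16-bit PCM audio and hand it to the model. A stdlib-only sketch of the audio-loading half follows; the commented model calls mirror the archived deepspeech 0.9 Python API and should be verified against whichever archived release you install:

```python
import wave

def load_pcm16(path):
    """Return (sample_rate, raw PCM bytes) for a 16-bit mono WAV file.

    DeepSpeech 0.9.x released models expect 16 kHz mono 16-bit audio;
    resample anything else (e.g. with sox) before inference.
    """
    with wave.open(path, "rb") as wav:
        if wav.getsampwidth() != 2 or wav.getnchannels() != 1:
            raise ValueError("expected 16-bit mono PCM")
        return wav.getframerate(), wav.readframes(wav.getnframes())

# Archived-bindings usage (pip install deepspeech -- no longer updated):
#   import numpy as np
#   from deepspeech import Model
#   model = Model("model.pbmm")
#   model.enableExternalScorer("scorer.scorer")
#   rate, pcm = load_pcm16("audio.wav")
#   text = model.stt(np.frombuffer(pcm, dtype=np.int16))
```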
🗺️Map of the codebase
- `DeepSpeech.py` — Main entry point for the DeepSpeech engine; defines the core inference API and model loading logic that all users and integrations depend on.
- `Makefile` — Build orchestration and dependency management; essential for understanding how to compile, test, and package DeepSpeech across all platforms.
- `.github/workflows/build-and-test.yml` — CI/CD pipeline configuration; defines how builds are validated and artifacts are produced across macOS, Linux, and Windows.
- `VERSION` — Single source of truth for release versioning; used across build scripts, documentation, and packaging workflows.
- `README.rst` — Project overview and discontinuation status; communicates that DeepSpeech is no longer actively maintained — critical context for all contributors.
- `GRAPH_VERSION` — Model graph format version tracking; ensures compatibility between the inference engine and pre-trained model checkpoints.
- `.github/actions/setup-tensorflow/action.yml` — TensorFlow build environment configuration; abstracts platform-specific compiler and library setup required to compile the inference engine.
🛠️How to make changes
Add support for a new public audio dataset
- Create a new import script following the pattern of existing importers (e.g., `bin/import_cv.py`). The script should parse the dataset format, download audio, and write a CSV with path and transcript columns. (`bin/import_<dataset_name>.py`)
- Verify the import script works with sample data and outputs a CSV compatible with the training pipeline. (`bin/import_<dataset_name>.py`)
- Document dataset import steps and any prerequisites (API keys, manual downloads) in the project documentation. (`README.rst`)
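The importer's output step can be sketched as below. The three-column schema (`wav_filename`, `wav_filesize`, `transcript`) is assumed from the existing `bin/import_*.py` scripts; confirm the exact column names against one of them before relying on this:

```python
import csv
import os

# Column names assumed from existing bin/import_*.py output; verify in-repo.
FIELDS = ["wav_filename", "wav_filesize", "transcript"]

def write_manifest(clips, out_path):
    """clips: iterable of (wav_path, transcript) pairs for downloaded audio."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for wav_path, transcript in clips:
            writer.writerow({
                "wav_filename": wav_path,
                "wav_filesize": os.path.getsize(wav_path),
                # DeepSpeech transcripts are lowercase, alphabet-only text.
                "transcript": transcript.strip().lower(),
            })
```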
Add a new platform build target (e.g., iOS, Android)
- Create platform-specific build rules in the Makefile under a new target (e.g., `make ios-build`), handling cross-compilation and SDK setup. (`Makefile`)
- Add a new GitHub Actions workflow file, or extend build-and-test.yml with a platform-specific build job that uses `.github/actions/setup-tensorflow` and `.github/actions/host-build`. (`.github/workflows/build-and-test.yml`)
- Create a new `.github/actions/<platform>-build/action.yml` that encapsulates compiler flags, SDK paths, and packaging steps specific to the target platform. (`.github/actions/<platform>-build/action.yml`)
- Add test cases in `bin/run-ci-*.sh` to validate inference on the new platform using available sample data. (`bin/run-ci-<platform>_test.sh`)
Add a new model quantization or optimization format (e.g., ONNX, TFLite variant)
- Increment GRAPH_VERSION to signal the model format change; update the version reference in DeepSpeech.py to handle checkpoint compatibility. (`GRAPH_VERSION`)
- Extend DeepSpeech.py to detect and load the new format, adding conditional branches for model initialization and inference. (`DeepSpeech.py`)
- Add a new CI test script (e.g., `bin/run-ci-ldc93s1_new_<format>.sh`) that trains, exports to the new format, and validates inference accuracy. (`bin/run-ci-ldc93s1_new_<format>.sh`)
- Update the Makefile to include export and validation targets for the new format. (`Makefile`)
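The first two steps of that checklist — version gating plus format detection — could look roughly like this. The loader names and extension table are illustrative only, not DeepSpeech's actual API; `.onnx` stands in for the hypothetical new format:

```python
from pathlib import Path

# Hypothetical extension -> loader table; .onnx is the format being added.
LOADERS = {
    ".pbmm": "load_frozen_graph",
    ".tflite": "load_tflite",
    ".onnx": "load_onnx",
}

def pick_loader(model_path, engine_graph_version, model_graph_version):
    """Refuse models newer than the engine supports, then dispatch on extension."""
    if model_graph_version > engine_graph_version:
        raise ValueError(
            f"model graph v{model_graph_version} is newer than the "
            f"engine's supported v{engine_graph_version}; upgrade the engine"
        )
    suffix = Path(model_path).suffix
    if suffix not in LOADERS:
        raise ValueError(f"unrecognised model format: {suffix!r}")
    return LOADERS[suffix]
```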
🔧Why these technologies
- TensorFlow (C++/Python) — Enables fast prototyping of deep learning models (RNN+CTC architecture) and provides mature graph serialization and cross-platform inference runtime for embedded deployment.
- Bazel build system — Supports complex multi-platform, multi-language builds (C++, Python, Node.js) with fine-grained dependency management and caching for fast incremental builds.
- GitHub Actions — Provides free CI/CD for cross-platform builds (macOS Intel/M1, Linux ARM/x86, Windows); custom actions encapsulate platform-specific build complexity as reusable workflows.
- Docker (build templates) — Ensures reproducible build environments and simplifies onboarding; templates allow users to build DeepSpeech without installing compiler toolchains.
⚖️Trade-offs already made
- Project discontinued; no active development
  - Why: Mozilla concluded DeepSpeech did not achieve production-grade accuracy or performance targets for commercial deployment.
  - Consequence: Users must rely on community forks or alternative speech engines (Coqui STT, Kaldi). Security and compatibility fixes are not guaranteed.
- Single monolithic graph (no modular sub-models)
  - Why: Simplifies the inference and training pipeline; avoids multi-stage orchestration complexity.
  - Consequence: Model updates require full retraining; difficult to iterate on individual components (feature extraction, acoustic model, decoder).
- Python API wraps C++ TensorFlow inference
  - Why: Python ease of use for research; C++ provides low-latency inference and embedded compatibility.
  - Consequence: Requires maintaining language bindings; overhead of Python→C++ bridging for real-time applications.
- CTC (Connectionist Temporal Classification) decoder instead of language model fusion
  - Why: Simpler architecture; avoids dependency on external N-gram or neural language models.
  - Consequence: Lower accuracy than systems with strong language models (e.g., speech recognition in noisy environments); no out-of-vocabulary handling.
🚫Non-goals (don't propose these)
- Does not support streaming/online recognition; requires complete audio buffer before inference.
- Does not provide a trained multilingual model; each language requires separate training.
- Does not handle low-resource languages without substantial labeled audio data (1000+ hours).
- Not a real-time cloud service; DeepSpeech is strictly an on-device inference engine with offline operation.
🪤Traps & gotchas
1. TensorFlow pinning is strict and fragile: the project targets specific TensorFlow versions (likely 1.x or early 2.x based on its age); modern TensorFlow will not work without extensive patching.
2. Cross-compilation complexity: `.github/actions/multistrap/` and `.github/actions/chroot-bind-mount/` handle ARM builds; these require Linux with specific kernel features and multiarch support — non-trivial to reproduce locally.
3. SWIG bindings are auto-generated: language bindings (Python, Node, C#, Swift, Java) are generated from SWIG templates; hand-editing them will be overwritten.
4. No Python 3.10+ support: the codebase targets Python 3.6–3.9; later versions will fail due to TensorFlow and NumPy version constraints.
5. Model file formats are frozen: existing `.pbmm` and `.scorer` files are not forward-compatible with updated TensorFlow; you cannot retrain easily without extensive refactoring.
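The Python-version trap is worth guarding against explicitly: failing fast on an unsupported interpreter beats a cryptic TensorFlow import error. A minimal sketch, assuming the 3.6–3.9 range stated above (this helper is not part of the repo's own tooling):

```python
import sys

def check_python_supported(version_info=None):
    """Raise early if the interpreter is outside the 3.6-3.9 range
    that DeepSpeech's pinned TensorFlow is believed to support."""
    major, minor = (version_info or sys.version_info)[:2]
    if not ((3, 6) <= (major, minor) <= (3, 9)):
        raise RuntimeError(
            f"Python {major}.{minor} is outside the 3.6-3.9 range; "
            "expect TensorFlow/NumPy pin failures"
        )
    return True
```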
🏗️Architecture
💡Concepts to learn
- End-to-End Speech Recognition — DeepSpeech implements Baidu's end-to-end approach (input audio → output text directly) rather than traditional ASR pipelines (audio → phonemes → words); understanding this explains the model architecture and training data requirements.
- CTC (Connectionist Temporal Classification) Loss — DeepSpeech uses CTC loss to align variable-length audio sequences to text without explicit alignment labels; critical for understanding the training pipeline in `training/`.
- SWIG (Simplified Wrapper and Interface Generator) — The project uses SWIG to auto-generate Python, Node.js, C#, Swift, and Java bindings from the C++ core; essential for modifying language bindings or debugging binding failures.
- TensorFlow Frozen Graph (`.pbmm` format) — DeepSpeech models are distributed as TensorFlow frozen graphs (`.pbmm` files); these are pre-compiled inference graphs that cannot be retrained without re-serializing, which explains the inference-only deployment model.
- Cross-Compilation and Multistrap — The `.github/actions/multistrap/` action manages ARM toolchains for compiling on x86_64 machines to target ARM (Raspberry Pi); essential for understanding the embedded deployment pipeline.
- Language Model (Scorer/KenLM) — DeepSpeech uses a separate language model (`.scorer` files, typically KenLM-based) to improve accuracy via beam search decoding; understanding the separate acoustic and language model components is crucial for fine-tuning.
- Hermetic Build (Self-Contained GitHub Actions) — The 19+ custom GitHub Actions (`.github/actions/build-tensorflow/`, `.github/actions/package-tensorflow/`, etc.) create hermetic, reproducible builds isolated from host system state; understanding this pattern is key for debugging CI failures and reproducing builds locally.
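The CTC concept above has a compact illustration: greedy decoding collapses the per-frame argmax labels by merging repeats and dropping blanks. A toy sketch (DeepSpeech's real decoder is beam search with the KenLM scorer, not this greedy rule):

```python
def ctc_greedy_collapse(frame_labels, blank="_"):
    """Merge consecutive repeated labels, then drop blank symbols.

    The blank is what lets CTC emit doubled letters: "l _ l" survives
    as "ll", while "l l" collapses to a single "l".
    """
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)

# ctc_greedy_collapse(list("hheel_llo")) -> "hello"
```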
🔗Related repos
- `openai/whisper` — Modern successor to DeepSpeech for offline speech-to-text; actively maintained, supports multiple languages, more robust than DeepSpeech; recommended for new projects.
- `coqui-ai/STT` — Community fork of DeepSpeech after Mozilla discontinued it; inherits the codebase and fixes compatibility issues with modern TensorFlow and Python.
- `mozilla/TensorFlow` — Mozilla's TensorFlow fork/patches used by DeepSpeech to optimize inference on embedded devices; explains the custom TensorFlow build actions.
- `lium/ffsplit` — Companion tool for audio preprocessing and feature extraction in speech recognition pipelines; commonly used alongside DeepSpeech training data preparation.
- `mozilla/common-voice` — Mozilla's crowdsourced multilingual audio dataset project; source data used to train DeepSpeech models and validate the platform.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for GitHub Actions in check_artifact_exists action
The .github/actions/check_artifact_exists action has a main.js entry point but lacks test coverage. Given that this is a critical CI/CD action used across workflows, adding Jest tests would prevent regressions when modifying artifact checking logic. The package.json already includes devDependencies for testing infrastructure (@actions/core, @actions/github, @octokit/plugin-throttling), but no test files exist.
- [ ] Create .github/actions/check_artifact_exists/tests/main.test.js with Jest configuration
- [ ] Add test cases for artifact existence checks, GitHub API rate limiting (using @octokit/plugin-throttling), and zip file validation (using adm-zip)
- [ ] Add 'test' script to .github/actions/check_artifact_exists/package.json and update CI workflow to run these tests
- [ ] Add test coverage thresholds to prevent regression of artifact checking logic
Document and test custom GitHub Actions in .github/actions directory
The repository contains 20+ custom GitHub Actions (.github/actions/*/action.yml files) but most lack README.md documentation. Only check_artifact_exists and get_cache_key have READMEs. This creates friction for contributors trying to understand the build pipeline. Adding missing documentation would improve maintainability.
- [ ] Create README.md files for undocumented actions: build-tensorflow, host-build, node-build, python-build, package-tensorflow, package, run-tests, select-xcode, etc.
- [ ] Document inputs, outputs, and usage examples for each action in the corresponding README
- [ ] Add a .github/actions/README.md index file linking to all custom actions with their purposes in the build-and-test.yml workflow
Add tests for DeepSpeech.py CLI entry point with coverage for different platform builds
DeepSpeech.py is the main entry point but there's no evidence of unit tests validating CLI argument parsing or platform-specific behavior (given the extensive cross-platform build actions for macOS, Linux, Windows in .github/workflows/). Adding tests would catch regressions when the CLI interface changes and ensure consistent behavior across platforms.
- [ ] Create tests/test_deepspeech_cli.py with pytest to test argument parsing, model loading, and inference on sample audio
- [ ] Add platform-specific test cases in the existing test suite to validate Windows (win-install-sox), macOS, and Linux behaviors
- [ ] Add a GitHub Actions workflow step in .github/workflows/build-and-test.yml to run these CLI tests after package builds complete
🌿Good first issues
- Add Python 3.9+ compatibility testing: The project targets Python 3.6–3.9, but NumPy and TensorFlow have dropped 3.9 support. Audit `training/` and `native_client/` Python code for deprecated APIs and propose a compatibility layer, or document end-of-life for newer Python versions. Why it helps: future maintainers (if any) need to understand Python compatibility constraints; involves reading setup.py, requirements files, and CI configs.
- Document the SWIG build pipeline: The `.github/actions/` folder contains 5+ custom Actions that orchestrate SWIG bindings (Python, Node, C#, Swift, Java) but lack comments explaining the data flow. Create a `docs/BUILD_BINDINGS.md` file mapping each action to its inputs and outputs and explaining when and why each language binding is triggered. Why it helps: new contributors need to understand how C++ changes propagate to language bindings; this is currently opaque in the Actions YAML.
- Add an ARM64 (aarch64) architecture test to CI/CD: The `.github/actions/multistrap/` action suggests ARM support, but `.github/workflows/build-and-test.yml` may not have explicit aarch64 test jobs. Verify coverage and add a GitHub Actions job using `runs-on: [self-hosted, arm64]` or QEMU to test builds on ARM64 hardware or emulation. Why it helps: validates the Raspberry Pi 4 use case claimed in the README, which is currently undocumented in the workflows.
⭐Top contributors
- @reuben — 37 commits
- @lissyx — 30 commits
- Alexandre Lissy — 24 commits
- @CatalinVoss — 4 commits
- Daniel Tinazzi — 1 commit
📝Recent commits
- `6913ae8` — Update README.rst (lissyx)
- `aa1d285` — Merge pull request #3703 from danieltinazzi/master (lissyx)
- `4fa8dd3` — Fixed M-AILABS broken link (Daniel Tinazzi)
- `73e1e4f` — Merge pull request #3674 from mozilla/lissyx-patch-1 (lissyx)
- `f606ac5` — Update conf.py (lissyx)
- `9e67724` — Merge pull request #3647 from mozilla/ftyers-patch-2 (lissyx)
- `e026bb7` — Update augmentations.py (ftyers)
- `9353bb2` — Merge pull request #3641 from lissyx/nodejs-16 (reuben)
- `a60eb77` — Add NodeJS 16.0.0 (Alexandre Lissy)
- `280f272` — Merge pull request #3639 from lissyx/ga-linux-armv7-aarch64 (lissyx)
🔒Security observations
The DeepSpeech repository has moderate security concerns primarily centered on outdated dependencies with known vulnerabilities, particularly in GitHub Actions dependencies (@actions/core, @actions/github) and the adm-zip library. The most significant risk is the use of outdated packages that may contain exploitable vulnerabilities, combined with potential path traversal risks in artifact extraction operations. Additionally, the project's discontinued status means no security updates will be provided. The codebase requires immediate dependency updates and a security audit of the artifact handling logic. While no obvious hardcoded secrets, SQL injection, or XSS patterns are evident from the file structure alone, the dependency vulnerabilities and lack of maintenance pose meaningful risks to any deployment.
- High · Outdated and vulnerable dependencies — `.github/actions/check_artifact_exists/package.json`. The package.json contains dependencies with known vulnerabilities: @actions/core@^1.2.6 (released 2020) and @actions/github@^4.0.0 (released 2020) are significantly outdated, with issues including prototype pollution and vulnerable transitive dependencies reported in npm security advisories. Fix: update all dependencies to their latest stable versions; run `npm audit` and `npm audit fix` to identify and remediate vulnerable packages; specifically update @actions/core to ^1.10.x, @actions/github to ^6.x, @octokit/plugin-throttling to latest, and @vercel/ncc to latest.
- Medium · Outdated adm-zip library — `.github/actions/check_artifact_exists/package.json`. The adm-zip@^0.5.2 dependency is outdated (from 2020). This library handles ZIP file extraction, and earlier versions had path traversal vulnerabilities; using an old version increases the risk of exploitation. Fix: upgrade adm-zip to ^0.5.9 or later, preferably the latest stable version; verify that ZIP extraction operations validate and sanitize file paths to prevent directory traversal attacks.
- Medium · Potential artifact extraction vulnerability — `.github/actions/check_artifact_exists/main.js`. The action uses adm-zip for artifact handling. Without reviewing the main.js implementation, there is a risk of path traversal if ZIP files are extracted without proper path validation, especially given the outdated dependency version. Fix: review main.js to ensure ZIP files are extracted with strict path validation, no relative traversal (`../`) is allowed, extracted files stay within expected directories, and library functions that handle path normalization safely are preferred.
- Low · Discontinued project status — `README.rst`. The project is marked as discontinued in the README, so no security updates, bug fixes, or maintenance are planned; users will not receive patches for any vulnerabilities discovered. Fix: users should consider migrating to actively maintained alternatives; contributors should plan accordingly for long-term security compliance.
- Low · Missing security policy documentation — repository root. No visible SECURITY.md or security policy file documents how security issues should be reported, which matters for a machine learning framework that may be used in production environments. Fix: create a SECURITY.md documenting responsible disclosure practices, including alternative reporting channels given the project's discontinued status.
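The path-traversal check recommended for main.js can be illustrated in Python. The action itself is Node.js with adm-zip, so this is an analogue of the validation logic, not a patch for that file:

```python
import zipfile
from pathlib import Path

def safe_extract(zip_path, dest_dir):
    """Extract a zip only if every member resolves inside dest_dir,
    rejecting the "../" traversal pattern the adm-zip note describes."""
    dest = Path(dest_dir).resolve()
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            target = (dest / name).resolve()
            # A member is safe only if it is dest itself or strictly below it.
            if dest != target and dest not in target.parents:
                raise ValueError(f"blocked path traversal: {name!r}")
        zf.extractall(dest)
```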
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.