k2-fsa/sherpa-onnx
Speech-to-text, text-to-speech, speaker diarization, speech enhancement, source separation, and VAD using next-gen Kaldi with onnxruntime, with no Internet connection required. Supports embedded systems, Android, iOS, HarmonyOS, Raspberry Pi, RISC-V, RK NPU, Axera NPU, Ascend NPU, x86_64 servers, and WebSocket server/client, with bindings for 12 programming languages.
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit today
- ✓ 19 active contributors
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Concentrated ownership — top contributor handles 76% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/k2-fsa/sherpa-onnx) Paste at the top of your README.md; it renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/k2-fsa/sherpa-onnx on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: k2-fsa/sherpa-onnx
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/k2-fsa/sherpa-onnx shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit today
- 19 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 76% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live k2-fsa/sherpa-onnx
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/k2-fsa/sherpa-onnx.
What it runs against: a local clone of k2-fsa/sherpa-onnx — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in k2-fsa/sherpa-onnx | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of k2-fsa/sherpa-onnx. If you don't
# have one yet, run these first:
#
# git clone https://github.com/k2-fsa/sherpa-onnx.git
# cd sherpa-onnx
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of k2-fsa/sherpa-onnx and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "k2-fsa/sherpa-onnx(\.git)?\b" \
  && ok "origin remote is k2-fsa/sherpa-onnx" \
  || miss "origin remote is not k2-fsa/sherpa-onnx (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
# (Apache LICENSE files usually open with "Apache License", so accept either form.)
(grep -qiE "^[[:space:]]*(Apache-2\.0|Apache License)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"[[:space:]]*:[[:space:]]*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Critical files exist
for f in \
  ".github/scripts/node-addon/index.js" \
  ".github/scripts/export-qnn/generate_config.py" \
  ".github/scripts/export-ascend/generate_paraformer.py" \
  ".github/workflows/build-wheels-linux.yaml" \
  ".github/workflows/c-api.yaml"; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/k2-fsa/sherpa-onnx"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
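The exit-code contract composes into a retry loop. A minimal sketch, assuming you saved the script above as `verify.sh`; the helper name and the regeneration command are placeholders, not anything shipped by RepoPilot:

```shell
# Sketch of an agent loop around the verifier's exit-code contract.
# verify_cmd and regen_cmd are whatever commands you use, passed as strings
# (e.g. "bash verify.sh" and a command that re-fetches the artifact).
verify_or_regenerate() {
  verify_cmd="$1"
  regen_cmd="$2"
  if $verify_cmd; then
    echo "artifact verified"
    return 0
  fi
  echo "artifact stale, regenerating once"
  $regen_cmd || return 2          # regeneration itself failed
  $verify_cmd && echo "artifact verified after regeneration"
}
```

An agent would call it as `verify_or_regenerate "bash verify.sh" "<your regeneration command>"` and treat exit code 2 as "give up and ask the user".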
⚡TL;DR
Sherpa-ONNX is a multi-function speech AI framework that runs speech-to-text (ASR), text-to-speech (TTS), speaker diarization, speech enhancement, source separation, and voice activity detection entirely offline using ONNX Runtime and next-gen Kaldi models. It supports 12 programming languages and specialized hardware accelerators (RKNN, QNN, Ascend, Axera NPUs) for embedded deployment without an internet dependency. It is a monorepo: the core C++ engine lives at the repository root (not itemized in this artifact), language-specific bindings sit in directories inferred from CI scripts (C#, Kotlin, Dart, Rust, Go, JavaScript in .github/scripts/test-*.sh), model export scripts live in .github/scripts/export-*/ (Ascend, QNN), and the build is configured via CMake (321K lines). The Node.js bindings use the native-addon pattern with optional platform-specific dependencies (sherpa-onnx-{darwin,linux,win}-{arm64,x64,ia32}).
👥Who it's for
Embedded systems engineers, mobile developers (Android/iOS/HarmonyOS), and speech AI practitioners who need production-grade offline speech processing on edge devices (Raspberry Pi, RISC-V boards, smartphones) without cloud latency or privacy concerns. Contributors span C++, Python, Rust, JavaScript, Kotlin, and Dart ecosystems.
🌱Maturity & risk
Actively maintained monorepo with substantial codebase (4.2M+ lines C++, 2M+ Python), extensive platform support (x64, x86, arm64, arm32, riscv64 across 6+ OSs), and organized CI scripts in .github/scripts/ for testing 12 languages and multiple NPU backends. Production-ready for embedded speech tasks with clear API contracts across language bindings.
Risk factors: a tightly coupled ONNX Runtime dependency with version-specific model compatibility; heavy reliance on specialized hardware (NPU support requires vendor-specific SDKs such as QNN and RKNN); and a broad platform matrix, visible in .github/scripts/, that implies fragmented test coverage and potential regression points. Maintaining bindings for 12 languages can also leave APIs stale or inconsistent across language targets.
Active areas of work
Recent work includes NPU backend expansion (QNN export scripts for Qualcomm, Ascend export for Huawei), model-specific generators (Paraformer, SenseVoice, Whisper, Zipformer variants), and language binding maintenance across Dart, Rust, and Node.js. CI pipeline suggests active testing of streaming vs non-streaming ASR, TTS, speaker tasks, and speech enhancement.
🚀Get running
git clone https://github.com/k2-fsa/sherpa-onnx.git
cd sherpa-onnx
# For Node.js bindings:
npm install sherpa-onnx
# For C++ core, use CMake:
cmake -B build && cmake --build build
# For Python:
pip install sherpa-onnx
Daily commands:
For Node.js: node .github/scripts/node-addon/index.js after npm install. For C++: compile via CMake then execute generated binaries. For Python: python -m sherpa_onnx --help after pip install. Language-specific test scripts in .github/scripts/test-{language}.sh show canonical invocation patterns.
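The test-{language}.sh convention above lends itself to a small dispatcher. A sketch, assuming you run it from the repo root; `run_binding_test` is a hypothetical helper, not part of the repository:

```shell
# Hypothetical dispatcher for the .github/scripts/test-{language}.sh
# convention: resolve the canonical test script for a language binding
# and run it, failing loudly if no such script exists.
run_binding_test() {
  # $1 = language name (e.g. python, rust, dart)
  # $2 = optional base directory (defaults to .github/scripts)
  script="${2:-.github/scripts}/test-$1.sh"
  if [ -f "$script" ]; then
    bash "$script"
  else
    echo "no test script for '$1' (expected $script)" >&2
    return 1
  fi
}
```

Usage from a clone: `run_binding_test python` or `run_binding_test rust`.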
🗺️Map of the codebase
- .github/scripts/node-addon/index.js — Node.js native addon entry point; defines the primary JavaScript interface for all speech recognition, text-to-speech, and audio processing features.
- .github/scripts/export-qnn/generate_config.py — Generates ONNX model configurations for Qualcomm QNN acceleration; essential for understanding model export pipelines across multiple NPU targets.
- .github/scripts/export-ascend/generate_paraformer.py — Demonstrates the Ascend NPU model export workflow; critical for understanding the multi-target acceleration strategy and model optimization patterns.
- .github/workflows/build-wheels-linux.yaml — Primary wheel-building CI/CD pipeline; defines build dependencies, testing harnesses, and release processes for the entire project.
- .github/workflows/c-api.yaml — C API validation workflow; ensures the core C bindings work across platforms and is the foundation for all language bindings.
- .github/workflows/android.yaml — Android APK build pipeline; shows how native code integrates with Java/Kotlin and validates embedded-systems support across ARM architectures.
- .github/scripts/test-python.sh — Python integration test suite; validates the Python bindings and demonstrates the full feature set used by contributors.
🛠️How to make changes
Add Support for a New ASR Model (e.g., Zipformer, Paraformer, Whisper variant)
- Create a model export script in .github/scripts/export-{backend}/generate_{model_name}.py following the patterns in export-qnn/generate_paraformer.py and export-ascend/generate_whisper.py (.github/scripts/export-qnn/generate_paraformer.py)
- Add a GitHub Actions workflow in .github/workflows/ to automate the export (reference export-3dspeaker-to-onnx.yaml) (.github/workflows/export-3dspeaker-to-onnx.yaml)
- Add a test script in .github/scripts/test-{model_name}.sh following the pattern of test-offline-whisper.sh or test-online-transducer.sh (.github/scripts/test-offline-whisper.sh)
- Add a test workflow in .github/workflows/ that calls the test script and validates across all supported platforms (reference c-api.yaml) (.github/workflows/c-api.yaml)
- Update the language binding examples in node-addon/index.js to demonstrate the new model, if adding a new feature category (.github/scripts/node-addon/index.js)
Add Support for a New Hardware Accelerator (e.g., new NPU, DSP, or GPU backend)
- Create a new export directory .github/scripts/export-{new_accelerator}/ with model generation scripts mimicking the export-qnn/ structure (.github/scripts/export-qnn/generate_config.py)
- Create a CI/CD workflow in .github/workflows/ (reference android-rknn.yaml or axera-linux-aarch64.yaml for similar patterns) (.github/workflows/android-rknn.yaml)
- Add a device-info validation script (reference export-qnn/device_info.py) (.github/scripts/export-qnn/device_info.py)
- Create a test workflow validating that compiled binaries work on the target hardware (reference build-wheels-aarch64-rknn.yaml) (.github/workflows/build-wheels-aarch64-rknn.yaml)
Add a New Language Binding (e.g., Go, Lua, Ruby)
- Create a language-specific wrapper directory and binding code, following the patterns in .github/scripts/node-addon/ for compiled bindings (.github/scripts/node-addon/index.js)
- Add a test script in .github/scripts/test-{language}.sh validating all major features (ASR, TTS, VAD, speaker diarization, etc.) (.github/scripts/test-python.sh)
- Create a CI/CD workflow in .github/workflows/ to run the tests and publish the package to the language-specific registry (reference test-nodejs-npm.sh) (.github/scripts/test-nodejs-npm.sh)
- Document the platform support matrix in the workflow and README (reference existing workflows for the format) (.github/workflows/c-api.yaml)
Add a New Pre-processing or Post-processing Step (e.g., audio resampling, punctuation, denoising)
- Implement the feature in the native C++ core (the actual implementation is not shown here, but the integration pattern follows the C API in .github/scripts/as-cmake-sub-project/main.cc) (.github/scripts/as-cmake-sub-project/main.cc)
- Expose it via the Node.js addon in .github/scripts/node-addon/index.js with parameter validation (.github/scripts/node-addon/index.js)
- Add a test script in .github/scripts/test-{feature}.sh with sample audio (reference test-offline-punctuation.sh or test-offline-speech-denoiser.sh) (.github/scripts/test-offline-punctuation.sh)
- Add a CI/CD validation workflow in .github/workflows/ to test across platforms (.github/workflows/c-api.yaml)
🔧Why these technologies
- ONNX Runtime — Provides vendor-neutral model inference with hardware acceleration
🪤Traps & gotchas
- ONNX model versioning: models must match the exact ONNX opset version and onnxruntime build used at compile time — mismatches cause silent inference failures.
- NPU SDKs not bundled: the QNN (Qualcomm) and RKNN (Rockchip) backends require separate vendor SDK installations; CI scripts assume these are pre-installed in the CI environment.
- Platform-specific native modules: Node.js optional dependencies fail silently on unsupported OS/arch combinations; users must match their platform to the available binaries (only 6 combinations in package.json).
- CMake subdirectory inclusion: .github/scripts/as-cmake-sub-project shows that embedded use requires careful target-name management to avoid conflicts.
- No pre-built binaries documented in this snippet: it is unclear whether the npm packages are pre-built or require local compilation — check the npm registry separately.
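The silent-failure trap for platform-specific Node.js modules can be caught with an explicit postinstall check. A sketch in shell, assuming the sherpa-onnx-{os}-{arch} package naming described in the TL;DR; the helper function itself is illustrative, not shipped code:

```shell
# Derive the expected sherpa-onnx platform package name for the current
# machine, so a postinstall step can fail loudly instead of silently
# when npm skipped the matching optional dependency.
# Assumes the sherpa-onnx-{darwin,linux,win}-{arm64,x64,ia32} naming scheme.
platform_pkg() {
  case "$(uname -s)" in
    Darwin) os=darwin ;;
    Linux)  os=linux ;;
    *)      os=win ;;
  esac
  case "$(uname -m)" in
    arm64|aarch64) arch=arm64 ;;
    x86_64|amd64)  arch=x64 ;;
    *)             arch=ia32 ;;
  esac
  echo "sherpa-onnx-${os}-${arch}"
}

# Example check, run from a project that installed sherpa-onnx via npm:
pkg=$(platform_pkg)
test -d "node_modules/$pkg" \
  && echo "ok: $pkg installed" \
  || echo "FAIL: $pkg missing; npm may have skipped the optional dependency"
```

Running this in a project directory prints either the ok line or the FAIL line for your platform; wire the FAIL case into your install script's exit status if you want it to be fatal.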
🏗️Architecture
💡Concepts to learn
- ONNX (Open Neural Network Exchange) — Sherpa-onnx's entire inference layer is built on ONNX format models and ONNX Runtime—understanding ONNX opsets, quantization formats, and operator coverage is critical for model compatibility and performance tuning.
- Streaming vs Non-streaming ASR (Online/Offline) — Sherpa-onnx explicitly supports both modes (see test scripts like test-online-transducer.sh vs test-offline-transducer.sh); they require different model architectures (stateless decoders vs stateful RNNs) and API patterns.
- Neural Processing Unit (NPU) Acceleration — Sherpa-onnx supports 4 NPU backends (RKNN, QNN, Ascend, Axera); each requires vendor-specific quantization and compilation (export-qnn/, export-ascend/ scripts); critical for embedded performance.
- Native Language Bindings (FFI/JNI/P-Invoke) — Supporting 12 languages requires different glue layers (node-addon-api for Node.js, ctypes for Python, JNI for Java, C#'s P/Invoke); each binding is a maintenance surface and potential source of inconsistency.
- Transducer RNN Architecture — Sherpa-onnx's core ASR models (zipformer, paraformer variants) use transducer architecture for streaming; this differs from attention-based Seq2Seq and requires understanding alignment-free training.
- Cross-compilation and Platform Abstraction — CMakeLists.txt and .github/scripts/as-cmake-sub-project demonstrate building for 5+ architectures and 6+ OSs; requires careful target-specific configuration and understanding of ARM vs x86 toolchain differences.
- Speaker Diarization (Who Spoke When) — Sherpa-onnx bundles speaker diarization alongside ASR; this is a distinct task requiring speaker embeddings, clustering, and temporal alignment—not just speech-to-text.
🔗Related repos
- openai/whisper — Competing offline ASR model; sherpa-onnx includes Whisper export scripts (.github/scripts/export-*/generate_whisper.py) showing how to integrate alternative architectures.
- k2-fsa/k2 — Companion repo providing the next-gen Kaldi toolkit that sherpa-onnx is built on; shared team, shared model format and training pipeline.
- microsoft/onnxruntime — Core runtime dependency; sherpa-onnx's entire inference pipeline depends on ONNX Runtime's operator support and performance characteristics across platforms.
- mozilla/DeepSpeech — Predecessor offline ASR framework; demonstrates the problem space (local speech-to-text) that sherpa-onnx solves with modern ONNX models instead of TensorFlow.
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive CI workflow for Node.js addon on all supported platforms
The repo has many platform-specific test scripts (test-nodejs-addon-npm.sh, test-nodejs-npm.sh) and exports binaries for darwin-arm64, darwin-x64, linux-x64, linux-arm64, win-x64, win-ia32, but there's no dedicated GitHub Actions workflow validating the Node.js addon builds and tests across these platforms. This would catch cross-platform compatibility issues early and ensure the npm package works on all advertised platforms.
- [ ] Create .github/workflows/nodejs-addon-macos.yaml to test darwin-arm64 and darwin-x64 builds
- [ ] Create .github/workflows/nodejs-addon-linux.yaml to test linux-x64 and linux-arm64 builds
- [ ] Create .github/workflows/nodejs-addon-windows.yaml to test win-x64 and win-ia32 builds
- [ ] Reference existing scripts in .github/scripts/test-nodejs-addon-npm.sh and .github/scripts/node-addon/ directory
- [ ] Ensure each workflow validates the optional dependency installation and runs basic smoke tests
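The smoke test in the last checklist item could be as simple as proving that node can actually load the module after `npm install`, rather than trusting that the install step succeeded. A hedged sketch; `node_can_load` is a hypothetical helper name, not an existing script:

```shell
# Hypothetical smoke-test step for a per-platform workflow job:
# verify that node can load the named module, printing ok/FAIL/skip.
node_can_load() {
  command -v node >/dev/null 2>&1 || { echo "skip: node not installed"; return 0; }
  if node -e "require('$1')" >/dev/null 2>&1; then
    echo "ok: $1 loads"
  else
    echo "FAIL: $1 did not load"
    return 1
  fi
}
```

In a workflow step this would be `node_can_load sherpa-onnx` after `npm install`, with the non-zero exit failing the job on platforms where the optional native binary is broken.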
Add missing test coverage for QNN export scripts with CI validation
The repo has comprehensive export scripts for Ascend (.github/scripts/export-ascend/) with multiple model generators (generate_paraformer.py, generate_sense_voice.py, generate_whisper.py, generate_zipformer_ctc_20250703.py), but the QNN export scripts (.github/scripts/export-qnn/) lack corresponding test workflows and validation CI. Adding structured tests would catch regressions in QNN model export quality.
- [ ] Create .github/workflows/test-qnn-export.yaml workflow
- [ ] Add unit tests in a new test file (e.g., .github/scripts/test-qnn-export.sh) that validates generate_paraformer.py, generate_sense_voice.py, and generate_zipformer.py outputs
- [ ] Validate that generated config files in generate_config.py match expected schema
- [ ] Reference device_info.py to ensure device detection works in test environment
- [ ] Compare structure against existing .github/scripts/export-ascend/ for consistency
Add integration tests for speech enhancement and source separation features with CI workflow
The repo advertises speech enhancement and source separation capabilities (test-offline-speech-denoiser.sh and test-offline-source-separation.sh exist), but there's no dedicated GitHub Actions workflow that runs these tests on a schedule or on pull requests. This ensures these features remain functional across platform changes and dependency updates.
- [ ] Create .github/workflows/test-speech-enhancement.yaml that runs .github/scripts/test-offline-speech-denoiser.sh on Linux/macOS
- [ ] Create .github/workflows/test-source-separation.yaml that runs .github/scripts/test-offline-source-separation.sh on Linux/macOS
- [ ] Ensure workflows download or cache required model files for speech enhancement and source separation
- [ ] Add validation to compare output audio quality metrics against baseline thresholds
- [ ] Reference the README feature matrix to ensure advertised ✔️ features are covered by CI
🌿Good first issues
- Add missing language binding test scripts: .github/scripts/ has ~20 test scripts but likely misses some of the 12 advertised languages (e.g., Pascal, Objective-C) — create test-pascal.sh and test-objective-c.sh following the pattern of test-rust.sh or test-dart.sh.
- Document the model export workflow for a new NPU target: .github/scripts/export-qnn/ and export-ascend/ exist but lack a README explaining how to add export-rknpu/generate_whisper.py or support a new model family — write .github/scripts/EXPORT_README.md with concrete examples.
- Extend Node.js binding platform coverage: package.json optionalDependencies only covers 6 platform/arch pairs (darwin-{arm64,x64}, linux-{x64,arm64}, win-{x64,ia32}); add support for linux-arm32 and linux-riscv64 by defining new optional dependency entries and updating the loader logic in .github/scripts/node-addon/index.js.
⭐Top contributors
- @csukuangfj — 76 commits
- @alex-spacemit — 3 commits
- @negbie — 2 commits
- @GLM-FM — 2 commits
- @Wasser1462 — 2 commits
📝Recent commits
- ae2bc66 — feat(nemo): add buffered RNNT streaming path for Parakeet Unified (#3575) (milanleonard)
- a90437c — WASM fixes (Bump emsdk 4.0.23) (#3582) (scottyeager)
- 7adbb3f — Fix building Linux wheels for Python 3.8 (#3593) (csukuangfj)
- f3b1a9d — Release v1.13.1 (#3590) (csukuangfj)
- 2a3a9d9 — Fix bugs in NeMo transducer modified beam search. (#3589) (csukuangfj)
- 0f6e4c0 — fix: add bounds checks to prevent SIGSEGV in speaker diarization (#3563) (jasagiri)
- 8a588f8 — Replace dead link with archived link (#3576) (hms0411rt)
- d4aca0e — Fix x64 gpu onnxruntime url (#3578) (difey)
- c669159 — feat(sense-voice): per-stream use_itn override via Stream::GetOptionInt (#3565) (wangxiuwen)
- b059280 — Release v1.13.0 (#3561) (csukuangfj)
🔒Security observations
The sherpa-onnx Node.js package has a generally reasonable security posture for a machine learning framework. No critical vulnerabilities were identified in the visible configuration files. The main concerns are moderate: version placeholder handling during build/release, optional dependency validation at runtime, and standard DevOps best practices (license scanning, signature verification, security policy documentation). The codebase appears to follow good practices with proper licensing, GitHub organization structure, and multi-platform build configurations. Recommended improvements focus on runtime validation of native modules, release pipeline transparency, and establishing clear security reporting channels.
- Medium · Version placeholder in package.json — package.json (version, optionalDependencies). The package.json contains version placeholders (SHERPA_ONNX_VERSION) instead of actual semantic versions. This suggests a build-time substitution process that could lead to version-confusion attacks or incorrect dependency resolution if not properly managed during the release pipeline. Fix: ensure the build pipeline substitutes these placeholders with valid semantic versions before publishing to npm, and add verification steps to confirm placeholders are replaced in all published artifacts.
- Medium · Broad optionalDependencies usage — package.json (optionalDependencies). The package.json declares optionalDependencies for platform-specific binaries (darwin-arm64, linux-x64, win-x64, etc.). While this is a common pattern for native modules, failed installations of these optional dependencies could silently degrade to non-functional fallbacks without clear error reporting to users. Fix: implement explicit runtime checks that the correct native binary for the current platform was installed, provide clear error messages if platform binaries are missing, and consider pre/post-install scripts to validate binary integrity.
- Low · No license verification for dependencies — package.json, .github/scripts/. While the root package declares the Apache-2.0 license, there is no visible mechanism to verify that all optional dependencies comply with compatible licenses, especially when bundling multiple platform-specific binaries. Fix: add license scanning to CI/CD pipelines using tools like license-checker or FOSSA to ensure all dependencies have compatible licenses before release.
- Low · No integrity verification for the Node addon — .github/scripts/node-addon/. The Node.js addon files lack visible checksums or signature-verification mechanisms to ensure the distributed native modules haven't been tampered with. Fix: publish SRI (Subresource Integrity) hashes or GPG signatures for native modules and document verification steps in the README so users can validate downloaded binaries.
- Low · Missing security policy — repository root. No SECURITY.md or security policy file is evident in the repository structure, so it is unclear how security vulnerabilities should be reported privately. Fix: create a SECURITY.md defining responsible-disclosure practices, security contact information, and the process for reporting vulnerabilities.
- Low · GitHub workflows may require OIDC token hardening — .github/workflows/*.yaml. GitHub Actions workflows are present, and without reviewing their contents there is a standard risk of credential exposure if workflows mishandle secrets or use third-party actions without version pinning. Fix: audit all workflow files for (1) proper use of GitHub's OIDC token provider for cloud authentication, (2) version pinning of third-party actions (commit SHAs, not branches), (3) minimal permission scopes via `permissions` declarations, and (4) no use of PATs where OIDC tokens suffice.
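The action-pinning audit in the last item can be partially mechanized. A sketch assuming GNU grep/sed; `unpinned_actions` is an illustrative helper, and real workflows may need extra handling (reusable workflows, docker:// refs):

```shell
# List any `uses:` reference in the given workflow files that is not
# pinned to a 40-character commit SHA. Local actions (./path) are skipped,
# since they have no ref to pin.
unpinned_actions() {
  [ $# -gt 0 ] || return 0          # no files given: nothing to report
  grep -hoE 'uses:[[:space:]]*[^[:space:]#]+' "$@" 2>/dev/null \
    | sed -E 's/uses:[[:space:]]*//' \
    | grep -v '^\./' \
    | grep -vE '@[0-9a-f]{40}$' || true
}
```

Run as `unpinned_actions .github/workflows/*.yaml`; any line it prints is a tag- or branch-pinned action worth replacing with a commit SHA.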
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.