RepoPilot

trufflesecurity/trufflehog

Find, verify, and analyze leaked credentials

Overall: Healthy across the board (worst of 4 axes)

Use as dependency: Concerns

copyleft license (AGPL-3.0) — review compatibility

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 1d ago
  • 27+ active contributors
  • Distributed ownership (top contributor 17% of recent commits)
  • AGPL-3.0 licensed
  • CI configured
  • Tests present
  • AGPL-3.0 is copyleft — check downstream compatibility
What would change the summary?
  • Use as dependency: Concerns → Mixed if relicensed under MIT/Apache-2.0 (rare for established libs)

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/trufflesecurity/trufflehog)](https://repopilot.app/r/trufflesecurity/trufflehog)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/trufflesecurity/trufflehog on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: trufflesecurity/trufflehog

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/trufflesecurity/trufflehog shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 1d ago
  • 27+ active contributors
  • Distributed ownership (top contributor 17% of recent commits)
  • AGPL-3.0 licensed
  • CI configured
  • Tests present
  • ⚠ AGPL-3.0 is copyleft — check downstream compatibility

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live trufflesecurity/trufflehog repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/trufflesecurity/trufflehog.

What it runs against: a local clone of trufflesecurity/trufflehog — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in trufflesecurity/trufflehog | Confirms the artifact applies here, not a fork |
| 2 | License is still AGPL-3.0 | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>trufflesecurity/trufflehog</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of trufflesecurity/trufflehog. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/trufflesecurity/trufflehog.git
#   cd trufflehog
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of trufflesecurity/trufflehog and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "trufflesecurity/trufflehog(\.git)?\b" \
  && ok "origin remote is trufflesecurity/trufflehog" \
  || miss "origin remote is not trufflesecurity/trufflehog (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. The LICENSE file carries the full
#    license text, so match the license name as well as the SPDX identifier.
grep -qiE "GNU AFFERO GENERAL PUBLIC LICENSE|AGPL-3\.0" LICENSE 2>/dev/null \
  && ok "license is AGPL-3.0" \
  || miss "license drift — was AGPL-3.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical paths exist (pkg/detectors is a directory, hence test -d)
test -f "main.go" \
  && ok "main.go" \
  || miss "missing critical file: main.go"
test -f "pkg/analyzer/analyzers/analyzers.go" \
  && ok "pkg/analyzer/analyzers/analyzers.go" \
  || miss "missing critical file: pkg/analyzer/analyzers/analyzers.go"
test -f "go.mod" \
  && ok "go.mod" \
  || miss "missing critical file: go.mod"
test -d "pkg/detectors" \
  && ok "pkg/detectors" \
  || miss "missing critical directory: pkg/detectors"
test -f ".github/workflows/detector-tests.yml" \
  && ok ".github/workflows/detector-tests.yml" \
  || miss "missing critical file: .github/workflows/detector-tests.yml"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/trufflesecurity/trufflehog"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

TruffleHog is a Go-based secrets discovery and validation tool. It scans Git repositories, cloud storage, logs, and other data sources for leaked credentials (API keys, database passwords, private keys, etc.), classifies them by type (AWS, Stripe, Cloudflare, etc.), validates whether they're active by attempting authentication, and analyzes permissions and access for high-risk secret types; it detects over 800 secret types in all. Structurally it's a monolithic Go project: pkg/detectors/ contains the 800+ detector implementations (one per secret type), cmd/ likely houses CLI entry points, pkg/ holds the core scanning/validation libraries, examples/ ships sample YAML configurations (generic.yml, generic_with_filters.yml), and .github/workflows/ contains the GitHub Actions workflows, including detector-tests.yml for testing each detector type independently.

👥Who it's for

Security engineers, DevOps teams, and credential management professionals who need to discover and validate leaked secrets in source control, cloud infrastructure, and logs. Also used by enterprises running continuous monitoring via TruffleHog Enterprise on Git, Jira, Slack, Confluence, and Microsoft Teams.

🌱Maturity & risk

Production-ready and actively maintained. The project shows 11MB+ of Go code, comprehensive GitHub Actions workflows (test.yml, detector-tests.yml, codeql-analysis.yml, performance.yml, smoke.yml), a GoReleaser config for releases, and multiple detector implementations. The AGPL-3.0 license and extensive detector coverage indicate mature, serious development.

Moderate risk: the codebase has 800+ detector implementations scattered across pkg/detectors/ which creates maintenance burden, relies on numerous third-party credential validation APIs (AWS STS, Stripe, Cloudflare, etc.) creating external dependency risk, and the Go toolchain constraint at 1.25.5 with custom replace directives for archived libraries (coinbase/waas-client-library-go fork) suggests dependency management complexity. Last update recency and CI coverage mitigate this somewhat.

Active areas of work

Active development with detector expansion (800+ detectors indicate ongoing classification work), performance optimization (docs/iterative_decoding_performance.md suggests recent perf work), CI/CD refinement (multiple specialized workflows), and Renovate dependency management configured (.github/renovate.json). Pre-commit hooks and Cursor/Claude AI integration configs suggest developer experience improvements.

🚀Get running

git clone https://github.com/trufflesecurity/trufflehog.git && cd trufflehog && go mod download && make build (inferred from Makefile presence). For CLI usage: ./trufflehog git https://github.com/your-org/your-repo or ./trufflehog filesystem /path/to/scan.

Daily commands: make build to compile the binary, then ./dist/trufflehog [command] [flags]. Use make test to run the test suite. Detectors are tested via .github/workflows/detector-tests.yml, which likely invokes specific detector test files.

🗺️Map of the codebase

  • main.go — Entry point for TruffleHog CLI; defines command structure, source selection, and scanning orchestration—every contributor must understand the flow here
  • pkg/analyzer/analyzers/analyzers.go — Registry and factory for all credential analyzers; core abstraction for credential verification across 100+ services
  • go.mod — Dependency manifest with critical replacements (overseer, waas-client-library); tracks Go 1.25.0 requirement and major upstream libraries
  • pkg/detectors — Primary detector framework directory containing patterns for 800+ credential types; each new detector extends this architecture
  • .github/workflows/detector-tests.yml — CI pipeline for detector validation; enforces test coverage and prevents regressions in credential detection
  • hack/docs/Adding_Detectors_Internal.md — Canonical guide for adding new detectors; establishes patterns and conventions all contributors must follow
  • Dockerfile — Container image definition for scanning; dependency for deployment and CI/CD pipelines across all environments

🛠️How to make changes

Add a New Detector for a Credential Type

  1. Read the detector implementation guide to understand the pattern (hack/docs/Adding_Detectors_Internal.md)
  2. Use the code generator to create detector boilerplate (hack/generate/generate.go)
  3. Implement detector logic in pkg/detectors/{service}/detector.go with Validate() and Secret() methods (pkg/detectors)
  4. Add unit tests with real credential samples in pkg/detectors/{service}/detector_test.go (pkg/detectors)
  5. Register detector in central registry (auto-discovered if in correct directory structure) (pkg/detectors)
  6. Run detector tests via CI/CD pipeline to validate coverage (.github/workflows/detector-tests.yml)
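Steps 3–5 can be sketched with a toy, self-contained detector. The Result type, method names, and token format below are illustrative assumptions that only mirror the pattern; copy the real interface from an existing detector under pkg/detectors:

```go
package main

import (
	"fmt"
	"regexp"
)

// Result is a stand-in for the richer result type real detectors return.
type Result struct {
	Raw      string
	Verified bool
}

// ExampleDetector finds a made-up token shape: "example_" + 32 hex chars.
type ExampleDetector struct {
	pat *regexp.Regexp
}

func NewExampleDetector() *ExampleDetector {
	return &ExampleDetector{pat: regexp.MustCompile(`\bexample_[0-9a-f]{32}\b`)}
}

// Keywords gives the scanner cheap strings to pre-filter chunks with,
// so the regex only runs on chunks that could plausibly match.
func (d *ExampleDetector) Keywords() []string { return []string{"example_"} }

// FromData extracts candidate secrets from a chunk. A real detector would
// also attempt live verification against the service when verify is true.
func (d *ExampleDetector) FromData(verify bool, data []byte) []Result {
	var results []Result
	for _, m := range d.pat.FindAllString(string(data), -1) {
		results = append(results, Result{Raw: m})
	}
	return results
}

func main() {
	d := NewExampleDetector()
	chunk := []byte("token=example_0123456789abcdef0123456789abcdef")
	for _, r := range d.FromData(false, chunk) {
		fmt.Println("found:", r.Raw)
	}
}
```

Real detectors also declare a detector type for the registry (step 5), and the verification half typically issues an authenticated request and sets the verified flag accordingly.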

Add a New Analyzer for Permission/Verification Checks

  1. Create analyzer subdirectory in pkg/analyzer/analyzers/{service}/ (pkg/analyzer/analyzers)
  2. Implement Analyzer interface with Analyze() method following airtable/anthropic patterns (pkg/analyzer/analyzers/anthropic/anthropic.go)
  3. Define service capabilities and permissions in permissions.yaml (pkg/analyzer/analyzers/airtable/common/permissions.yaml)
  4. Register analyzer in pkg/analyzer/analyzers/analyzers.go factory (pkg/analyzer/analyzers/analyzers.go)
  5. Add test cases with expected_output.json validation (pkg/analyzer/analyzers/anthropic/anthropic_test.go)
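The analyzer side of the steps above can be sketched similarly. The function signature and permission model here are simplified assumptions; the authoritative shape is the interface registered in pkg/analyzer/analyzers/analyzers.go:

```go
package main

import "fmt"

// Permission is a simplified stand-in for what an analyzer reports
// (real analyzers load capability lists from a permissions.yaml).
type Permission struct {
	Resource string
	Access   string // "read" or "write"
}

// AnalyzerResult reports whether a credential is live and what it can do.
type AnalyzerResult struct {
	Valid       bool
	Permissions []Permission
}

// AnalyzeToken would normally call the service's API to probe the
// credential; this stub accepts one fixed token so it runs offline.
func AnalyzeToken(token string) AnalyzerResult {
	if token != "demo-valid-token" {
		return AnalyzerResult{Valid: false}
	}
	return AnalyzerResult{
		Valid: true,
		Permissions: []Permission{
			{Resource: "records", Access: "read"},
			{Resource: "webhooks", Access: "write"},
		},
	}
}

func main() {
	r := AnalyzeToken("demo-valid-token")
	fmt.Printf("valid=%v permissions=%d\n", r.Valid, len(r.Permissions))
}
```

The expected_output.json tests in step 5 pin down exactly this kind of structured result for a known test credential.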

Add Custom Detection Pattern with Generic Configuration

  1. Create custom pattern YAML following the generic example structure (examples/generic.yml)
  2. Define regex patterns, entropy thresholds, and keyword filters in YAML config (examples/generic_with_filters.yml)
  3. Run TruffleHog with --config flag to test the custom pattern (main.go)
  4. Validate pattern performance using snifftest utility (hack/snifftest/snifftest.sh)
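For steps 1–2, a custom pattern file broadly follows this shape (field names here are illustrative assumptions; treat examples/generic.yml and examples/generic_with_filters.yml in the repo as the authoritative schema):

```yaml
# Hypothetical custom-detector config. Verify field names against
# examples/generic.yml before use.
detectors:
  - name: acme-api-key
    keywords:
      - acme          # cheap pre-filter before the regex runs
    regex:
      apiKey: '\bacme_[0-9a-f]{32}\b'
    # Optional live verification of matches against an endpoint you control:
    # verify:
    #   - endpoint: https://internal.example.com/verify
```

Run it per step 3 with the --config flag and measure pattern performance with the snifftest utility from step 4.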

Integrate into CI/CD Pipeline (GitHub Actions)

  1. Reference TruffleHog in workflow or use as GitHub Action (action.yml)
  2. Configure scanning parameters in workflow (source type, detectors, output format) (.github/workflows/test.yml)
  3. For Docker execution, pull from Dockerfile and run with mounted repositories (Dockerfile)
  4. Review existing detector validation pipeline for test patterns (.github/workflows/detector-tests.yml)
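A minimal workflow along the lines of steps 1–2 might look like this sketch (inputs and version pins are assumptions; action.yml in the repo defines the real inputs):

```yaml
# Sketch only: confirm inputs against action.yml before relying on this.
name: secret-scan
on: [push, pull_request]
jobs:
  trufflehog:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0        # full history so commit ranges can be scanned
      - uses: trufflesecurity/trufflehog@main
        with:
          path: ./
```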

🔧Why these technologies

  • Go 1.25.0 — High-performance concurrent scanning with minimal memory footprint; enables fast processing of massive Git repositories and real-time credential detection at scale
  • Kingpin/v2 (CLI framework) — Structured command-line parsing with subcommands for different sources (git, github, gitlab, filesystem); declarative flag validation
  • AWS SDK v2 + GCP Cloud Storage/SecretManager — Native integration with major cloud credential stores and audit logging; enables enterprise deployment scenarios
  • Docker + GoReleaser — Multi-platform binary distribution and containerized scanning; CI/CD friendly for GitHub Actions and enterprise runners

⚖️Trade-offs already made

  • Extensible detector registry vs. monolithic pattern matching

    • Why: TruffleHog supports 800+ secret types; individual detectors allow service-specific logic (formatting, entropy tuning, vendor-specific APIs)
    • Consequence: Higher code duplication and maintenance burden per detector, but enables custom validation logic and keeps detection logic decoupled
  • Regex + entropy + keyword patterns vs. ML-based detection

    • Why: Deterministic patterns provide reproducible, auditable results; fast scanning without model inference latency
    • Consequence: May have higher false-positive rates on novel credential formats; requires manual pattern maintenance as services evolve their secret formats
  • Synchronous analyzer verification vs. async with caching

    • Why: Each credential discovery triggers real-time API validation to determine severity (valid/invalid/permission scope); simplifies result accuracy
    • Consequence: Scanning latency increases with network round-trips (~500ms-2s per verified credential); detectors are rate-limited by service API quotas
  • Custom YAML config for patterns vs. code-only detectors

    • Why: Generic detector allows users to define custom regex patterns without recompiling; balances flexibility with maintainability
    • Consequence: YAML config adds complexity for advanced users; regex-based patterns less powerful than compiled detector validation logic

🚫Non-goals (don't propose these)

  • Does not perform credential rotation or remediation—only detection and reporting
  • Not a real-time monitoring service—designed for batch scanning and CI/CD integration, not continuous background monitoring
  • Does not handle end-to-end encryption of results in transit—security depends on TLS and deployment environment
  • Not a SIEM or centralized logging platform—outputs to stdout, files, or webhooks; aggregation/correlation is caller responsibility
  • Does not automatically suppress false positives—user must curate patterns and detector configuration

🪤Traps & gotchas

Detector implementations require valid API credentials for the services they validate against (AWS STS, Stripe, etc.); CI/CD workflows likely use encrypted secrets. Custom Go replace directives for the archived Coinbase library and the jpillora/overseer fork mean dependency updates need careful testing. With 800+ detectors, merge conflicts in detector registration are likely. The AGPL-3.0 license has copyleft implications for closed-source forks. Scanning performance scales with detector count; docs/iterative_decoding_performance.md suggests optimization is ongoing.

🏗️Architecture

💡Concepts to learn

  • Credential Validation via Live API Calls — TruffleHog doesn't just find patterns — it confirms secrets are active by attempting actual authentication (AWS STS assume-role, Stripe token validation, etc.) reducing false positives and determining present danger
  • Multi-Source Scanning (Git, S3, Filesystem, Logs) — Unlike single-source tools, TruffleHog orchestrates scanning across heterogeneous data sources requiring pluggable source adapters — see cmd/ for implementation patterns
  • Aho-Corasick String Matching (Multi-Pattern Matching) — TruffleHog uses github.com/BobuSumisu/aho-corasick to efficiently match 800+ secret patterns simultaneously without sequential regex evaluation — critical for performance at scale
  • Secrets Classification & Mapping — Detectors map raw credential strings to their origin service (AWS Key → prod AWS account, Stripe Key → Stripe API) — enables targeted remediation and permissions analysis
  • Concurrent Source Scanning — Docs mention concurrency.md suggesting parallelized scanning across sources/detectors — critical for scanning large Git histories and cloud buckets efficiently
  • YAML-based Scanning Rules — examples/generic.yml and generic_with_filters.yml show declarative rule configuration — allows users to customize detection without code changes
  • Pluggable Detector Interface — 800+ detectors follow a common interface pattern in pkg/detectors/ enabling community contributions and independent testing of each secret type
Related projects

  • gitleaks/gitleaks — Alternative Git-focused secrets scanner using regex patterns; TruffleHog is more comprehensive with 800+ detectors and active validation
  • Yelp/detect-secrets — Python-based secrets detection with plugins; TruffleHog is faster (Go) and validates credentials rather than just detecting patterns
  • trufflesecurity/trufflehog-action — Official GitHub Action wrapper for TruffleHog enabling CI/CD integration for automated secrets scanning on every push
  • advisories/advisories — Complementary vulnerability database; TruffleHog focuses on leaked credentials while advisories track known software vulnerabilities
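The Aho-Corasick concept above is a two-stage match: a cheap multi-keyword pass gates an expensive regex pass. This stdlib-only sketch shows why the staging pays off; plain substring checks stand in for the automaton, and the sample patterns are illustrative:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// pattern pairs a cheap trigger keyword with a precise, expensive regex.
type pattern struct {
	keyword string
	re      *regexp.Regexp
}

// demoPatterns are illustrative shapes, not TruffleHog's actual rules.
var demoPatterns = []pattern{
	{"AKIA", regexp.MustCompile(`\bAKIA[0-9A-Z]{16}\b`)},
	{"sk_live_", regexp.MustCompile(`\bsk_live_[0-9a-zA-Z]{24}\b`)},
}

// scan runs each regex only on chunks that contain its keyword,
// so most chunks bail out before any regex work happens.
func scan(chunk string, patterns []pattern) []string {
	var hits []string
	for _, p := range patterns {
		if !strings.Contains(chunk, p.keyword) {
			continue // cheap rejection: this is where Aho-Corasick earns its keep
		}
		hits = append(hits, p.re.FindAllString(chunk, -1)...)
	}
	return hits
}

func main() {
	fmt.Println(scan("key=AKIAABCDEFGHIJKLMNOP", demoPatterns))
}
```

With 800+ patterns, an Aho-Corasick automaton answers "which keywords appear in this chunk?" in a single pass over the text, instead of one substring scan per pattern as above.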

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive integration tests for detector verification workflow

The repo has 800+ detectors in /pkg/detectors, but the .github/workflows/detector-tests.yml workflow exists without clear documentation of what's being tested. A new contributor could add structured integration tests that verify each detector's ability to find and verify credentials against real (or safely mocked) service APIs. This directly supports the core mission of credential verification and would catch regressions when detectors are updated.

  • [ ] Review existing detector test patterns in /pkg/detectors (identify common test structure)
  • [ ] Create a new test file /pkg/detectors/integration_test.go with a test suite that iterates through detector fixtures
  • [ ] Add test fixtures directory /pkg/detectors/testdata with sample credentials for 10-15 high-value detectors (AWS, GitHub, Stripe, etc)
  • [ ] Update .github/workflows/detector-tests.yml to document and enforce these integration tests
  • [ ] Document the process in /hack/docs/Adding_Detectors_Internal.md for future detector additions
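The first two checklist items could converge on a table-driven loop like this sketch (the fixture fields and sample patterns are assumptions, not the repo's actual test schema):

```go
package main

import (
	"fmt"
	"regexp"
)

// fixture holds one fake/sanitized credential per detector.
type fixture struct {
	detector string
	sample   string
	pattern  *regexp.Regexp
}

// demoFixtures are illustrative; a real suite would load these from
// a testdata directory, one entry per high-value detector.
var demoFixtures = []fixture{
	{"aws", "AKIAABCDEFGHIJKLMNOP", regexp.MustCompile(`AKIA[0-9A-Z]{16}`)},
	{"github", "ghp_0123456789abcdefghijklmnopqrstuvwxyz", regexp.MustCompile(`ghp_[0-9A-Za-z]{36}`)},
}

// runFixtures checks that each detector pattern finds its own sample,
// counting passes and failures so CI can fail on any regression.
func runFixtures(fixtures []fixture) (passed, failed int) {
	for _, f := range fixtures {
		if f.pattern.MatchString(f.sample) {
			passed++
		} else {
			failed++
		}
	}
	return
}

func main() {
	p, f := runFixtures(demoFixtures)
	fmt.Printf("passed=%d failed=%d\n", p, f)
}
```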

Implement performance benchmarking CI workflow for detector scanning

The repo has /hack/bench directory with benchmarking scripts and plotting utilities, but no automated CI workflow to track performance regressions. With Go's built-in benchmarking support, a new contributor could create a GitHub Action that runs benchmarks on each PR, compares against main branch, and comments on PRs with performance deltas. This directly supports the /docs/iterative_decoding_performance.md and prevents slow detectors from being merged.

  • [ ] Create .github/workflows/bench-regression.yml that runs 'go test -bench ./pkg/detectors -benchmem'
  • [ ] Parse benchmark output and compare against baseline from main branch using a tool like benchstat
  • [ ] Add step to comment on PRs with performance impact (e.g., '+5% slower on AWS detector')
  • [ ] Update /hack/bench/README.md with instructions for local benchmarking and CI integration
  • [ ] Add benchmark tags to detector test files that currently lack them
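The checklist could land as a workflow shaped roughly like this (workflow name, paths, and the benchstat invocation are assumptions to adapt):

```yaml
# Illustrative sketch of a bench-regression workflow.
name: bench-regression
on: pull_request
jobs:
  bench:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
      - name: Bench PR branch
        run: go test -bench . -benchmem -count=5 ./pkg/detectors/... > pr.txt
      - name: Bench base branch
        run: |
          git fetch origin main
          git checkout origin/main
          go test -bench . -benchmem -count=5 ./pkg/detectors/... > main.txt
      - name: Compare
        run: |
          go install golang.org/x/perf/cmd/benchstat@latest
          benchstat main.txt pr.txt
```

benchstat needs multiple samples per benchmark (hence -count=5) to report statistically meaningful deltas rather than noise.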

Add GitHub Action for automated pre-commit hook validation

The repo has .pre-commit-hooks.yaml and .pre-commit-config.yaml but contributors may skip local setup. A new contributor could create a GitHub Action that validates all commits follow pre-commit standards (linting, formatting, secret detection) before merge, and add a detailed PreCommit.md workflow guide. This ensures code quality consistency and reduces maintainer review burden.

  • [ ] Create .github/workflows/pre-commit-validate.yml that runs 'pre-commit run --all-files' on PRs
  • [ ] Configure auto-fix commits for formatting issues (using amend or bot account) or require manual fixes
  • [ ] Expand /PreCommit.md with section on CI validation, troubleshooting common pre-commit failures, and IDE integration
  • [ ] Add status badge to README.md linking to pre-commit validation workflow
  • [ ] Create /docs/DEVELOPER_SETUP.md documenting local pre-commit installation and .pre-commit-hooks.yaml custom hooks

🌿Good first issues

  • Add a new detector for [popular SaaS service] by creating pkg/detectors/[service]/[service].go and implementing validation against that service's API with proper test coverage — look at existing detectors like AWS or Stripe for patterns
  • Write missing integration tests for pkg/detectors/ implementations that lack test files — use the pattern in existing *_test.go files and add fixtures with realistic but fake or sanitized credentials
  • Expand docs/process_flow.md or docs/concurrency.md with concrete examples showing how scanning parallelization works across detectors — reference the relevant code paths in cmd/ and pkg/

Top contributors

Click to expand

📝Recent commits

Click to expand
  • ba0a524 — Add Pinecone API key detector (#4917) (dylanTruffle)
  • ab5dd03 — Make detector Result.SecretParts initialization stricter (#4948) (mcastorina)
  • 90ca685 — Add default Content-Type: application/json header for custom detector verification request (#4947) (MuneebUllahKhan222)
  • 5f47aad — chore: bump golangci-lint-action v7 → v9 (Node 24) (#4936) (bryanbeverly)
  • a4e3016 — ci: bump JS actions to Node 24 majors (checkout v6, setup-go v6, codeql v4, auth v3, goreleaser v7, mikepenz v6, jaxxsto (bryanbeverly)
  • 8a12e8e — [INS-320] Cloudinary detector (#4747) (MuneebUllahKhan222)
  • cf31c26 — Log non-critical chunk errors at V(2).Info instead of Error (#4928) (johnelliott)
  • 99dc7bd — Deduplicate concurrent credential verification requests via singleflight (#4314) (kashifkhan0771)
  • 3fc0c2a — Make checksecretparts required in CI (#4921) (mcastorina)
  • ad2f9a4 — Populate SecretParts on all detectors (#4919) (mcastorina)

🔒Security observations

TruffleHog demonstrates a strong security-focused foundation as a credential detection tool, with proper code organization and CI/CD infrastructure. However, several areas require attention: (1) the incomplete dependency file prevents full vulnerability assessment, (2) the Docker configuration uses unpinned images and installs unnecessary packages, (3) the SSRF vulnerability policy is potentially too permissive given the tool's access to sensitive credentials during scanning, and (4) explicit SBOM generation and continuous vulnerability scanning are missing from the pipelines.

  • High · Incomplete Dependency Lock File Analysis — go.mod. The go.mod file provided is truncated and incomplete. Line shows 'gi' which appears to be a cut-off dependency. This prevents thorough analysis of all project dependencies and their known vulnerabilities. The full dependency tree should be reviewable. Fix: Provide the complete go.mod file content. Run 'go mod graph' and 'govulncheck ./...' to identify vulnerable dependencies. Consider using automated tools like Dependabot or Snyk for continuous monitoring.
  • Medium · Docker Base Image Uses Unspecified Alpine Version — Dockerfile. The Dockerfile uses 'alpine:3.22' without a pinned digest (hash). While Alpine 3.22 is recent, unpinned tags can receive unexpected updates that may include breaking changes or security patches that weren't tested. Fix: Pin the Alpine image to a specific digest: FROM alpine:3.22@sha256:... to ensure reproducible and immutable builds. Also consider multi-stage optimization - remove unnecessary tools from the final image.
  • Medium · Permissive Package Installation in Docker — Dockerfile (RUN apk add --no-cache...). The Dockerfile installs multiple packages (bash, git, openssh-client, rpm2cpio, binutils, cpio) which increases attack surface. Some packages like openssh-client and git may not be necessary for a scanning tool's runtime. Fix: Minimize installed packages to only what's strictly required at runtime. Move development-only tools (like git) to the builder stage if needed. Document why each package is required.
  • Medium · Sensitive Security Policy on SSRF — SECURITY.md - Blind SSRF & Outbound Request Policy. The SECURITY.md file indicates TruffleHog has a permissive policy on blind SSRF vulnerabilities, treating them as 'hardening opportunities' rather than security issues. This could mask exploitation scenarios where an attacker induces outbound requests to leak credentials or trigger secondary vulnerabilities. Fix: Review and strengthen SSRF vulnerability assessment criteria. Even 'blind' SSRF can be exploited for credential exfiltration or to probe internal networks. Consider issuing security advisories for confirmed exploitation scenarios.
  • Low · Missing Security Headers and Best Practices in Entrypoint — entrypoint.sh. The entrypoint.sh script lacks visible security hardening. No error handling visible in the snippet, and potential for injection if environment variables are not properly quoted. Fix: Review entrypoint.sh for: proper error handling (set -euo pipefail), safe variable expansion (use quoted variables), and principle of least privilege in script execution. Add security-focused comments.
  • Low · Potential Dependency on Archived/Forked Libraries — go.mod - coinbase/waas-client-library-go replacement. The go.mod file includes a replacement for 'coinbase/waas-client-library-go' pointing to a fork, indicating the original library is archived. While the team maintains their fork, this introduces maintenance burden and potential security drift. Fix: Maintain regular security audits of forked dependencies. Monitor the original repository for security patches that should be backported. Consider contributing critical fixes upstream or evaluating alternative maintained libraries.
  • Low · Missing SBOM and Vulnerability Scanning in CI/CD — .github/workflows/. While GitHub workflows for testing exist, there's no explicit mention of SBOM generation or continuous vulnerability scanning in the workflow files listed. For a security-focused tool, this is a gap. Fix: Add workflow for: 1) SBOM generation using syft or trivy, 2) Dependency vulnerability scanning using 'govulncheck', 3) Container image scanning in release pipeline. Include SBOM in release artifacts.
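The entrypoint observation above can be made concrete with a hardened skeleton (the scan path and the final hand-off command are illustrative, not the repo's actual entrypoint.sh):

```shell
#!/usr/bin/env bash
# Abort on command failure, on unset variables, and on failures inside pipelines.
set -euo pipefail

main() {
  # Default keeps this sketch runnable; a real entrypoint might demand an arg.
  local target="${1:-/repo}"
  # Quote every expansion so paths with spaces survive word splitting.
  echo "scanning: $target"
  # exec /usr/bin/trufflehog filesystem "$target"   # hand off (illustrative path)
}

main "$@"
```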

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.