RepoPilot

benbjohnson/litestream

Streaming replication for SQLite.

Healthy

Healthy across the board

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 2d ago
  • 5 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • Concentrated ownership — top contributor handles 73% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: RepoPilot: Healthy

```markdown
[![RepoPilot: Healthy](https://repopilot.app/api/badge/benbjohnson/litestream)](https://repopilot.app/r/benbjohnson/litestream)
```

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/benbjohnson/litestream on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: benbjohnson/litestream

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/benbjohnson/litestream shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 2d ago
  • 5 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Concentrated ownership — top contributor handles 73% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live benbjohnson/litestream repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/benbjohnson/litestream.

What it runs against: a local clone of benbjohnson/litestream — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in benbjohnson/litestream | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 32 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>benbjohnson/litestream</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of benbjohnson/litestream. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/benbjohnson/litestream.git
#   cd litestream
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of benbjohnson/litestream and re-run."
  exit 2
fi

# 1. Repo identity (matches both https and ssh remote forms)
git remote get-url origin 2>/dev/null | grep -qE "benbjohnson/litestream(\.git)?$" \
  && ok "origin remote is benbjohnson/litestream" \
  || miss "origin remote is not benbjohnson/litestream (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Apache LICENSE files carry the name
# and version on separate lines, so check both.
(grep -qi "Apache License" LICENSE 2>/dev/null \
   && grep -qi "Version 2\.0" LICENSE 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in \
  "cmd/litestream/main.go" \
  "abs/replica_client.go" \
  "cmd/litestream/replicate.go" \
  "cmd/litestream/directory_watcher.go" \
  ".claude/settings.json"
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 32 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~2d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/benbjohnson/litestream"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Litestream is a standalone disaster-recovery tool that runs as a background daemon and continuously replicates SQLite database changes, incrementally, to remote storage (S3, Azure Blob, GCS, SFTP, WebDAV) or replica files. It communicates exclusively through SQLite's C API to guarantee zero corruption, and it enables point-in-time recovery without modifying your application code. The codebase is a monolithic Go binary (Makefile-driven build): cmd/litestream holds the CLI, internal/ packages hold the core replication logic (replicator.go, db.go), storage backends live in internal/storage/ (s3.go, azure.go, gcs.go, sftp.go, webdav.go), LTX format handling comes from the superfly/ltx dependency, and SQLite is bound via mattn/go-sqlite3 (cgo) and modernc.org/sqlite. Examples in _examples/library/ demonstrate programmatic usage.

👥Who it's for

Backend developers and DevOps engineers running SQLite in production who need disaster recovery, high availability, and point-in-time restore capabilities without the complexity of traditional database replication systems. Users range from single-instance deployments to Kubernetes clusters (evidenced by .github/workflows/integration-tests.yml and fly.io partnership).

🌱Maturity & risk

Beta status (per README badge) with active development: Go 1.25.0 toolchain, comprehensive CI/CD (commit, integration, nightly stability, upgrade test workflows), strong dependency maintenance (aws-sdk-go-v2, Azure SDK, GCS client all recent). Maintained by benbjohnson with regular contributions and community engagement noted in acknowledgements, indicating production-ready stability with ongoing improvements.

Moderate risk factors: single-maintainer repo (benbjohnson) though with community contributors, large C cgo footprint (708K lines) introducing platform-specific stability concerns, 40+ direct dependencies (cloud SDKs) requiring careful version management. WAL/LTX file format changes could break replica compatibility across versions—migration tooling exists (.claude/commands/analyze-ltx.md) but major version upgrades need planning.

Active areas of work

Active development on LTX (streaming transaction format) optimization (.claude/agents/ltx-compaction-specialist.md), storage backend expansion (aliyun OSS SDK imported), replica client improvements (.claude/agents/replica-client-developer.md), and performance optimization work visible in .claude/agents/performance-optimizer.md and .claude/commands/. Pre-release checklist and upgrade test workflows suggest ongoing version hardening.

🚀Get running

```bash
git clone https://github.com/benbjohnson/litestream.git
cd litestream
make
./litestream version
```

(Or use the Dockerfile for a containerized build; Go 1.25+ is required per go.mod.)

**Daily commands:**

```bash
make build          # Compile the litestream binary
./litestream serve  # Run the daemon (reads config from /etc/litestream.yml or -config flag)
./litestream replicate -db mydb.db -replica s3://bucket/mydb  # One-off replication
make test           # Run the test suite
make docker         # Build the Docker image
```

🗺️Map of the codebase

  • cmd/litestream/main.go — Entry point for the CLI tool; handles database replication orchestration and command routing.
  • abs/replica_client.go — Core abstraction for replica communication across storage backends (S3, Azure, GCS); every storage integration depends on this.
  • cmd/litestream/replicate.go — Primary replication loop logic; coordinates WAL tracking, snapshot creation, and incremental change propagation.
  • cmd/litestream/directory_watcher.go — Monitors filesystem for database changes; critical for triggering replication on database events.
  • .claude/settings.json — AI assistant configuration and context rules; establishes development conventions and agent responsibilities.
  • CONTRIBUTING.md — Contribution guidelines and project standards; required reading before submitting patches or pull requests.

🛠️How to make changes

Add a New Storage Backend

  1. Implement the replica.Client interface in a new package (e.g., pkg/s3client/client.go) with Read, Write, Delete, and List methods (abs/replica_client.go)
  2. Create a factory function that parses configuration and returns your client; register it in cmd/litestream/replicate.go (cmd/litestream/replicate.go)
  3. Add integration tests in pkg/yourbackend/client_test.go to verify Upload, Download, and Cleanup behavior (cmd/litestream-test/main.go)
  4. Update CONTRIBUTING.md with backend-specific setup instructions and required credentials (CONTRIBUTING.md)

Add a New CLI Subcommand

  1. Create a new file cmd/litestream/yourcommand.go with a struct and Run() method following the pattern in cmd/litestream/list.go (cmd/litestream/list.go)
  2. Register the command in cmd/litestream/main.go by adding a case in the subcommand switch statement (cmd/litestream/main.go)
  3. Write unit tests in cmd/litestream/yourcommand_test.go covering success, error, and edge cases (cmd/litestream/list_test.go)
  4. Update the command help text and document usage in the project README.md (README.md)

Enhance Replication Logic or WAL Tracking

  1. Modify the replication loop in cmd/litestream/replicate.go, focusing on the snapshot creation and change detection phases (cmd/litestream/replicate.go)
  2. Update directory_watcher.go if filesystem monitoring needs adjustment or new event types (cmd/litestream/directory_watcher.go)
  3. Add comprehensive tests in cmd/litestream/directory_watcher_test.go and cmd/litestream/replicate_test.go (cmd/litestream/directory_watcher_test.go)
  4. Run stress and integration tests via cmd/litestream-test/main.go to validate under load (cmd/litestream-test/main.go)

Debug or Fix Replication Issues

  1. Review .claude/commands/debug-wal.md and trace-replication.md for diagnostic guidance and log inspection (.claude/commands/debug-wal.md)
  2. Enable verbose logging in cmd/litestream/replicate.go and cmd/litestream/directory_watcher.go to trace replication state changes (cmd/litestream/replicate.go)
  3. Use cmd/litestream/info.go to inspect replica metadata and database status (cmd/litestream/info.go)
  4. Create a minimal reproduction test case in cmd/litestream-test/main.go and add to scripts/ for future regression testing (cmd/litestream-test/main.go)

🔧Why these technologies

  • SQLite VFS (Virtual File System) — Litestream hooks into SQLite's file I/O layer for non-invasive monitoring of database changes without corruption risk
  • AWS SDK v2 / Azure SDK / Google Cloud Storage SDK — Native cloud SDKs enable reliable, performant replication to multiple cloud backends with consistent authentication and retry logic
  • Go 1.25+ — Enables cross-platform compilation (Linux, macOS, Windows), minimal runtime dependencies, and efficient concurrent replication monitoring
  • File-based WAL (Write-Ahead Log) Tracking — Leverages SQLite's native journaling to capture incremental changes efficiently without parsing database internals

⚖️Trade-offs already made

  • Streaming replication instead of point-in-time backups

    • Why: Minimizes RPO (recovery point objective) and provides continuous protection against data loss
    • Consequence: Higher storage costs and network bandwidth; requires ongoing replication daemon vs. scheduled backup jobs
  • Multi-cloud backend support (S3, Azure, GCS)

    • Why: Avoids vendor lock-in and allows users to choose cost-optimal storage
    • Consequence: More code paths to test and maintain; potential inconsistencies in feature parity across backends
  • Daemon-based architecture (background process)

    • Why: Ensures replication happens automatically without application code changes or manual intervention
    • Consequence: Adds operational overhead (process management, monitoring) and potential resource contention on host
  • Filesystem-level monitoring via directory watcher

    • Why: Detects all database changes without coupling to application logic
    • Consequence: Subject to filesystem race conditions and may miss changes under extreme load; requires stress testing

🚫Non-goals (don't propose these)

  • Real-time synchronous replication with zero RPO (async by design)
  • Multi-master or bidirectional replication (one-way only)
  • Automatic failover / high-availability orchestration (backup tool, not a HA system)
  • Encryption at rest (relies on storage backend encryption)
  • Built-in authentication / access control (assumes external security)
  • GUI or web-based administration (CLI-only)

🪤Traps & gotchas

  • WAL mode must be enabled on the SQLite database (Litestream requires PRAGMA journal_mode=WAL).
  • SQLite checkpoint conflicts can stall replication if the application holds long transactions.
  • Cloud SDK credentials require environment variables (AWS_ACCESS_KEY_ID, AZURE_STORAGE_ACCOUNT_NAME, GOOGLE_APPLICATION_CREDENTIALS, etc.) or config-file auth.
  • LTX format version incompatibility between litestream versions can break replica reads—always test upgrade paths.
  • cgo builds require a C compiler and sqlite3 development headers.
  • Docker builds set CGO_ENABLED=1, but cross-compilation requires manual CGO setup.

🏗️Architecture

💡Concepts to learn

  • WAL (Write-Ahead Log) — Litestream's entire replication strategy hinges on reading SQLite's WAL frames; understanding frame format, checkpoints, and wrapping is essential for debugging replica lag or corruption
  • LTX (Litestream Transaction Format) — Wire format for streaming SQLite changes to replicas; defined in superfly/ltx dependency and parsed in internal/wal.go; version mismatches break replica compatibility
  • Incremental Backup / Snapshot + Delta Replication — Litestream replicates full database snapshots on first run, then only changed pages via LTX frames; understanding this hybrid reduces unnecessary storage costs and transfer overhead
  • Checkpoint (SQLite Context) — Litestream must coordinate with SQLite checkpoints to safely read consistent WAL segments; incorrect checkpoint handling causes replica consistency violations
  • VFS (Virtual File System) Layer — psanford/sqlite3vfs dependency provides abstraction for custom storage backends; understanding VFS enables zero-copy replica reads and custom recovery strategies
  • SafePoint (Litestream Concept) — Tracks the highest confirmed replicated transaction ID; determines point-in-time restore capabilities and is critical for replica failover ordering
  • Object Storage (S3-compatible API) — Litestream uses S3 as the primary cloud target; understanding S3 multipart upload, eventual consistency, and cost implications is crucial for production deployments at scale
  • superfly/ltx — Streaming transaction format library that Litestream depends on (v0.5.1); understanding LTX schema is critical for replica format changes
  • mattn/go-sqlite3 — Primary cgo SQLite binding used by Litestream for direct database access and WAL inspection; fork issues here directly impact Litestream stability
  • nats-io/nats.go — Vendored for potential future pub/sub replication notifications (nats.go v1.44.0 in go.mod); see if IPC/event-driven replica sync is planned
  • rqlite/rqlite — Alternative SQLite replication tool using Raft consensus; shares similar problem domain (distributed SQLite) but different architecture (consensus vs. log shipping)
  • canonical/dqlite — Embedded distributed SQLite using Raft; complements Litestream for high-availability use cases requiring quorum-based replication

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive integration tests for Alibaba OSS backend

The repo has storage backends for S3, Azure Blob, GCS, and WebDAV with corresponding test scripts (cmd/litestream-test/scripts/test-s3-*.sh), but there's no visible integration test coverage for the Alibaba OSS backend (aliyun/alibabacloud-oss-go-sdk-v2 is already a dependency). This is a gap for a production storage backend. A new contributor could add test scripts similar to the S3 retention and access point tests to ensure the Alibaba backend handles replication, retention policies, and edge cases correctly.

  • [ ] Review existing S3 integration tests in cmd/litestream-test/scripts/ to understand the test pattern
  • [ ] Create cmd/litestream-test/scripts/test-alioss-retention.sh following the S3 retention test pattern
  • [ ] Create cmd/litestream-test/scripts/test-alioss-scenarios.sh for basic replication scenarios
  • [ ] Add Alibaba OSS credentials/environment setup documentation in cmd/litestream-test/S3-RETENTION-TESTING.md
  • [ ] Ensure tests cover retention policies, multi-part uploads, and error recovery

Add LTX (Log Transaction) format validation and debugging tools

The repo has .claude/commands/analyze-ltx.md and .claude/commands/debug-ltx.md suggesting LTX is critical to Litestream's architecture, and litestream-test includes scripts for format isolation testing (test-format-isolation.sh). However, there's no visible CLI tool or comprehensive test suite for LTX frame validation and corruption detection. A new contributor could add unit tests in a new file (e.g., ltx/ltx_test.go or ltx/frame_validation_test.go) covering frame parsing, checksum verification, and malformed data handling—directly supporting the superfly/ltx v0.5.1 dependency.

  • [ ] Examine superfly/ltx v0.5.1 API and Litestream's ltx package (if it exists) to understand frame structure
  • [ ] Create comprehensive unit tests for LTX frame parsing, including valid frames, corrupted frames, and edge cases
  • [ ] Add property-based tests (using testing/quick or similar) for frame round-trip serialization
  • [ ] Document LTX format in CONTRIBUTING.md or create ltx/FORMAT.md with examples of valid/invalid frames
  • [ ] Add CLI subcommand 'litestream validate-ltx <file>' for offline LTX file validation (can reference .claude/commands/analyze-ltx.md)

Add SFTP backend integration tests and fix coverage gaps

The repo has github.com/pkg/sftp as a direct dependency, indicating SFTP is a supported backend, but there are no visible SFTP-specific integration test scripts in cmd/litestream-test/scripts/ (only S3, GCS, and Azure are clearly covered). This is a significant gap for a file-based replication backend. A new contributor could add SFTP integration tests following the existing pattern, including connection failure recovery, concurrent writes, and disk space limits.

  • [ ] Review abs/replica_client.go to understand SFTP backend implementation
  • [ ] Create cmd/litestream-test/scripts/test-sftp-scenarios.sh with basic replication, restore, and sync tests
  • [ ] Create cmd/litestream-test/scripts/test-sftp-resilience.sh covering connection drops, authentication failures, and disk full scenarios
  • [ ] Add SFTP test setup instructions (Docker container or mock SFTP server) to cmd/litestream-test/README.md
  • [ ] Ensure tests validate file permissions, directory creation, and cleanup behavior

🌿Good first issues

  • Add integration tests for Aliyun OSS backend: aliyun/alibabacloud-oss-go-sdk-v2 is imported but internal/storage/oss.go appears missing—implement per S3 pattern in internal/storage/s3.go and wire into Makefile test targets
  • Expand .claude/commands/debug-wal.md with worked example: document how to use internal/wal.go parse functions to inspect a real corrupted WAL and recover; parallels existing debug-ipc.md structure
  • Add Prometheus metrics for replica lag: prometheus/client_golang is vendored but replica lag histogram missing—add GaugeLag metric to internal/replica.go and expose via metrics endpoint per existing patterns in cmd/litestream/serve.go


📝Recent commits

  • fc050c9 — fix(cli): exit 0 for explicit help (#1233) (corylanou)
  • 9cc2f2d — fix(build): bump Go toolchain to 1.25.9 for stdlib vulns (#1235) (corylanou)
  • 6b8bb01 — refactor(db): return sync results explicitly (#1224) (corylanou)
  • 016c368 — fix(restore): fix v3 restore (#1229) (darkgnotic)
  • 4cb0f8c — fix(db): wrap raw errors with LTXError in paths reaching replica monitor (#1227) (corylanou)
  • 5780f4d — fix(replica): wrap missing LTX file error with LTXError in uploadLTXFile (#1225) (corylanou)
  • 1678a9b — fix(ci): dedupe nightly stability issues (#1217) (corylanou)
  • ad55ec1 — feat(ci): add nightly stability CI, LTX behavioral tests, and release gating (#1195) (corylanou)
  • d65fa36 — fix(cmd): add helpful hint when 'litestream start' is run without args (#1208) (corylanou)
  • 13ef323 — feat(logging): add tint pretty handler, optional source info, and path cleaning (#1204) (corylanou)

🔒Security observations

  • High · SQLite Extension Loading Disabled in Production Build — Dockerfile - VFS extension build section. The Dockerfile builds the main litestream binary with 'sqlite_omit_load_extension' tag, which is good. However, the separate VFS loadable extension (litestream-vfs.so) is built without this restriction. If this extension is loaded from untrusted sources, it could execute arbitrary code since SQLite extensions are essentially shared libraries with no sandboxing. Fix: Ensure VFS extension files are only loaded from trusted, signed sources. Implement runtime verification of extension integrity. Document security implications in deployment guides. Consider signing the extension with a certificate and validating signatures at load time.
  • High · Incomplete Dockerfile Multi-Stage Build — Dockerfile - final gcc compilation command. The Dockerfile appears truncated at the final gcc command (missing '-lpthread' and final output redirection). This incomplete build configuration could result in a broken binary or security tooling being bypassed. The build line ends mid-argument which suggests the actual Dockerfile may have issues. Fix: Complete and verify the entire Dockerfile. Ensure all build steps complete successfully. Add build validation tests to CI/CD pipeline. Use BuildKit with '--progress=plain' to catch build failures.
  • High · Outdated Golang Version in go.mod — go.mod - toolchain version specification. The go.mod specifies 'go 1.25.0' with toolchain 'go1.25.9'. Go 1.25 does not exist (current stable is 1.23.x). This appears to be a version specification error that could cause build failures or use of unstable/non-existent toolchain versions, potentially missing critical security patches. Fix: Update to a stable, current Go version (1.23.x or later). Use 'go get -u' to update dependencies to versions compatible with the specified Go version. Add Go version validation to CI/CD pipeline.
  • Medium · Dependency on Experimental/Pre-Release Package — go.mod - sqlite3vfs dependency. The project depends on 'github.com/psanford/sqlite3vfs v0.0.0-20251127171934-4e34e03a991a' which uses a date-based version (v0.0.0) indicating a pre-release or unversioned dependency. This could introduce unstable or untested code paths. Fix: Upgrade to a stable tagged version if available. If v0.0.0 is necessary, document the reason and implement enhanced testing for this dependency. Monitor the upstream repository for security updates.
  • Medium · Multiple Unverified Third-Party Storage Backend Dependencies — go.mod - AWS SDK, Azure SDK, Google Cloud Storage, Aliyun OSS dependencies. The project integrates multiple cloud storage backends (S3, Azure Blob, Google Cloud Storage, Aliyun OSS) without visible verification of their security configurations. Cloud credentials could be mishandled during replication. Fix: Implement credential management best practices: use IAM roles/managed identities instead of API keys, enforce least privilege permissions, audit credential usage, implement secret rotation, add comprehensive logging of credential-related operations.
  • Medium · Incomplete Security Policy — SECURITY.md. SECURITY.md lacks specific details: no PGP key for encrypted reports, no explicit timeline for security patches, no vulnerability disclosure timeline, no information about security update notifications, and email address(es) for reporting are not specified. Fix: Expand security policy with: specific email address(es) for security reports, PGP public key for encrypted submissions, expected response timelines (e.g., 24 hours for acknowledgment), patch release timeline, security advisory notification process.
  • Medium · CGO_ENABLED=1 in VFS Build Could Introduce C Vulnerabilities — Dockerfile - VFS extension build with CGO_ENABLED=1. The VFS extension is built with CGO_ENABLED=1, allowing C code compilation. C libraries can have memory safety vulnerabilities. The src/litestream-vfs.c file is referenced but not visible in the file structure provided. Fix: Perform security audits of the C extension code. Enable compiler security flags: '-fstack-protector-strong', '-D_FORTIFY_SOURCE=2'. Use static analysis tools (clang-analyzer, cppcheck) on C code. Consider fuzzing the V

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
