danny-avila/LibreChat
Enhanced ChatGPT Clone: Features Agents, MCP, DeepSeek, Anthropic, AWS, OpenAI, Responses API, Azure, Groq, o1, GPT-5, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, Code Interpreter, langchain, DALL-E-3, OpenAPI Actions, Functions, Secure Multi-User Auth, Presets, open-source for self-hosting. Active.
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit today
- ✓ 15 active contributors
- ✓ MIT licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Single-maintainer risk — top contributor 83% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
Badge link: https://repopilot.app/r/danny-avila/librechat. Paste it at the top of your README.md; it renders inline like a shields.io badge.
Social card preview (1200×630): this card auto-renders when someone shares https://repopilot.app/r/danny-avila/librechat on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: danny-avila/LibreChat
Generated by RepoPilot · 2026-05-09 · Source
🤖 Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/danny-avila/LibreChat shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯 Verdict
GO — Healthy across all four use cases
- Last commit today
- 15 active contributors
- MIT licensed
- CI configured
- Tests present
- ⚠ Single-maintainer risk — top contributor 83% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅ Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live danny-avila/LibreChat
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/danny-avila/LibreChat.
What it runs against: a local clone of danny-avila/LibreChat — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in danny-avila/LibreChat | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of danny-avila/LibreChat. If you don't
# have one yet, run these first:
#
# git clone https://github.com/danny-avila/LibreChat.git
# cd LibreChat
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of danny-avila/LibreChat and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "danny-avila/LibreChat(\.git)?\b" \
  && ok "origin remote is danny-avila/LibreChat" \
  || miss "origin remote is not danny-avila/LibreChat (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
test -f "api/app/clients/BaseClient.js" \
  && ok "api/app/clients/BaseClient.js" \
  || miss "missing critical file: api/app/clients/BaseClient.js"
test -f "api/app/clients/prompts/formatMessages.js" \
  && ok "api/app/clients/prompts/formatMessages.js" \
  || miss "missing critical file: api/app/clients/prompts/formatMessages.js"
test -f ".env.example" \
  && ok ".env.example" \
  || miss "missing critical file: .env.example"
test -f ".github/workflows/build.yml" \
  && ok ".github/workflows/build.yml" \
  || miss "missing critical file: .github/workflows/build.yml"
test -f "Dockerfile" \
  && ok "Dockerfile" \
  || miss "missing critical file: Dockerfile"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/danny-avila/LibreChat"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡ TL;DR
LibreChat is a self-hosted, open-source ChatGPT clone that provides a unified interface to dozens of AI models (OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, Groq, DeepSeek, Mistral, etc.), with features like Agents, Model Context Protocol (MCP) support, Code Interpreter, message search, and multi-user authentication. It solves the problems of vendor lock-in and data privacy by letting organizations deploy a single UI that can route to any LLM endpoint they control or subscribe to.

Monorepo structure: client/ contains the React frontend; the Express.js backend lives under api/, organized into subdirectories for auth, routes, models, and controllers; shared schemas live in @librechat/data-schemas and agent logic in @librechat/agents; deployment configs sit in .devcontainer/, .do/ (DigitalOcean), and multiple docker-compose.yml files for self-hosting. The backend uses module aliases (~) for imports.
👥 Who it's for
Self-hosted AI enthusiasts, enterprises needing multi-model LLM access without vendor lock-in, developers building custom AI applications who want a production-grade chat UI foundation, and organizations with regulatory requirements to keep conversation data on-premise or in specific cloud regions.
🌱 Maturity & risk
Actively maintained, production-ready project: v0.8.5 released, an extensive TypeScript codebase (14M+ lines), a comprehensive CI/CD pipeline with 40+ GitHub Actions workflows, hundreds of test files, and frequent deployment workflows (deploy-dev.yml, deploy.yml, gitnexus-deploy.yml). All strong signals of maturity and active development.
The large dependency surface (50+ direct dependencies: @aws-sdk/, @azure/, @anthropic-ai/, various LLM providers) creates a maintenance burden and potential security exposure; monorepo complexity across the frontend, backend, and @librechat/* packages increases the risk of breaking changes; and no lock file is visible in the provided manifest, which suggests floating dependency versions. The project has a single organizational maintainer (danny-avila) but appears healthy based on its CI automation.
Active areas of work
Active multi-provider AI integration work: MCP (Model Context Protocol) support, Code Interpreter enhancements across Python/Node.js/Go/C++/Java/PHP/Rust, custom endpoint configuration, and an embedding generation pipeline (generate_embeddings.yml workflow). Multiple deployment paths are maintained: Railway, Zeabur, and Sealos templates, plus Helm charts. Localization and i18n sync are active (locize-i18n-sync.yml, i18n-unused-keys.yml). Recent focus on accessibility (a11y.yml workflow) and image tagging strategies.
🚀 Get running
git clone https://github.com/danny-avila/LibreChat.git
cd LibreChat
npm install
npm run server-dev
Or with Docker Compose: docker-compose up using the provided docker-compose.yml in root. See .env.example for required environment variables (OpenAI keys, database URL, etc.).
Daily commands:
From root directory: npm run server-dev (starts backend in development mode per package.json script) and npm run client-dev (inferred from structure). Full setup via Docker: docker-compose up reads root docker-compose.yml. Dev container provided: open in VS Code with Remote Containers extension for .devcontainer/devcontainer.json setup.
🗺️ Map of the codebase
- api/app/clients/BaseClient.js — Core abstraction for all LLM provider integrations; every new AI model integration extends this class and must follow its contract
- api/app/clients/prompts/formatMessages.js — Central message formatting pipeline that normalizes conversation history across all AI providers; critical for prompt injection safety and context handling
- .env.example — Defines all required environment variables for provider keys, database connections, and feature flags; essential for deployment configuration
- .github/workflows/build.yml — Primary CI/CD pipeline orchestrating tests, Docker builds, and multi-platform image deployment across staging and production
- Dockerfile — Single-stage production container build; entry point for self-hosted deployments and all containerized environments
- api/app/clients/tools/index.js — Tool registry and factory for agent capabilities (image generation, web search, weather, etc.); gate for all structured tool access
- api/app/clients/prompts/createVisionPrompt.js — Multimodal vision handling for image analysis; coordinates vision provider capabilities across OpenAI, Google, and other models
🛠️ How to make changes
Add a New LLM Provider Integration
1. Create a new client class in api/app/clients/ that extends BaseClient (e.g., MyProviderClient.js) (api/app/clients/BaseClient.js)
2. Implement the required methods: constructor, makeRequest, streamRequest, and provider-specific header/auth logic (api/app/clients/MyProviderClient.js)
3. Register the new client in the api/app/clients/index.js exports (api/app/clients/index.js)
4. Add environment variables for API keys/endpoints to .env.example (.env.example)
5. Create a test file at api/app/clients/specs/MyProvider.test.js following existing test patterns (api/app/clients/specs/BaseClient.test.js)
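Under the steps above, a provider client might be sketched like this. Everything here is illustrative: MyProviderClient, its base URL, and the stubbed BaseClient are stand-ins, not LibreChat's actual classes (the real BaseClient in api/app/clients/BaseClient.js carries token counting, conversation state, and streaming).

```javascript
// Minimal stand-in for api/app/clients/BaseClient.js -- the real class
// does far more (token counting, conversation state, streaming).
class BaseClient {
  constructor(apiKey, options = {}) {
    this.apiKey = apiKey;
    this.options = options;
  }
  // Subclasses are expected to supply provider-specific request logic.
  buildHeaders() {
    throw new Error('buildHeaders() must be implemented by a subclass');
  }
}

// Hypothetical provider integration following the pattern described above.
class MyProviderClient extends BaseClient {
  constructor(apiKey, options = {}) {
    super(apiKey, options);
    // MY_PROVIDER_BASE_URL would be documented in .env.example (step 4).
    this.baseURL = options.baseURL || 'https://api.myprovider.example/v1';
  }

  buildHeaders() {
    return {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${this.apiKey}`,
    };
  }

  // Transform LibreChat-style {role, content} messages into the payload
  // shape this (fictional) provider expects.
  buildPayload(messages, model = 'my-model-small') {
    return {
      model,
      messages: messages.map(({ role, content }) => ({ role, content })),
    };
  }
}
```

A real integration would also wire makeRequest/streamRequest to the provider's HTTP API; the skeleton above only shows the class shape the checklist describes.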
Add a New Agent Tool (Structured Function)
1. Create a tool file at api/app/clients/tools/structured/MyTool.js with a class extending the Tool base and an execute() method (api/app/clients/tools/structured/DALLE3.js)
2. Define the tool schema with input parameters, descriptions, and authentication requirements (api/app/clients/tools/structured/Wolfram.js)
3. Register the tool in api/app/clients/tools/manifest.json with its name, description, and auth config (api/app/clients/tools/manifest.json)
4. Add the tool export to the api/app/clients/tools/index.js factory loader (api/app/clients/tools/index.js)
5. Create an integration test at api/app/clients/tools/structured/specs/MyTool.spec.js (api/app/clients/tools/structured/specs/GoogleSearch.spec.js)
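A structured tool following these steps might look like the sketch below. The Tool base here is a simplified stand-in (the real tools build on LangChain's Tool abstraction and its _call convention), and MyTool, its name, and its MY_TOOL_API_KEY field are hypothetical.

```javascript
// Simplified stand-in for the structured-tool base class; the real tools in
// api/app/clients/tools/structured/ build on LangChain's Tool abstraction.
class Tool {
  constructor({ name, description }) {
    this.name = name;
    this.description = description;
  }
}

// Hypothetical tool following the pattern above. The fields mirror what a
// manifest.json entry typically declares: name, description, auth config.
class MyTool extends Tool {
  constructor(fields = {}) {
    super({
      name: 'my_tool',
      description: 'Echoes its input uppercased -- a placeholder capability.',
    });
    this.apiKey = fields.MY_TOOL_API_KEY; // auth requirement, if any
  }

  // Tools expose an async entry point that receives the model's structured
  // arguments and returns a string for the agent loop to consume.
  async _call({ input }) {
    if (typeof input !== 'string') {
      throw new Error('input must be a string');
    }
    return input.toUpperCase();
  }
}
```

The agent loop treats the returned string as the tool observation, which is why even API-backed tools (image generation, search) ultimately serialize results to text.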
Customize Message Formatting for a New Provider
1. Review the provider's API documentation for its message schema (role, content, tool_use, etc.) (api/app/clients/prompts/formatMessages.js)
2. Extend or override the formatMessages() logic in your client class to transform role/content pairs into the provider's format (api/app/clients/MyProviderClient.js)
3. If the provider requires vision/image handling, implement a custom vision prompt in MyProviderClient, using createVisionPrompt as a reference (api/app/clients/prompts/createVisionPrompt.js)
4. Add test cases in api/app/clients/specs/MyProvider.test.js validating the message format transformation (api/app/clients/specs/BaseClient.test.js)
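The transformation in step 2 can be sketched as a pure function. The target schema here (a single system string plus alternating turns) is a made-up provider format chosen for illustration; it is not formatMessages.js itself.

```javascript
// Illustrative transform in the spirit of formatMessages.js: normalize
// internal {role, content} messages into a hypothetical provider schema
// that wants one `system` string plus user/assistant turns.
function formatMessagesForProvider(messages) {
  const system = [];
  const turns = [];
  for (const { role, content } of messages) {
    if (role === 'system') {
      system.push(content);
    } else {
      // Collapse unknown roles to 'user' so the payload stays valid.
      const mapped = role === 'assistant' ? 'assistant' : 'user';
      turns.push({ role: mapped, content: String(content) });
    }
  }
  return { system: system.join('\n'), messages: turns };
}
```

Keeping the transform a pure function of the message array is what makes it easy to unit-test (step 4) without touching the network.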
Configure Multi-Model Deployment (Self-Hosted)
1. Use .devcontainer/docker-compose.yml as a template, or create a docker-compose.yml for a production stack with MongoDB, Redis, and Milvus services (.devcontainer/docker-compose.yml)
2. Copy .env.example to .env and configure provider API keys (OpenAI, Anthropic, etc.) and local model paths for Ollama (.env.example)
3. Select the appropriate Dockerfile (standard vs. multi-stage) and build with docker build -t librechat:latest . (Dockerfile)
4. Deploy with docker-compose or Kubernetes; configure the MONGODB_URI, REDIS_URL, and AGENTS_ENABLED environment variables at runtime (Dockerfile)
🔧 Why these technologies
- Node.js + Express (Backend API) — Handles concurrent WebSocket/SSE streams for real-time LLM responses; non-blocking I/O ideal for streaming chat inference
- MongoDB — Flexible document schema for storing conversations, user settings, and multi-provider metadata; essential for multi-user chat persistence
- Redis — Session caching, conversation context caching, and rate-limiting; critical for sub-100ms message history retrieval
- Milvus (Vector DB) — Semantic search and RAG over conversation history; enables context-aware retrieval for agents
- Docker + Docker Compose — Simplifies multi-service deployment (API, MongoDB, Redis, Milvus) with environment isolation for self-hosting
- Jest + Playwright — Unit testing for provider integrations and end-to-end testing for UI flows; validates provider compatibility across updates
- LangChain (conditional) — Agent orchestration framework for tool-use chains; used for advanced reasoning tasks across heterogeneous tools
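The Redis/Keyv layer above is, conceptually, a TTL key-value cache. This dependency-free, in-memory sketch shows the access pattern only; in LibreChat the same get/set calls would go to Redis so multiple backend instances can share session state.

```javascript
// In-memory sketch of the TTL cache pattern behind the Redis/Keyv layer.
// Not LibreChat code -- it only illustrates why caching is pluggable.
class TTLCache {
  constructor() {
    this.store = new Map();
  }

  // Store a value with a time-to-live in milliseconds.
  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  // Return the value, or undefined if missing or expired (lazy eviction).
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }
}
```

Swapping the Map for a Redis client keeps the same get/set interface, which is why session and context caching can move from a single process to a shared store without changing callers.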
🪤 Traps & gotchas
1. The .env file is REQUIRED and not in the repo — copy it from .env.example and populate it with actual API keys (OpenAI, Anthropic, AWS, etc.) or the app will fail silently.
2. A MongoDB connection is required (MONGO_URI env var) — the dev setup assumes MongoDB running locally, or docker-compose handles it.
3. Redis is optional but recommended (KEYV_STORE for the session cache) — running without it may cause session issues.
4. Module aliases (~/*) are configured in package.json and tsconfig — the build will fail if they are removed.
5. npm run server-dev only works from the root directory per the script comment, not from the server/ subdirectory.
6. Deployment workflows reference external services (Railway, Zeabur, Sealos, DigitalOcean) with template-specific config in .do/ and .devcontainer/.
🏗️ Architecture
💡 Concepts to learn
- Model Context Protocol (MCP) — LibreChat uses MCP (via @modelcontextprotocol/sdk) to standardize how agents discover and invoke external tools/APIs—understanding MCP schemas and lifecycle is essential for adding new agent capabilities
- LLM Router / Provider Abstraction — Core architecture pattern where all requests route through a provider abstraction layer (in server/src/services/) to support swappable backends (OpenAI, Anthropic, AWS, etc.) without duplicating chat logic
- Multi-User Session Management with Redis — LibreChat uses Redis (connect-redis) and Keyv for distributed session storage across multiple backend instances; critical for scalability and understanding why MongoDB alone is insufficient
- Monorepo Package Structure (@librechat/*) — Code is split across @librechat/agents, @librechat/api, @librechat/data-schemas—understanding how packages depend on each other and share types prevents circular dependencies and versioning conflicts
- Server-Sent Events (SSE) for Streaming Responses — Chat responses stream from LLMs via SSE (not WebSocket), allowing real-time token-by-token display in the UI; critical for understanding the request/response flow in routes and middleware
- SAML Authentication / Multi-Auth Strategy — LibreChat supports SAML (via @node-saml/passport-saml) alongside JWT for enterprise SSO integration; understanding Passport.js strategies is needed to add or modify authentication providers
- Code Interpreter Sandboxing — LibreChat executes code in isolated Python/Node.js/Go/C++/Java/Rust runtimes; the security model and API for code execution is non-trivial and requires understanding execution context isolation
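The SSE concept above can be made concrete with a small parser. This is a simplified sketch of the client side of SSE: it ignores event: and id: fields and does not buffer partial frames across network chunks, which real SSE handling must do.

```javascript
// Server-Sent Events arrive as text frames separated by blank lines, each
// carrying one or more `data:` payload lines. This parser turns a raw SSE
// chunk into ordered data events -- the mechanism behind the
// token-by-token rendering described above. (Simplified: no buffering of
// partial frames, and event:/id: fields are ignored.)
function parseSSE(chunk) {
  const events = [];
  for (const frame of chunk.split('\n\n')) {
    const dataLines = frame
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice(5).trim());
    if (dataLines.length) events.push(dataLines.join('\n'));
  }
  return events;
}
```

Because each token arrives as its own data event, the UI can append text as it streams rather than waiting for the whole completion, which is the key difference from a plain request/response flow.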
🔗 Related repos
- lm-sys/FastChat — Similar multi-model LLM chat interface, but focused on benchmarking and distributed serving rather than enterprise self-hosting
- getumbrel/umbrel — Self-hosting platform that could package LibreChat as an app for non-technical users managing multiple LLM services at home
- openpilot/openpilot — Demonstrates MCP (Model Context Protocol) integration patterns that LibreChat adopted for tool-use and agentic capabilities
- anthropics/anthropic-sdk-python — Official Anthropic SDK that LibreChat integrates with; understanding its API design helps contribute Claude/Vertex AI features
- langchain-ai/langchainjs — LangChain integration is embedded in LibreChat's agent system; understanding LangChain's tool/agent abstractions clarifies the agent architecture
🪄 PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive integration tests for MCP (Model Context Protocol) client implementations
The repo prominently features MCP support in its description and has @modelcontextprotocol/sdk as a dependency, but there's no visible test coverage for MCP-related client interactions. Given the complexity of protocol handling and the presence of BaseClient.js and other client implementations, integration tests would prevent regressions and improve confidence in MCP feature reliability.
- [ ] Create api/app/clients/tests/MCP.integration.test.js
- [ ] Add tests for MCP server connection, message routing, and error handling
- [ ] Reference existing BaseClient.js patterns and create test fixtures for mock MCP responses
- [ ] Integrate tests into .github/workflows/backend-review.yml (note: backend-review.yml exists but likely needs MCP coverage)
- [ ] Document MCP testing patterns in a new api/app/clients/README.md
Implement workflow for E2E tests covering multi-provider model switching (OpenAI, Anthropic, DeepSeek, etc.)
LibreChat supports extensive AI model switching across providers (OpenAI, Azure, AWS Bedrock, Anthropic, DeepSeek, Groq, etc.) as a core feature. The existing .github/workflows/ folder shows playwright.yml and various test workflows, but no dedicated E2E workflow validates that message routing and responses work correctly across all providers. This prevents silent failures when APIs change.
- [ ] Create .github/workflows/e2e-provider-switching.yml with Playwright tests
- [ ] Add client/tests/e2e/provider-switching.spec.ts testing model selection and response handling
- [ ] Reference existing .github/workflows/client.yml structure for CI integration
- [ ] Configure workflow to test at least 3 major providers (OpenAI, Anthropic, DeepSeek) with mock/test API keys
- [ ] Add parallel job matrix in workflow to avoid timeout issues with multiple provider tests
Refactor and add unit tests for prompt engineering modules in api/app/clients/prompts/
The prompts directory contains critical files like createVisionPrompt.js, artifacts.js, and formatAgentMessages.spec.js, but the presence of only one .spec.js file suggests incomplete test coverage. These prompt templates directly affect AI output quality and cost efficiency. Comprehensive unit tests are essential for safe refactoring and preventing prompt injection vulnerabilities.
- [ ] Create api/app/clients/prompts/tests/ directory with unit tests for each prompt builder
- [ ] Write api/app/clients/prompts/tests/createVisionPrompt.test.js covering image token counting and vision-specific formatting
- [ ] Write api/app/clients/prompts/tests/artifacts.test.js validating artifact JSON structure and security boundaries
- [ ] Add api/app/clients/prompts/tests/createContextHandlers.test.js with context injection scenarios
- [ ] Extend existing formatAgentMessages.spec.js with edge cases (empty messages, malformed agent responses)
- [ ] Add test coverage reporting to .github/workflows/backend-review.yml for prompts directory
🌿 Good first issues
- Add unit tests for the server/src/services/ LLM provider adapters (OpenRouter, Mistral, Groq), which likely lack coverage given the test suite's focus — pick one provider and write integration tests that mock API responses
- Document the multi-provider configuration flow: write a guide explaining how to add a custom OpenAI-compatible endpoint, including modifying schemas in packages/@librechat/data-schemas/ and routing logic in server/src/routes/, with a working example
- Implement missing i18n keys flagged by the i18n-unused-keys.yml workflow — review its output and add translations for new features (Code Interpreter, Agents, MCP) in locize or the translation files
⭐ Top contributors
- @danny-avila — 83 commits
- @berry-13 — 2 commits
- @github-actions[bot] — 2 commits
- @ethanlaj — 2 commits
- @usnavy13 — 1 commit
📝 Recent commits
- 9441563 — 🛡️ refactor: Scope allowedAddresses By Port (#13022) (danny-avila)
- b39bf83 — 📦 chore: Update @librechat/agents to v3.1.79 (#13000) (danny-avila)
- 4238dd4 — 🪪 fix: Preserve OIDC Logout ID Token Hint (#12999) (danny-avila)
- 8f92ec0 — 🧭 fix: Navigate Signed CDN Downloads (#12998) (danny-avila)
- 40a05bb — 📦 chore: npm audit fixes and Mongoose 8.23 TypeScript follow-ups (#12996) (danny-avila)
- 1bc2692 — 🌥️ feat: Add Optional Region-aware S3/CloudFront Storage Keys (#12987) (danny-avila)
- 65b63b8 — 🪟 refactor: Improve Subagent Dialog Prompt Rendering (#12982) (danny-avila)
- ddf5879 — ⏱️ fix: Align Auto-Refill Next Date (#12980) (danny-avila)
- 5efbcb8 — 🌐 fix: Percent-encode X-File-Metadata header for Unicode filenames (#12983) (usnavy13)
- 5c338a4 — 🛂 fix: Harden Agent File Preview Access (#12981) (danny-avila)
🔒 Security observations
- High · Potential Unvalidated External Dependencies — api/package.json, dependencies section. Multiple AI provider SDKs are included (@anthropic-ai/vertex-sdk, @aws-sdk/, @azure/, @google/genai) without explicit version pinning; these could pull in vulnerable transitive dependencies, and caret (^) ranges allow automatic minor/patch updates that may introduce vulnerabilities. Fix: pin critical security-sensitive dependencies (the AI provider SDKs) to exact versions, and add automated dependency vulnerability scanning (Dependabot or Snyk) to the CI/CD pipeline.
- High · Missing Environment Variable Validation — .env.example, api/app configuration. The .env.example file shows critical database configuration (MONGO_URI) and server settings, but no input validation, sanitization, or schema validation of environment variables is visible before use. Fix: validate environment variables at application startup (e.g., with joi or zod): check that required variables exist and match expected formats, and fail fast if validation fails.
- High · Insecure Default Configuration in Dockerfile — Dockerfile, ARG NODE_MAX_OLD_SPACE_SIZE and USER directives. The Dockerfile defaults NODE_MAX_OLD_SPACE_SIZE to 6144 MB, which could lead to excessive memory consumption. The application runs as the non-root 'node' user (good), but there is no explicit hardening such as capability dropping or read-only filesystem enforcement. Fix: add --cap-drop=ALL, use --read-only with mounted volumes for writable paths, set resource limits in docker-compose, and configure security_opt and appropriate ulimits.
- High · Potential API Key/Secret Exposure in Logs — api/app/clients, all client implementations. With numerous provider integrations (@aws-sdk, @azure/*, @anthropic-ai/vertex-sdk), API keys and credentials are likely passed through environment variables, and there is no evidence of log sanitization keeping credentials out of application logs or error traces. Fix: add a sanitizer utility that masks API keys, tokens, and credentials in all logged data, and use structured logging that marks sensitive fields as redacted.
- Medium · Outdated/Potentially Vulnerable Base Image — Dockerfile, FROM node:20-alpine. The Dockerfile does not pin an explicit minor version tag (e.g., 20.11-alpine); Alpine packages are updated, but pinning reduces supply-chain risk. Fix: pin a specific Node.js tag (e.g., node:20.11.1-alpine), rebuild images regularly, and scan them with a tool like Trivy in CI/CD before deployment.
- Medium · Missing CORS Security Configuration — api/package.json, cors dependency. The cors dependency is listed, but no explicit CORS configuration is visible; without proper CORS headers the application may be vulnerable to cross-origin attacks or be overly permissive. Fix: configure CORS with a whitelist of trusted origins (never '*'), enforce it in middleware, and review .env for a CORS_ORIGIN setting.
- Medium · Cryptographic Library Exposure Risk — api/package.json, bcryptjs dependency. bcryptjs (^2.4.3) uses caret versioning; since it is critical for password hashing, it should use an exact version to prevent unexpected behavioral changes. Fix: pin bcryptjs to an exact version, verify it is used consistently for all password operations, configure salt rounds securely (minimum 12), and audit all password-hashing code.
- Medium · Session Management via Connect-Redis — api/package.json, connect-redis dependency. connect-redis handles session storage, but no Redis authentication, encryption in transit, or secure session-secret configuration is visible. Fix: ensure Redis is configured with strong authentication.
LLM-derived; treat as a starting point, not a security audit.
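The "Missing Environment Variable Validation" item suggests joi or zod; the same fail-fast idea can be hand-rolled without a dependency, as in this sketch. MONGO_URI appears in .env.example; the rules object and the validateEnv function are illustrative, not LibreChat code.

```javascript
// Fail-fast startup validation in the spirit of the recommendation above.
// Hand-rolled to stay dependency-free; zod/joi give richer schemas.
function validateEnv(env, rules) {
  const errors = [];
  for (const [name, { required = false, pattern }] of Object.entries(rules)) {
    const value = env[name];
    if (value === undefined || value === '') {
      if (required) errors.push(`${name} is required but missing`);
      continue;
    }
    if (pattern && !pattern.test(value)) {
      errors.push(`${name} does not match the expected format`);
    }
  }
  return errors; // empty array => safe to boot; otherwise log and exit(1)
}
```

At startup this would be called with process.env and a rules table; any non-empty result should abort the boot so misconfiguration surfaces immediately instead of as a silent runtime failure.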
👉 Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.