RepoPilot

Gitlawb/openclaude

Runs anywhere. Uses anything.

Overall: Healthy across the board

Weakest axis (Use as dependency): Concerns

Non-standard license (Other) — review terms.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit today
  • 25+ active contributors
  • Distributed ownership (top contributor 16% of recent commits)
  • Other licensed
  • CI configured
  • Tests present
  • Non-standard license (Other) — review terms
What would change the summary?
  • Use as dependency: Concerns → Mixed if the license terms are clarified

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant: "RepoPilot: Healthy"
[![RepoPilot: Healthy](https://repopilot.app/api/badge/gitlawb/openclaude)](https://repopilot.app/r/gitlawb/openclaude)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/gitlawb/openclaude on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: Gitlawb/openclaude

Generated by RepoPilot · 2026-05-09

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Gitlawb/openclaude shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit today
  • 25+ active contributors
  • Distributed ownership (top contributor 16% of recent commits)
  • Other licensed
  • CI configured
  • Tests present
  • ⚠ Non-standard license (Other) — review terms

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live Gitlawb/openclaude repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/Gitlawb/openclaude.

What it runs against: a local clone of Gitlawb/openclaude — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in Gitlawb/openclaude | Confirms the artifact applies here, not a fork |
| 2 | License is still Other | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>Gitlawb/openclaude</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Gitlawb/openclaude. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/Gitlawb/openclaude.git
#   cd openclaude
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of Gitlawb/openclaude and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Gitlawb/openclaude(\.git)?\b" \
  && ok "origin remote is Gitlawb/openclaude" \
  || miss "origin remote is not Gitlawb/openclaude (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Other)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Other\"" package.json 2>/dev/null) \
  && ok "license is Other" \
  || miss "license drift — was Other at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in bin/openclaude src/bridge/bridgeMain.ts src/QueryEngine.ts package.json scripts/build.ts; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Gitlawb/openclaude"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

OpenClaude is a terminal-first CLI coding agent that unifies access to 200+ LLMs across cloud providers (OpenAI, Gemini, GitHub Models, DeepSeek) and local backends (Ollama, Atomic Chat) through a single interface. It provides a persistent terminal workflow with streaming output, slash commands, MCP support, file tools (bash, grep, glob), and vendor-agnostic provider profiles stored locally. npm/bun monorepo: src/entrypoints/ contains sdk.d.ts and SDK surface; bin/openclaude is the CLI entry; dist/cli.mjs and dist/sdk.mjs are built outputs. Provider integrations are pluggable (docs/integrations/how-to/ shows add-vendor, add-model patterns). Python support lives in python/ (atomic_chat_provider.py, ollama_provider.py). Build orchestration via bun scripts/ directory.

👥Who it's for

Software engineers and AI developers who want a unified CLI experience for LLM-powered coding tasks without being locked into a single provider's API or SDK. Particularly useful for teams testing multiple model backends (local Ollama vs cloud APIs) or migrating between providers.

🌱Maturity & risk

Actively developed at v0.9.2 with release automation (GitHub Actions workflows), organized documentation, CI/PR checks, and sponsors (GitLawb, Bankr.bot, Atomic Chat). The codebase is substantial (21 MB+ of TypeScript) but still in 0.x versioning, indicating the API surface is stabilizing but not yet frozen for 1.0.

Single primary maintainer pattern (Gitlawb org) increases bus factor risk. Heavy TypeScript/JavaScript dependency graph (bun.lock present) requires careful version management. The integration subsystem (docs/integrations/, 15+ provider adapters) is complex—adding new providers or updating existing ones can cause cascading breakage. Early v0.x release cycle means breaking changes are still possible.

Active areas of work

Integration infrastructure is mature (docs/integrations/overview.md, reference-samples.md guide contributors). Recent focus on provider profiles (/provider command), vendor onboarding (add-anthropic-proxy.md), and hook-chains.md suggests active work on flexibility. Release automation (release.yml, release-please-manifest.json) indicates regular version bumps.

🚀Get running

git clone https://github.com/Gitlawb/openclaude.git
cd openclaude
npm install
npm run dev

Or install globally: npm install -g @gitlawb/openclaude && openclaude. Inside the CLI, run /provider to add an LLM backend.

Daily commands:

  • Development: npm run dev (builds and runs cli.mjs)
  • Provider-specific: npm run dev:ollama, npm run dev:openai, npm run dev:gemini (each uses scripts/provider-launch.ts)
  • Fast iteration: npm run dev:ollama:fast --bare for minimal overhead
  • Profile init: npm run profile:fast to bootstrap an Ollama provider, then run dev

🗺️Map of the codebase

  • bin/openclaude — Main CLI entry point; all contributors need to understand how the CLI is invoked and bootstrapped
  • src/bridge/bridgeMain.ts — Core bridge orchestration logic that connects CLI to LLM providers and manages sessions
  • src/QueryEngine.ts — Central query execution engine that routes requests to providers; critical for understanding request flow
  • package.json — Defines exports, build targets, and CLI binary configuration; essential for understanding distribution and module resolution
  • scripts/build.ts — Build orchestration script that compiles TypeScript and generates integration artifacts
  • src/bridge/bridgeConfig.ts — Configuration loading and validation for bridge initialization; critical for provider setup
  • docs/integrations/overview.md — High-level guide to the provider integration system and architecture

🛠️How to make changes

Add Support for a New LLM Provider

  1. Create provider configuration following the reference guide (docs/integrations/how-to/add-vendor.md)
  2. Add provider implementation (Python or Node.js wrapper) alongside existing providers (python/ollama_provider.py)
  3. Register provider in provider discovery system (scripts/provider-discovery.ts)
  4. Update QueryEngine.ts to route requests to the new provider (src/QueryEngine.ts)
  5. Generate integration artifacts via build script (scripts/generate-integrations-artifacts.ts)
  6. Add tests for provider counts (src/__tests__/providerCounts.test.ts)

Add a New Tool or Agent Capability

  1. Define tool contract in src/Tool.ts (src/Tool.ts)
  2. Implement tool execution in src/Task.ts or as separate module (src/Task.ts)
  3. Register tool in bridgeMain.ts bridge initialization (src/bridge/bridgeMain.ts)
  4. Add tool invocation tests in src/__tests__/ (src/__tests__/bugfixes.test.ts)

Modify Configuration & Bootstrap Flow

  1. Update config schema in src/bridge/bridgeConfig.ts (src/bridge/bridgeConfig.ts)
  2. Adjust env variable handling in src/bootstrap/state.ts (src/bootstrap/state.ts)
  3. Update provider bootstrap logic in scripts/provider-bootstrap.ts (scripts/provider-bootstrap.ts)
  4. Document setup steps in docs/advanced-setup.md or relevant guide (docs/advanced-setup.md)

Add a New Session or State Management Feature

  1. Extend session history tracking in src/assistant/sessionHistory.ts (src/assistant/sessionHistory.ts)
  2. Update bridge state management in src/bootstrap/state.ts (src/bootstrap/state.ts)
  3. Integrate with session creation in src/bridge/createSession.ts (src/bridge/createSession.ts)
  4. Add integration tests in src/__tests__/ (src/__tests__/bugfixes.test.ts)

🪤Traps & gotchas

  • Ripgrep (rg) must be installed system-wide — npm install alone is insufficient, and the install fails silently without it.
  • Provider profiles are persisted locally in a config directory (exact path not visible, but likely ~/.openclaude or similar — check bin/openclaude); migrations between versions may break old profiles.
  • Python providers require Python plus the requirements.txt dependencies installed; there is no automatic detection.
  • Bun is the dev lock manager (bun.lock) — mixing in npm or yarn can cause resolution conflicts.
  • Integration codegen (npm run integrations:check) must pass in CI; editing provider models without running generate-integrations-artifacts.ts will fail CI.
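The system-tool traps above lend themselves to a quick preflight check before running the CLI. A minimal sketch (hypothetical, not shipped with OpenClaude):

```python
"""Hypothetical preflight check for the gotchas above; not part of OpenClaude."""
import shutil


def preflight() -> list:
    """Return a list of warnings for missing system tools."""
    problems = []
    if shutil.which("rg") is None:
        problems.append("ripgrep (rg) not on PATH; npm install will not pull it in")
    if shutil.which("bun") is None:
        problems.append("bun not on PATH; the repo pins dependencies via bun.lock")
    if shutil.which("python3") is None:
        problems.append("python3 not on PATH; Python providers will be unavailable")
    return problems


if __name__ == "__main__":
    for warning in preflight():
        print("WARN:", warning)
```

Running it before npm install surfaces the silent-failure cases up front.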

🏗️Architecture

💡Concepts to learn

  • LLM Provider Abstraction Layer — OpenClaude's core value is hiding provider-specific API differences (OpenAI vs Gemini vs Ollama) behind a unified interface; understanding the adapter pattern used in docs/integrations/ is key to adding new providers
  • Server-Sent Events (SSE) Streaming — OpenClaude streams LLM responses in real-time (core UX feature); cloud providers use SSE and local backends may use chunked responses—understanding streaming is critical for debugging provider integrations
  • Model Context Protocol (MCP) — OpenClaude uses MCP for tool/agent extensions and inter-process communication with external services; the /agent command and tool chaining depend on this protocol
  • Hook Chains / Middleware Pattern — docs/hook-chains.md suggests OpenClaude uses composable hooks for request preprocessing and response postprocessing across providers; this is how vendor-specific transformations are applied without duplicating code
  • CLI State Persistence & Profiles — OpenClaude persists provider configs (API keys, model choices) in local profiles via /provider command; understanding where and how profiles are stored is essential for supporting multi-profile workflows and migrations
  • Cross-Language RPC (Node.js ↔ Python) — Python providers (python/atomic_chat_provider.py, python/ollama_provider.py) communicate with the Node.js CLI via subprocess; this async message-passing pattern avoids reimplementing providers in TypeScript
  • Codegen for Integration Artifacts — scripts/generate-integrations-artifacts.ts validates and auto-generates provider metadata; understanding this codegen flow is critical for adding models or vendors without manual boilerplate
Related projects

  • aider-ai/aider — Similar CLI-first coding agent using LLMs with multi-provider support; direct competitor in terminal-based AI coding workflows
  • continuedev/continue — AI coding agent with IDE/editor integrations (VS Code, JetBrains) that also supports multiple LLM backends; complements OpenClaude's terminal focus with GUI editors
  • BerriAI/litellm — LLM proxy layer (mentioned in docs/litellm-setup.md) that abstracts provider APIs; OpenClaude can route through LiteLLM for unified logging and cost tracking
  • ollama/ollama — Local LLM runtime that OpenClaude integrates deeply with (python/ollama_provider.py, dev:ollama scripts); primary backend for offline-first workflows
  • modelcontextprotocol/specification — MCP protocol spec that OpenClaude implements for tool/agent workflows; understanding MCP is essential for extending OpenClaude's agent capabilities
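The subprocess pattern described under "Cross-Language RPC (Node.js ↔ Python)" above can be sketched as a newline-delimited JSON loop. This is an illustrative shape only, not the actual protocol used by python/ollama_provider.py:

```python
"""Sketch of a newline-delimited JSON provider subprocess (illustrative shape;
the real message schema in python/ may differ)."""
import json
import sys


def handle(request: dict) -> dict:
    # Hypothetical echo provider: a real provider would call out to an LLM here.
    return {"id": request.get("id"), "completion": "echo: " + request.get("prompt", "")}


def main() -> None:
    # One JSON request per line on stdin, one JSON response per line on stdout.
    # Flushing after each write keeps the Node.js parent from blocking.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        response = handle(json.loads(line))
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()


if __name__ == "__main__":
    main()
```

The Node.js side would spawn this with child_process.spawn and write one request per line, which is why streaming and flushing matter in this pattern.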

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive test coverage for Python integration providers

The repo has Python providers (atomic_chat_provider.py, ollama_provider.py, smart_router.py) with test files (test_atomic_chat_provider.py, test_ollama_provider.py, test_smart_router.py) but the test structure appears minimal. Given the multi-language nature of OpenClaude and the critical role of these providers, expanding Python test coverage would catch integration bugs early and serve as documentation for other maintainers adding new providers.

  • [ ] Expand python/tests/test_atomic_chat_provider.py with edge cases (network failures, malformed responses, auth errors)
  • [ ] Expand python/tests/test_ollama_provider.py with local model availability checks and fallback scenarios
  • [ ] Add integration tests in python/tests/ that test provider chaining via smart_router.py
  • [ ] Add a python/tests/conftest.py fixture for mock LLM responses across all provider tests
  • [ ] Reference the test patterns in scripts/ that handle similar mock scenarios
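The shared-fixture item above could start from a plain factory function. The response shape here (an Ollama-style chat payload) is an assumption, not the real provider contract:

```python
"""Sketch of a shared mock-response helper for python/tests/conftest.py.
The payload shape is an assumption (Ollama-style chat), not the actual
provider contract in python/."""


def make_mock_response(text="hello", model="mock-model", done=True):
    """Build a fake chat-completion payload for provider unit tests."""
    return {
        "model": model,
        "message": {"role": "assistant", "content": text},
        "done": done,
    }

# In conftest.py this would typically be exposed as a pytest fixture:
#
#   import pytest
#
#   @pytest.fixture
#   def mock_llm_response():
#       return make_mock_response
```

A factory (rather than a fixed dict) lets each test vary the text, model, or done flag without copying the whole payload.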

Add GitHub Actions workflow for Python provider validation

The repo has .github/workflows/pr-checks.yml and release.yml for Node/CLI checks, but there's no dedicated Python testing pipeline. Since Python providers are part of the core integration strategy (per docs/integrations/), a Python-specific CI workflow would ensure provider changes don't break across Python versions and dependencies.

  • [ ] Create .github/workflows/python-checks.yml that runs python/tests/ on PR
  • [ ] Add pytest/coverage commands targeting python/ directory
  • [ ] Test against Python 3.9, 3.10, 3.11, 3.12 using matrix strategy
  • [ ] Validate python/requirements.txt installation and import resolution
  • [ ] Add step to check for breaking changes in provider interfaces (e.g., atomic_chat_provider.py signature changes)
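The "import resolution" step above can be approximated with a small smoke check. The module list below is a placeholder, since the real names live in python/requirements.txt:

```python
"""Hypothetical import-resolution smoke check for the CI step above; module
names are placeholders, not the contents of python/requirements.txt."""
import importlib


def check_imports(module_names):
    """Return the subset of module_names that fail to import."""
    missing = []
    for name in module_names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing


if __name__ == "__main__":
    # Placeholder list; in CI this would be derived from requirements.txt.
    print("missing modules:", check_imports(["json", "ssl"]) or "none")
```

Run inside the matrix job after pip install, it catches packages that install but fail to import on a given Python version.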

Create missing documentation for slash commands and hook-chains integration

The repo references docs/hook-chains.md in the file structure, indicating a slash commands/hook system exists. However, docs/integrations/how-to/ has guides for adding models, gateways, and vendors, but no guide for implementing custom slash commands or hooks. This blocks contributors from extending CLI behavior and is a gap relative to the 'MCP, slash commands' promise in the README.

  • [ ] Create docs/integrations/how-to/add-slash-command.md with examples from existing slash command implementations
  • [ ] Reference hook-chain architecture from docs/hook-chains.md and show how to wire new commands into it
  • [ ] Add a concrete example: 'Add a /test-runner slash command that integrates with Ollama provider'
  • [ ] Document the interface/contract that slash commands must satisfy (input/output types)
  • [ ] Link the new guide from docs/integrations/overview.md and add to CONTRIBUTING.md

🌿Good first issues

  • Add missing provider documentation: docs/integrations/how-to/ has add-vendor.md and add-model.md but no add-usage-support.md implementation guide—write a concrete example (e.g., Ollama's token counting) and PR it
  • Create test coverage for Python integrations: python/tests/ exists but conftest.py and __init__.py are empty—add unit tests for atomic_chat_provider.py and ollama_provider.py similar to the bin/import-specifier.test.mjs pattern
  • Add Windows quick-start troubleshooting: docs/quick-start-windows.md exists but is likely minimal; document ripgrep install via chocolatey and common PATH issues (Ollama, Python) that Windows users hit


📝Recent commits

  • ed7b697 — Feat: Add startup logo palette picker (#1072) (techbrewboss)
  • 41b2496 — docs: update setup guides to clarify the only available auth method for Gemini (#1064) (jackil182)
  • 1672639 — fix plan mode branding and plan path (#1062) (techbrewboss)
  • 4830d6f — fix(openai-shim): strip store for local providers (vLLM, custom) (#1048) (0xfandom)
  • 4b1e516 — feat: incremental and cached token counting (#795) (LifeJiggy)
  • e438c89 — fix: resolve two bugs making interactive mode unusable with plugin ecosystems (#825) (#830) (fernandoxavier02)
  • 7cfc8d5 — feat(cli): honor --model alone without requiring --provider (#854) (0xfandom)
  • 402cd3d — feat(websearch): add first-class Brave adapter; fix Google + Brave presets; restore Exa snippets (#1044) (dnakhla)
  • 0adf97d — fix(openai-shim): strip store when baseUrl points at Cerebras (#1040) (0xfandom)
  • feb5791 — fix(effort): persist xhigh and send reasoning_effort on chat_completions (#857) (0xfandom)

🔒Security observations

OpenClaude demonstrates a generally solid security posture with proper DevOps practices including multi-stage Docker builds, locked dependencies via bun.lock, and a documented security policy. Primary concerns are minor: incomplete Dockerfile cleanup commands, missing patch-level image pinning, and incomplete security documentation. The codebase shows good security awareness (dedicated SECURITY.md, security-hardening tests, telemetry verification scripts). No critical vulnerabilities like hardcoded secrets, SQL injection vectors, or dangerous XSS patterns are evident in the provided files. The main recommendations focus on Docker hardening and completing security documentation.

  • Medium · Incomplete apt-get cleanup in Dockerfile — Dockerfile (line 23-24). The Dockerfile's apt-get cleanup command is incomplete. The line 'rm -rf /var/lib/apt/' is truncated and doesn't remove all package manager caches (/var/lib/apt/lists, /var/cache/apt). This leaves behind unnecessary files that increase container size and potential attack surface. Fix: Complete the cleanup: 'rm -rf /var/lib/apt/lists/* /var/cache/apt/* /tmp/* /var/tmp/*' to fully remove package manager artifacts and temporary files.
  • Medium · Missing SECURITY.md completion — SECURITY.md. The SECURITY.md file appears incomplete: it cuts off mid-sentence at 'Our general goal'. This incomplete security policy may not clearly communicate the full vulnerability reporting and response process to users. Fix: Complete the SECURITY.md file with a full vulnerability response SLA, contact escalation procedures, and a disclosure timeline.
  • Medium · Node image version pinning in Dockerfile — Dockerfile (lines 2 and 17). Using 'node:22-slim' without a specific patch version (e.g., node:22.11.0-slim) means the image tag can float to newer patch versions, potentially introducing unexpected changes. While slim images are good, lack of patch-level pinning reduces reproducibility. Fix: Pin to specific patch versions: 'FROM node:22.11.0-slim AS build' and 'FROM node:22.11.0-slim'. Consider using digest-based pinning for maximum reproducibility.
  • Low · Development dependencies exposed in build context — Dockerfile (runtime stage, lines 17-25). While the Dockerfile properly prunes devDependencies for the runtime stage, TypeScript sources and build scripts remain in the image. If the built CLI is somehow compromised or introspected, build-time information could leak implementation details. Fix: Consider whether source files and scripts need to be retained in the runtime image. Evaluate if they're needed for runtime operation or if they can be excluded from the final image.
  • Low · Missing security headers documentation — src/assistant/ directory. No evidence of documented security headers or Content Security Policy (CSP) configuration for the web components (React components visible in src/assistant/). React components and HTTP handlers should define appropriate security policies. Fix: Document and implement security headers (CSP, X-Frame-Options, X-Content-Type-Options, etc.) for any web-exposed surfaces. Review React component security practices for XSS prevention.
  • Low · No evidence of dependency pinning strategy — package.json and bun.lock. While package.json shows the project uses bun.lock for lockfile pinning, the presence of multiple development scripts that install dependencies without explicit version constraints could introduce supply chain risks if dependencies are updated. Fix: Ensure bun.lock is always committed and used. Consider implementing automated dependency security scanning (e.g., GitHub's Dependabot) and review significant updates before merging.

LLM-derived; treat as a starting point, not a security audit.
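As a sanity check on the "no hardcoded secrets" observation above, a naive grep-style scan can be sketched in a few lines. The patterns are illustrative only; a real audit should use a dedicated scanner such as gitleaks or trufflehog:

```python
"""Naive secret-scan sketch; patterns are illustrative, not exhaustive.
Use a dedicated tool (gitleaks, trufflehog) for real audits."""
import re

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                      # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),                         # AWS access key IDs
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{8,}"),  # generic assignments
]


def scan_text(text):
    """Return all pattern matches found in a blob of text."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Pointed at the repo's tracked files, an empty result is weak evidence in the same spirit as the observation above: absence of matches, not proof of absence.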


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
