opencode-ai/opencode
A powerful AI coding agent. Built for the terminal.
Healthy across all four use cases
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 8mo ago
- ✓ 28+ active contributors
- ✓ Distributed ownership (top contributor 49% of recent commits)
- ✓ MIT licensed
- ✓ CI configured
- ⚠ Slowing — last commit 8mo ago
- ⚠ No test directory detected
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/opencode-ai/opencode)

Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/opencode-ai/opencode on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: opencode-ai/opencode
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/opencode-ai/opencode shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 8mo ago
- 28+ active contributors
- Distributed ownership (top contributor 49% of recent commits)
- MIT licensed
- CI configured
- ⚠ Slowing — last commit 8mo ago
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live opencode-ai/opencode
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/opencode-ai/opencode.
What it runs against: a local clone of opencode-ai/opencode — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in opencode-ai/opencode | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 261 days ago | Catches sudden abandonment since generation |
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of opencode-ai/opencode. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/opencode-ai/opencode.git
#   cd opencode
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of opencode-ai/opencode and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "opencode-ai/opencode(\.git)?\b" \
  && ok "origin remote is opencode-ai/opencode" \
  || miss "origin remote is not opencode-ai/opencode (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in cmd/root.go internal/app/app.go internal/llm/agent/agent.go \
         internal/llm/provider/provider.go internal/llm/tools/tools.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 261 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~231d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/opencode-ai/opencode"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
OpenCode is a terminal-based AI coding agent built in Go that brings multi-provider LLM assistance (OpenAI, Anthropic, Google Gemini, AWS Bedrock, Groq, Azure OpenAI, OpenRouter) directly to your CLI. It features a TUI built with Bubble Tea, persistent SQLite session storage, tool execution capabilities (command running, file searching, code modification), and LSP integration for intelligent code assistance.

Architecturally it is a monolithic CLI application: cmd/root.go is the entry point wrapping the core engine in internal/app/app.go. Internal modules are organized by feature: internal/llm/ for AI providers (agent.go, anthropic.go), internal/db/ for SQLite persistence with sqlc-generated code (*.sql.go files), internal/config/ for configuration loading, internal/completions/ for code intelligence, and internal/diff/ for change tracking. The database schema lives in internal/db/migrations/ as SQL files; TUI components use Bubble Tea.
👥Who it's for
Software developers and engineers who want an AI coding assistant integrated into their terminal workflow without leaving the CLI; teams using various LLM providers who need a unified interface; developers who prefer vim-like editors and want AI-assisted debugging and code generation in their existing terminal environment.
🌱Maturity & risk
The project is archived and no longer maintained—it has been superseded by Charmbracelet's Crush. While the codebase shows decent structure with CI/CD workflows (.github/workflows/build.yml, release.yml), migrations, and database setup, the README explicitly states it's in early development. This is a completed but unmaintained experimental project; use only for learning or as a historical reference.
High risk for production use: the project is explicitly archived and development has moved to Crush. The large Go codebase (878KB) has multiple complex external dependencies (Anthropic SDK, Azure SDK, AWS SDK, MCP integration, Bubble Tea ecosystem) making maintenance burden high for a solo maintainer. No clear test coverage visible in the file list, and reliance on third-party LLM APIs creates ongoing operational dependencies.
Active areas of work
Nothing—the project is archived. The last meaningful work established the foundation: session management via SQLite migrations (20250515105448_add_summary_message_id.sql being the most recent), MCP tool integration (agent/mcp-tools.go), and multi-provider LLM support. Development officially continues under charmbracelet/crush.
🚀Get running
```bash
git clone https://github.com/opencode-ai/opencode.git
cd opencode
go mod download
go run .
```
Or install via the shell script: curl -fsSL https://raw.githubusercontent.com/opencode-ai/opencode/refs/heads/main/install | bash
Daily commands:

```bash
go run .
```

Or build a binary: go build -o opencode . from the repository root (cmd/root.go defines the root command, but the main package is at the root of the module). The application expects a .opencode.json config file (internal/config/init.go handles defaults) with LLM API keys and will use a SQLite database in the config directory.
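The config schema is defined in internal/config/config.go and internal/config/init.go. A minimal sketch follows; every field name here is an illustrative guess rather than the verified schema, so check config.go before copying:

```json
{
  "providers": {
    "anthropic": { "apiKey": "sk-ant-..." }
  },
  "agents": {
    "coder": { "model": "claude-3.7-sonnet", "maxTokens": 4096 }
  }
}
```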
🗺️Map of the codebase
- cmd/root.go — Entry point for the CLI application; every contributor must understand how the command structure and top-level orchestration works
- internal/app/app.go — Core application logic and state management; establishes the main event loop and integration pattern between all subsystems
- internal/llm/agent/agent.go — AI agent orchestration that drives tool selection and prompt execution; critical for understanding how the AI makes decisions
- internal/llm/provider/provider.go — Abstract provider interface for multiple LLM backends; must be understood to add new AI model support
- internal/llm/tools/tools.go — Tool registry and execution framework; defines the action interface between the agent and system capabilities
- internal/db/db.go — Database abstraction layer for session and message persistence; foundational for understanding state management
- internal/config/config.go — Configuration management for API keys and model selection; required reading for any provider or feature integration
🛠️How to make changes
Add a new LLM provider (e.g., new AI model backend)
- Create the provider implementation in internal/llm/provider/ (internal/llm/provider/provider.go)
- Add model definitions in internal/llm/models/ (internal/llm/models/models.go)
- Register it in the config loader to allow model selection (internal/config/config.go)
- Update prompt templates for the new provider if needed (internal/llm/prompt/prompt.go)
Add a new tool capability (e.g., new file operation or system action)
- Create the tool implementation in internal/llm/tools/ (internal/llm/tools/tools.go)
- Define the tool schema and execution logic (internal/llm/tools/[toolname].go)
- Register the tool in the agent's tool registry (internal/llm/agent/tools.go)
- Add the tool description to agent prompts (internal/llm/prompt/coder.go)
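A minimal sketch of the registry pattern those steps describe. The Tool struct and dispatch function are invented for illustration; the real registry and execution framework in internal/llm/tools/tools.go differ in detail:

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical tool shape, for illustration only; see
// internal/llm/tools/tools.go for the real definition.
type Tool struct {
	Name        string
	Description string
	Run         func(args map[string]string) (string, error)
}

var registry = map[string]Tool{}

func register(t Tool) { registry[t.Name] = t }

// dispatch looks a tool up by name and executes it with the given arguments,
// which is roughly what the agent does when the model requests a tool call.
func dispatch(name string, args map[string]string) (string, error) {
	t, ok := registry[name]
	if !ok {
		return "", errors.New("unknown tool: " + name)
	}
	return t.Run(args)
}

func main() {
	register(Tool{
		Name:        "greet",
		Description: "toy tool that greets its argument",
		Run: func(args map[string]string) (string, error) {
			return "hello " + args["who"], nil
		},
	})
	out, err := dispatch("greet", map[string]string{"who": "world"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // hello world
}
```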
Extend database persistence (new session data or context)
- Create a SQL migration file (internal/db/migrations/)
- Update data models (internal/db/models.go)
- Generate query code via sqlc (internal/db/querier.go)
- Use the new queries in app logic (internal/app/app.go)
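A sketch of what steps 1 and 3 look like, assuming the project's goose and sqlc conventions (the goose annotations and the sqlc name/:one annotation are those tools' real syntax, but the file name, table, and column here are invented for illustration):

```sql
-- Hypothetical migration, e.g. internal/db/migrations/20250601000000_add_note.sql
-- +goose Up
ALTER TABLE sessions ADD COLUMN note TEXT;

-- +goose Down
-- DROP COLUMN needs SQLite >= 3.35; recreate the table for older versions.
ALTER TABLE sessions DROP COLUMN note;

-- Hypothetical sqlc-annotated query (e.g. in internal/db/sql/sessions.sql).
-- From the annotation below, sqlc generates a type-safe method of roughly
-- this shape:
--   func (q *Queries) GetSessionNote(ctx context.Context, id string) (string, error)
-- name: GetSessionNote :one
SELECT note FROM sessions WHERE id = ? LIMIT 1;
```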
Add a new prompt type or task handler
- Create the prompt implementation in internal/llm/prompt/ (internal/llm/prompt/task.go)
- Define prompts and system messages for the task (internal/llm/prompt/[taskname].go)
- Wire into agent decision logic (internal/llm/agent/agent.go)
🔧Why these technologies
- Go + CLI (Cobra/Charm ecosystem) — Fast startup, single binary deployment, excellent terminal UI libraries (bubbletea, lipgloss), minimal dependencies for developers
- Multiple LLM providers (OpenAI, Anthropic, Gemini, etc.) — No lock-in to single vendor, allows cost optimization and model selection based on task complexity
- SQLite + sqlc for persistence — Zero-config database for local session history, type-safe query generation, perfect for CLI tools
- Model Context Protocol (MCP) for tools — Standardized tool interface allowing external tool providers and extensibility without recompilation
- LSP integration (internal/lsp/) — Enables IDE integration and real-time diagnostics alongside CLI usage
⚖️Trade-offs already made
- Streaming API responses from LLM
  - Why: Provides fast user feedback and better UX for long-running completions
  - Consequence: Increased complexity in message buffering and error recovery; harder to implement perfect agentic loops
- Local SQLite for session history instead of a cloud backend
  - Why: Privacy, no external dependencies, single-binary deployment
  - Consequence: Cannot share sessions across devices; limited analytics and usage tracking
- Shell tool (bash.go) with subprocess execution
  - Why: Enables the AI to verify code changes and interact with build systems
  - Consequence: Security surface (prompt injection, arbitrary code execution); requires strict sandboxing in production
- File editing via patching rather than full rewrites
  - Why: Minimizes token usage and preserves file structure/formatting intent
  - Consequence: Patch conflicts and edge cases require careful error handling
🚫Non-goals (don't propose these)
- Does not provide authentication or multi-user isolation
- Does not support Windows (LSP and shell tools are Unix-oriented)
- Does not implement persistent cloud sync for sessions
- Does not provide a web UI or collaborative features
- Does not guarantee real-time response latency (depends on LLM provider)
- Does not implement full sandboxing for arbitrary code execution
🪤Traps & gotchas
- Required env/config: LLM API keys must be set in .opencode.json (internal/config/init.go looks for ~/.opencode.json by default).
- Database setup: the first run auto-migrates via goose (internal/db/migrations/); ensure the config directory is writable.
- Tool execution risk: the agent can execute arbitrary shell commands via MCP tools — no sandboxing.
- Offline limitation: requires live LLM API connectivity; no local model support visible.
- No built-in tests: test coverage is not visible in the file list; integration depends heavily on external API responses.
🏗️Architecture
💡Concepts to learn
- Agentic Loop (LLM Agent Pattern) — OpenCode's core is an agent loop (internal/llm/agent/agent.go) that repeatedly: calls LLM → parses tool requests → executes tools locally → feeds results back; understanding this pattern is critical to extending the system
- Model Context Protocol (MCP) — internal/llm/agent/mcp-tools.go integrates MCP to let AI models request tool execution; this is how the agent actually modifies files and runs commands instead of just suggesting them
- Unified Diffing & Patching — internal/diff/ implements file change tracking and patching (used to show what files changed during a session); understanding unified diff format is needed to debug file modification failures
- Terminal User Interface (TUI) with Bubble Tea — OpenCode's entire UI (internal/app/app.go) is built with Bubble Tea's elm-like model-update-view pattern; required knowledge to modify UI behavior, add keybindings, or change rendering
- Provider Abstraction Pattern — internal/llm/models/ implements multiple LLM providers (Anthropic, OpenAI, etc.) behind a common interface; this pattern allows swapping providers without changing agent logic
- SQLite with sqlc Code Generation — internal/db/ uses sqlc to auto-generate type-safe Go code from SQL queries (*.sql.go files); this approach prevents SQL injection and provides compile-time query validation
- Language Server Protocol (LSP) Integration — internal/app/lsp.go integrates LSP for code intelligence (completions, go-to-definition); needed to understand how AI gets context about the codebase being edited
🔗Related repos
- charmbracelet/crush — The direct successor; OpenCode evolved into Crush under Charmbracelet stewardship, and this is where active development moved
- aider-ai/aider — Python-based CLI AI coding agent with similar goals (terminal integration, multi-provider LLM support, tool execution); good comparison for architecture and feature parity
- charmbracelet/bubbletea — The TUI framework powering OpenCode's terminal UI; essential dependency for understanding how the interface is built
- anthropics/anthropic-sdk-go — Core LLM provider SDK used in internal/llm/models/anthropic.go; critical for understanding API contracts and streaming responses
- mark3labs/mcp-go — Tool/function-calling integration library powering internal/llm/agent/mcp-tools.go; essential for understanding how the agent invokes external tools
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add unit tests for internal/llm/prompt package
The prompt package (internal/llm/prompt/) contains critical logic for generating prompts for various AI tasks (coder.go, summarizer.go, task.go, title.go), but only has one test file (prompt_test.go). This package is fundamental to the agent's quality. Adding comprehensive unit tests would improve reliability and make refactoring safer, especially given the complexity of prompt engineering.
- [ ] Create internal/llm/prompt/coder_test.go with tests for prompt generation logic in coder.go
- [ ] Create internal/llm/prompt/summarizer_test.go to test summarizer prompt construction
- [ ] Create internal/llm/prompt/task_test.go and internal/llm/prompt/title_test.go for the respective modules
- [ ] Ensure tests cover edge cases like empty contexts, special characters, and token limits
- [ ] Run 'go test ./internal/llm/prompt/...' to verify coverage
Add GitHub Actions workflow for Go linting and formatting checks
The repo has build.yml and release.yml workflows but lacks automated linting/formatting validation. With Go 1.24.0 and a mature codebase, adding golangci-lint checks in CI would catch code quality issues early, enforce consistent style across the internal/ packages, and prevent regressions from contributors unfamiliar with Go conventions.
- [ ] Create .github/workflows/lint.yml with golangci-lint action
- [ ] Configure golangci-lint to check all Go files in ./cmd, ./internal, and ./install directories
- [ ] Add checks for gofmt, ineffassign, and unused variable detection
- [ ] Set the workflow to run on pull requests and pushes to main branch
- [ ] Add golangci-lint config file (.golangci.yml) if needed for custom rules
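A minimal starting point for that config might look like this. The linter names are real golangci-lint linters, but the exact set and options are a suggestion to tune, not a verified project convention:

```yaml
# .golangci.yml — minimal sketch; adjust the enabled linters to taste
run:
  timeout: 5m
linters:
  enable:
    - gofmt
    - govet
    - ineffassign
    - unused
```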
Add integration tests for internal/db package with SQLite migrations
The database layer (internal/db/) manages critical state via SQLite with migrations (internal/db/migrations/). Currently, there are no visible integration tests to verify migrations work correctly or that the generated SQL queries (files.sql.go, messages.sql.go, sessions.sql.go) function as expected. This is high-risk code that would benefit from integration tests to catch migration failures early.
- [ ] Create internal/db/db_test.go with tests that set up a temporary in-memory SQLite database
- [ ] Add test case to verify all migrations in internal/db/migrations/ run successfully in order
- [ ] Create internal/db/queries_test.go to test CRUD operations from files.sql.go, messages.sql.go, and sessions.sql.go
- [ ] Use testify/require for assertions and ensure tests clean up after themselves
- [ ] Document how to run tests in cmd/schema/README.md or a new internal/db/README.md
🌿Good first issues
- Add unit tests for internal/diff/diff.go and internal/diff/patch.go—these are core file-tracking modules with no visible test coverage; write tests for patch application and diff generation with real code samples
- Document the MCP tool schema in internal/llm/agent/mcp-tools.go—add comments explaining how tools are registered, what parameters they accept, and how AI models invoke them; create examples in cmd/schema/
- Add support for local model providers (Ollama, llama.cpp) in internal/llm/models/—create a new provider file following the anthropic.go pattern but targeting local HTTP endpoints instead of external APIs
⭐Top contributors
- @kujtimiihoxha — 49 commits
- @thdxr — 10 commits
- @rekram1-node — 3 commits
- @ezynda3 — 3 commits
- @garrettladley — 3 commits
📝Recent commits
- 73ee493 — docs(readme): update archive note (meowgorithm)
- e6e072d — Update README.md (kujtimiihoxha)
- f0571f5 — fix(tool/grep): always show file names with rg (#271) (aldehir)
- 1f6eef4 — fix(mcp): ensure required field if nil (#278) (gedzeppelin)
- 4427df5 — fixup early return for ollama (#266) (taigrr)
- b9bedba — feat: add github copilot provider (#230) (bryanvaz)
- 73729ef — small readme update. (kujtimiihoxha)
- 0bf2f5f — add readme note (kujtimiihoxha)
- 23763fb — add extra help binding (kujtimiihoxha)
- 05b0570 — refactor: upgrade Anthropic SDK to v1.4.0 and adapt provider code (ksylvan)
🔒Security observations
- High · Archived Project with Outdated Dependencies — README.md, project root. The project is marked as archived and no longer maintained; the README explicitly states "This repository is no longer maintained and has been archived for provenance." Security updates and patches will not be provided, so anyone relying on this codebase is exposed to unpatched vulnerabilities in dependencies. Fix: do not use this archived project in production. Migrate to the maintained successor Crush (https://github.com/charmbracelet/crush), which receives active security updates and maintenance.
- High · Potential SQL Injection via Raw SQL Queries — internal/db/sql/files.sql, internal/db/sql/messages.sql, internal/db/sql/sessions.sql, internal/db/*.sql.go. The codebase contains raw SQL query files (internal/db/sql/*.sql) and generated SQL code (internal/db/*.sql.go). If these queries build SQL strings from unsanitized user input instead of using parameterized queries, SQL injection is possible. Fix: verify all queries use parameterized queries or prepared statements; audit internal/db/files.sql.go, internal/db/messages.sql.go, and internal/db/sessions.sql.go to ensure no string concatenation is used for query construction; use database/sql with placeholder parameters (?) exclusively.
- High · LLM API Key Exposure Risk — internal/llm/provider/*, internal/llm/models/*, internal/llm/tools/bash.go, internal/llm/tools/shell/shell.go. The application integrates with multiple LLM providers (OpenAI, Anthropic, Azure, Gemini, Groq, etc.) and shell-execution tools. If API keys or credentials are not properly isolated in environment variables or secure config, they could leak into logs, error messages, or tool outputs. Fix: load all API keys from environment variables only and never hardcode them; implement secure credential storage; sanitize all LLM responses and tool outputs before display to prevent accidental credential leakage; add secrets filtering to logging/output functions.
- High · Arbitrary Command Execution via Bash Tool — internal/llm/tools/bash.go, internal/llm/tools/shell/shell.go. The bash tool lets the LLM agent execute arbitrary shell commands. Without sandboxing or validation, a compromised or prompt-injected LLM could run dangerous commands. Fix: implement strict command whitelisting and validation; consider restricted shells or containers (seccomp, AppArmor, SELinux); audit-log all executed commands; require user confirmation for potentially dangerous operations; consider disabling bash execution in untrusted environments.
- High · Unrestricted File System Access — internal/llm/tools/file.go, internal/llm/tools/glob.go, internal/llm/tools/ls.go, internal/fileutil/fileutil.go. These tools give the LLM agent direct filesystem access. Without sandboxing, the agent could read or modify sensitive files outside the intended project directory. Fix: restrict operations to a designated project root; canonicalize paths to prevent directory traversal attacks (../../); audit-log file access operations; maintain a whitelist/blacklist for sensitive file patterns (.env, .ssh, /etc/passwd, etc.).
- High · Potential Prompt Injection in LLM Integration — internal/llm/prompt/prompt.go, internal/llm/prompt/coder.go, internal/llm/prompt/task.go. The system constructs prompts by combining user input with system instructions. Unvalidated user input in prompts could let an attacker trick the LLM into executing unintended actions. Fix: validate and sanitize all user-provided content before including it in prompts; use prompt templates with clear delimiters separating system instructions from user input; validate that LLM output conforms to expected formats.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.