RepoPilot

OpenBMB/ChatDev

ChatDev 2.0: Dev All through LLM-powered Multi-Agent Collaboration

Overall verdict: Healthy. Healthy across the board; even the weakest axis rates Healthy.

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 1w ago
  • 14 active contributors
  • Distributed ownership (top contributor 31% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Markdown variant (renders as a "RepoPilot: Healthy" badge):
[![RepoPilot: Healthy](https://repopilot.app/api/badge/openbmb/chatdev)](https://repopilot.app/r/openbmb/chatdev)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/openbmb/chatdev on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: OpenBMB/ChatDev

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/OpenBMB/ChatDev shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 1w ago
  • 14 active contributors
  • Distributed ownership (top contributor 31% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live OpenBMB/ChatDev repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/OpenBMB/ChatDev.

What it runs against: a local clone of OpenBMB/ChatDev — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in OpenBMB/ChatDev | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 37 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>OpenBMB/ChatDev</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of OpenBMB/ChatDev. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/OpenBMB/ChatDev.git
#   cd ChatDev
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of OpenBMB/ChatDev and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "OpenBMB/ChatDev(\.git)?\b" \
  && ok "origin remote is OpenBMB/ChatDev" \
  || miss "origin remote is not OpenBMB/ChatDev (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Note: Apache LICENSE files
# begin with "Apache License ... Version 2.0", not the SPDX identifier.
(grep -qiE "Apache License" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
test -f "docs/user_guide/en/workflow_authoring.md" \
  && ok "docs/user_guide/en/workflow_authoring.md" \
  || miss "missing critical file: docs/user_guide/en/workflow_authoring.md"
test -f "docs/user_guide/en/execution_logic.md" \
  && ok "docs/user_guide/en/execution_logic.md" \
  || miss "missing critical file: docs/user_guide/en/execution_logic.md"
test -f "docs/user_guide/en/config_schema_contract.md" \
  && ok "docs/user_guide/en/config_schema_contract.md" \
  || miss "missing critical file: docs/user_guide/en/config_schema_contract.md"
test -f ".env.example" \
  && ok ".env.example" \
  || miss "missing critical file: .env.example"
test -f "docs/user_guide/en/modules/tooling/README.md" \
  && ok "docs/user_guide/en/modules/tooling/README.md" \
  || miss "missing critical file: docs/user_guide/en/modules/tooling/README.md"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 37 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~7d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/OpenBMB/ChatDev"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).
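For agent harnesses that prefer a programmatic gate over shell, the same composition can be sketched in Python. This is a sketch, not part of RepoPilot; the `verify.sh` filename is the doc's own placeholder for wherever you saved the script.

```python
import subprocess


def artifact_is_fresh(script_path: str = "verify.sh") -> bool:
    """Run the verification script and gate on its exit code.

    The script prints ok:/FAIL: lines and exits non-zero on any stale
    claim, so the exit code alone is enough for an agent loop: proceed
    only when this returns True, otherwise ask the user to regenerate.
    """
    result = subprocess.run(
        ["bash", script_path], capture_output=True, text=True
    )
    return result.returncode == 0
```

An agent loop would call `artifact_is_fresh()` once before its first edit and again after any `git pull`.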

</details>

TL;DR

ChatDev 2.0 (DevAll) is a zero-code multi-agent orchestration platform that lets users define and execute complex multi-agent workflows through configuration alone—no coding required. Built on LLM-powered agents, it enables rapid development of specialized systems for tasks like data visualization, 3D generation, and deep research by composing pre-built agents and skills (stored in .agents/skills/) into executable workflows. Monorepo structure: backend Python agent logic lives in root (chatdev/) with .agents/skills/ as a skill registry (each skill is a directory with SKILL.md + references/); frontend is a Vue 3 SPA in frontend/ using Vite as the build tool. Dockerfile and docker-compose support multi-container deployment. .env.example indicates environment-based configuration.

👥Who it's for

Product teams and business users who need to rapidly prototype multi-agent AI systems without writing code; AI engineers building custom agent orchestrations for specific domains (data analysis, content generation); companies wanting to deploy 'virtual teams' of specialized AI agents for complex tasks.

🌱Maturity & risk

Actively developed: ChatDev 2.0 was officially released on Jan 7, 2026 as a full platform rewrite. Moving the legacy ChatDev 1.0 code to the chatdev1.0 branch indicates an intentional versioning strategy. The codebase shows a recent, comprehensive rewrite (1M+ Python LOC), suggesting heavy recent development, but as a fresh 2.0 launch it should be treated as moderately mature, with breaking changes likely ahead.

Migration risk from ChatDev 1.0 to 2.0 is significant—the platform was completely redesigned and v1.x is now legacy-only. The Vue 3 frontend (443K LOC) is a substantial new surface area. Dependency chain is modern but deep: Vite, Vue Flow, markdown-it, js-yaml. No visible CI/test setup in the file list (only .github/workflows/validate-yamls.yml for schema validation), suggesting light automated testing coverage. Single organization (OpenBMB) controls the repo.

Active areas of work

Just shipped ChatDev 2.0 with a zero-code platform pivot. Active work visible: skill system (greeting-demo, python-scratchpad, rest-api-caller with examples), frontend workflow visualization (Vue Flow for node-based DAGs), and Docker containerization. The NeurIPS 2025 accepted paper on 'Evolving Orchestration' (puppeteer branch) indicates ongoing research integration.

🚀Get running

git clone https://github.com/OpenBMB/ChatDev.git
cd ChatDev
cp .env.example .env
pip install -r requirements.txt  # Backend dependencies (requirements file not shown in the file list; verify it exists)
cd frontend && npm install       # Frontend dependencies live in frontend/
npm run dev                      # Start the Vite dev server for the frontend

Daily commands:

cd frontend && npm run dev
# Starts Vite dev server (typically http://localhost:5173)
# Backend must run separately (Python entry point not visible in file list—check chatdev/ root or bin/)

🗺️Map of the codebase

  • docs/user_guide/en/workflow_authoring.md — Core documentation defining how workflows are authored and structured; essential for understanding the entire platform's mental model
  • docs/user_guide/en/execution_logic.md — Explains execution flow and logic for multi-agent systems; critical for understanding how agents collaborate and tasks execute
  • docs/user_guide/en/config_schema_contract.md — Defines the configuration schema contract that all workflows must follow; foundational for configuration validation and parsing
  • .env.example — Demonstrates required environment configuration for running the platform; every developer must understand what credentials/settings are needed
  • docs/user_guide/en/modules/tooling/README.md — Overview of the tooling system that enables agents to interact with external systems; core extension mechanism for the platform
  • compose.yml — Docker composition file defining containerized deployment; essential for local development and understanding service dependencies
  • Dockerfile — Container image definition for the entire platform; needed to understand build and runtime requirements

🛠️How to make changes

Add a new Agent Node capability or skill

  1. Create a new skill directory under .agents/skills/your-skill-name/ (.agents/skills/greeting-demo/SKILL.md)
  2. Define skill specification in SKILL.md with inputs, outputs, and execution logic (.agents/skills/python-scratchpad/SKILL.md)
  3. Reference the skill in workflow configuration; update agent node to include your skill as a tool (docs/user_guide/en/nodes/agent.md)
  4. Add examples showing skill usage in workflows (.agents/skills/rest-api-caller/references/examples.md)
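The discovery side of step 1 can be sketched in Python. The SKILL.md-per-directory layout comes from this artifact; the function name and return shape here are illustrative, not ChatDev's actual loader API.

```python
from pathlib import Path


def discover_skills(root: str = ".agents/skills") -> dict[str, str]:
    """Scan the skill registry: each immediate subdirectory that
    contains a SKILL.md is treated as one skill.

    Returns {skill_name: path_to_SKILL.md}. Only the directory layout
    (SKILL.md + references/) is taken from the artifact; nothing about
    SKILL.md's internal format is assumed here.
    """
    skills: dict[str, str] = {}
    base = Path(root)
    if not base.is_dir():
        return skills
    for entry in sorted(base.iterdir()):
        manifest = entry / "SKILL.md"
        if entry.is_dir() and manifest.is_file():
            skills[entry.name] = str(manifest)
    return skills
```

Running this from the repo root should list greeting-demo, python-scratchpad, and rest-api-caller, which is a quick sanity check that your new skill directory is shaped correctly.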

Add a new Node type to the workflow editor

  1. Document the new node type specification in docs/user_guide/en/nodes/ (docs/user_guide/en/nodes/passthrough.md)
  2. Define node inputs, outputs, and control flow properties in configuration schema (docs/user_guide/en/config_schema_contract.md)
  3. Update execution logic documentation to describe runtime behavior (docs/user_guide/en/execution_logic.md)
  4. Add node to web UI by updating workflow authoring guide with visual/interaction patterns (docs/user_guide/en/web_ui_guide.md)

Integrate an external service or API as a tool

  1. Create a function tool definition following the function tool specification (docs/user_guide/en/modules/tooling/function.md)
  2. Register the tool in the function catalog or MCP service (docs/user_guide/en/modules/tooling/function_catalog.md)
  3. Add environment configuration for API credentials in .env (.env.example)
  4. Create a skill example demonstrating the tool usage (.agents/skills/rest-api-caller/SKILL.md)
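A minimal sketch of steps 1 and 3 for a hypothetical weather tool. The function name, the `WEATHER_API_KEY` variable, and the endpoint URL are all illustrative assumptions, not from the ChatDev source; the only pattern taken from the steps above is reading credentials from the environment (populated via .env) and exposing one typed function for the catalog.

```python
import json
import os
import urllib.parse
import urllib.request


def call_weather_api(city: str) -> dict:
    """Hypothetical function tool: fetch weather for a city.

    Credentials come from the environment (step 3), never from code.
    WEATHER_API_KEY and the endpoint below are placeholders.
    """
    api_key = os.environ["WEATHER_API_KEY"]  # assumed variable name, set via .env
    url = "https://api.example.com/v1/weather?q=" + urllib.parse.quote(city)
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```

The real registration step depends on the function catalog / MCP docs cited above; this only shows the shape of the callable being registered.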

Deploy the platform to production

  1. Copy and configure .env.example to .env with production credentials (.env.example)
  2. Build and push Docker image using Dockerfile (Dockerfile)
  3. Deploy using docker-compose with production overrides (compose.yml)
  4. Run YAML validation to ensure all workflow configs are valid (check/check_yaml.py)
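Step 4's validation can be approximated with a standalone sketch. The real contract lives in check/check_yaml.py and the schema doc, neither of which is reproduced here; this only mirrors the idea using PyYAML's safe loader, and the "top level must be a mapping" rule is an assumed minimal constraint.

```python
from pathlib import Path

import yaml  # PyYAML; pip install pyyaml


def validate_workflow_yaml(path: str) -> list[str]:
    """Minimal pre-deploy check: parse with safe_load (no arbitrary
    object construction) and flag obviously malformed files. Returns a
    list of error strings; empty means the file passed this sketch.
    """
    try:
        data = yaml.safe_load(Path(path).read_text())
    except yaml.YAMLError as exc:
        return [f"{path}: unparsable YAML ({exc})"]
    errors = []
    if not isinstance(data, dict):
        errors.append(f"{path}: top level must be a mapping")
    return errors
```

In practice you would run the repo's own check/check_yaml.py; a sketch like this is only useful as a fast pre-commit smoke test.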

🔧Why these technologies

  • Vue 3 + Vite — Provides reactive, fast frontend for zero-code workflow authoring with live graph visualization via @vue-flow/core
  • LLM-powered Agent Nodes — Core abstraction enabling intelligent task decomposition and dynamic decision-making without hand-coded logic
  • Model Context Protocol (MCP) — Standardized tool communication layer ensuring interoperability between agents and diverse external services
  • WebSocket + SSE — Real-time bidirectional communication for live workflow execution updates and user interaction (human-in-the-loop)
  • Docker + docker-compose — Containerized deployment for reproducibility and multi-service orchestration (frontend, backend, LLM providers)
  • YAML workflow definitions — Human-readable, version-controllable configuration enabling zero-code workflow creation and reusability

⚖️Trade-offs already made

  • Zero-code YAML-based configuration over programmatic API

    • Why: Maximizes accessibility for non-engineers; reduces cognitive load for workflow design
    • Consequence: Limited expressiveness for highly complex custom logic; advanced users must use Python nodes or custom skills
  • Agent-driven reasoning vs. rule-based execution

    • Why: Enables flexibility for novel tasks and emergent behavior without predefined logic
    • Consequence: Less deterministic; results vary by LLM temperature/model; requires monitoring and validation; potential for hallucinations
  • WebSocket real-time updates over polling/async jobs

    • Why: Provides immediate feedback to user; better for interactive human-in-the-loop workflows
    • Consequence: Requires persistent connections; higher server resource usage; connections can drop behind proxies or load balancers and need reconnection handling

🪤Traps & gotchas

  • Environment variables required: .env.example exists; you must copy it and populate LLM API keys (likely OpenAI, Claude, or internal model endpoints).
  • Frontend/backend separation: the Vite dev server (npm run dev) runs the UI, but a separate Python backend process must be running for agent execution (not visible in npm scripts; likely via python -m chatdev or similar).
  • Skill discovery: the .agents/skills/ directory is a registry; changes may require a server restart or dynamic reloading to take effect.
  • YAML schema validation: CI validates YAML workflow definitions, suggesting a strict schema; malformed workflow configs will fail validation before execution.
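The environment-variable trap is easy to automate. This is a small sketch that cross-checks .env.example against the live environment before starting the backend; the helper name is illustrative, and it assumes the conventional KEY=value format for .env files.

```python
import os
from pathlib import Path


def missing_env_keys(example_path: str = ".env.example") -> list[str]:
    """Return every KEY declared in .env.example that is absent from
    the current environment. Comment lines, blanks, and lines without
    '=' are skipped.
    """
    missing = []
    for line in Path(example_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key = line.split("=", 1)[0].strip()
        if key and key not in os.environ:
            missing.append(key)
    return missing
```

Run it after `cp .env.example .env` and loading your .env; a non-empty result means the backend will likely fail at startup or at the first LLM call.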

🏗️Architecture

💡Concepts to learn

  • Agent Skill Registry — ChatDev 2.0's core extension mechanism; understanding how skills are discovered, versioned, and composed is essential to building new agents without coding
  • Multi-Agent Orchestration — The foundational paradigm—how multiple specialized LLM agents coordinate via message-passing and task delegation; ChatDev is built entirely around this pattern
  • Workflow DAG Execution — Workflows in ChatDev are represented as directed acyclic graphs (visible via vue-flow visualization); understanding task dependencies, parallelization, and state flow is critical to composing complex multi-agent systems
  • LLM Prompt Routing & Agentic Loops — Agents in ChatDev are likely powered by prompt-based routing (deciding which skill to invoke via LLM reasoning); this differs from traditional RPC and requires understanding LLM output parsing, retry loops, and hallucination mitigation
  • Zero-Code/No-Code Platform Pattern — DevAll is explicitly zero-code—it shifts configuration-as-code complexity to visual composition; understanding YAML workflow definition, schema validation, and how domain-specific languages (DSLs) enable non-programmers to build complex systems is key
  • Containerized Microservices Orchestration — The Dockerfile and docker-compose indicate each agent or skill may run in isolation; understanding container health checks, networking between agents, and environment-based configuration (.env) is needed for production deployment
  • Vue Flow Graph Visualization — The frontend uses @vue-flow/core for visual workflow building; understanding how DAGs are rendered, how nodes (agents) and edges (dependencies) are manipulated, and how the visual model syncs with the execution backend is essential for UI development
  • OpenBMB/BMTools — Tool/skill registry system for LLM agents; shares the pattern of modular, discoverable capabilities that ChatDev 2.0 uses in .agents/skills/
  • geekan/MetaGPT — Alternative multi-agent software development framework; direct competitor solving similar use case (agent orchestration for development) but with different architecture
  • langchain-ai/langchain — LLM framework that ChatDev likely depends on for agent reasoning, memory, and tool-calling infrastructure
  • dify-ai/dify — No-code LLM workflow platform with visual editor; similar target user (zero-code AI builders) and likely inspiration for DevAll's frontend UX
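The Workflow DAG Execution concept above can be made concrete with a topological-sort sketch (Kahn's algorithm): each node runs only after all of its dependencies, and a cycle means the config is not a valid DAG. Node names and the edge format are illustrative, not ChatDev's actual workflow representation.

```python
from collections import deque


def topo_order(edges: dict[str, list[str]]) -> list[str]:
    """Return an execution order for a workflow DAG.

    `edges` maps node -> list of nodes it depends on. Raises
    ValueError on cycles, which a workflow validator must reject.
    """
    nodes = set(edges) | {d for deps in edges.values() for d in deps}
    indegree = {n: 0 for n in nodes}
    dependents: dict[str, list[str]] = {n: [] for n in nodes}
    for node, deps in edges.items():
        for dep in deps:
            indegree[node] += 1
            dependents[dep].append(node)
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: not a valid workflow DAG")
    return order
```

Nodes with zero remaining in-degree at the same time are exactly the ones a runtime could execute in parallel, which is the parallelization property the concept list mentions.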

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add validation tests for YAML workflow configs in check/

The repo has a check/check_yaml.py and check/check_workflow.py module for validating YAML configs, but there's no corresponding test suite. Given that ChatDev 2.0 is a zero-code platform where users define agents through configuration files, robust validation testing is critical. This would catch regressions when schema changes are made and help contributors understand the validation contract.

  • [ ] Create tests/check/test_check_yaml.py with test cases for valid/invalid YAML structures
  • [ ] Create tests/check/test_check_workflow.py covering workflow validation edge cases (missing required fields, circular dependencies, invalid agent references)
  • [ ] Reference docs/user_guide/en/config_schema_contract.md to ensure tests cover all documented schema constraints
  • [ ] Update Makefile or add GitHub Actions workflow to run these tests on PR submissions
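A sketch of what such tests could look like. The validator below is a stand-in, because check/check_yaml.py's public entry point isn't shown in this artifact, and the required "nodes" key is an assumed minimal shape; substitute the real function and the documented schema when writing the actual suite.

```python
import yaml  # PyYAML; pip install pyyaml


def is_valid_workflow(text: str) -> bool:
    """Stand-in for the real validator in check/check_yaml.py: safe
    parse, then require a mapping with a 'nodes' key (assumed shape).
    """
    try:
        data = yaml.safe_load(text)
    except yaml.YAMLError:
        return False
    return isinstance(data, dict) and "nodes" in data


# pytest-style tests: plain functions with asserts, runnable via
# `pytest tests/check/test_check_yaml.py`.
def test_minimal_workflow_is_valid():
    assert is_valid_workflow("nodes: []\n")


def test_top_level_must_be_mapping():
    assert not is_valid_workflow("- just\n- a list\n")


def test_unparsable_yaml_is_rejected():
    assert not is_valid_workflow("nodes: [\n")
```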

Add E2E tests for skill loading and execution from .agents/skills/

The .agents/skills/ directory structure shows a pluggable skill system (greeting-demo, python-scratchpad, rest-api-caller) with SKILL.md documentation and examples, but there's no test coverage verifying skills are correctly discovered, loaded, and executed. This is core to ChatDev 2.0's extensibility model and needs validation.

  • [ ] Create tests/integration/test_skill_loading.py to verify skill discovery from .agents/skills/ directory
  • [ ] Add test cases for skill metadata parsing (SKILL.md) and validation of required skill fields
  • [ ] Create tests/integration/test_skill_execution.py that runs example workflows from skills/*/references/examples.md
  • [ ] Document expected skill structure in docs/user_guide/en/modules/ to formalize the contract
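A sketch of the metadata-parsing test shape from the second checklist item. The title-heading convention here is an assumption (SKILL.md's required fields aren't reproduced in this artifact); the point is only that parsing and its failure mode both get a test.

```python
from pathlib import Path


def parse_skill_title(manifest: str) -> str:
    """Stand-in for SKILL.md metadata parsing: treat the first '# '
    heading as the skill title, and fail loudly when it's missing so
    malformed skills are caught at load time rather than mid-workflow.
    """
    for line in Path(manifest).read_text().splitlines():
        if line.startswith("# "):
            return line[2:].strip()
    raise ValueError(f"{manifest}: no title heading found")
```

The integration tests in the checklist would pair this kind of parser check with an end-to-end run of the examples under skills/*/references/.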

Add Node.js/Frontend unit tests with coverage reporting for Vue components

The package.json shows Vue 3 components, Vue Router, and @vue-flow (DAG visualization) dependencies for the frontend, but no test runner is configured (no vitest, jest, or cypress listed in devDependencies). Given the complex UI for workflow orchestration (suggested by assets/CommandDash.png, assets/agentverse.png), component testing is essential for maintaining stability during refactoring.

  • [ ] Add vitest and @vue/test-utils as devDependencies to package.json
  • [ ] Create tests/frontend/ directory with test setup files (vitest.config.js, test utilities)
  • [ ] Add unit tests for Vue components in src/components/ (create files if missing) covering Vue Flow workflow visualization and config UI
  • [ ] Add 'test' and 'coverage' scripts to package.json and configure coverage thresholds in vitest config
  • [ ] Consider adding GitHub Actions workflow (.github/workflows/frontend-tests.yml) to run on PR

🌿Good first issues

  • Add example skills to .agents/skills/: Create a weather-api-caller skill (following the pattern of rest-api-caller/SKILL.md + references/examples.md) with a working example that fetches weather data and documents the schema
  • Improve skill documentation: Write a SKILL_DEVELOPMENT_GUIDE.md in .agents/skills/ explaining the SKILL.md format, required fields, and how to add references/examples.md entries—currently only example skills exist without developer docs
  • Add frontend tests: Create Jest/Vitest tests for Vue components in frontend/src/ (test files missing from file list despite 443K Vue LOC), starting with workflow DAG visualization tests using vue-flow


📝Recent commits

  • c85e1de — Merge pull request #611 from voidborne-d/fix/reload-exclude-warehouse (zxrys)
  • 1ce86df — build(deps): add watchfiles to pyproject + refresh uv.lock (voidborne-d)
  • 646f0d6 — fix(server): cover nested paths and warn when watchfiles missing (voidborne-d)
  • 7c69226 — fix(server): scope --reload watching to source dirs (#569) (voidborne-d)
  • 68fdac8 — Merge pull request #604 from NA-Wen/main (zxrys)
  • 378b96c — update contributors (NA-Wen)
  • b9b2c3d — update contributors (NA-Wen)
  • 831e0f2 — Merge pull request #603 from NA-Wen/main (zxrys)
  • 5dcb54a — update contributor list (NA-Wen)
  • f62de70 — Merge pull request #598 from kartik-mem0/feat/mem0-memory-store (NA-Wen)

🔒Security observations

  • High · API Keys Exposed in Environment Configuration — .env.example, .env.docker. The .env.example file contains template API keys and documentation for sensitive credentials (BASE_URL, API_KEY, SERPER_DEV_API_KEY, JINA_API_KEY). While this is an example file, it demonstrates how secrets are managed. Risk exists if developers commit actual .env files or if the template is used directly without proper secret management in production. Fix: 1) Ensure .env files are in .gitignore (verify in .gitignore). 2) Use environment-specific secret management (AWS Secrets Manager, HashiCorp Vault, etc.). 3) Implement CI/CD secret scanning. 4) Document that API_KEY values should never be committed. 5) Rotate any exposed keys immediately.
  • High · Markdown Parser with Potential XSS Risk — package.json (dependencies: markdown-it, markdown-it-anchor). The project uses 'markdown-it' (v14.1.0) and 'markdown-it-anchor' for rendering markdown. These libraries can potentially render unsafe HTML if not properly sanitized, which could lead to XSS vulnerabilities when processing untrusted markdown input from users or external sources. Fix: 1) Implement output sanitization using 'DOMPurify' or similar library. 2) Configure markdown-it with strict options (disable dangerous rules). 3) Validate and sanitize all user-provided markdown input before rendering. 4) Use Content Security Policy (CSP) headers. 5) Regularly audit markdown rendering in the codebase for dangerous patterns like 'v-html' in Vue components.
  • High · YAML Parsing Without Validation — package.json (js-yaml dependency), check/check_yaml.py, .github/workflows/validate-yamls.yml. The project uses 'js-yaml' (v4.1.0) for YAML parsing. In js-yaml v4, load() uses a safe schema by default (the unsafe loader was removed from the main API), so arbitrary code execution requires explicitly opting into custom types or schemas; the larger risk here is accepting unvalidated workflow YAML, since YAML is a critical input format for this platform. Fix: 1) Keep to js-yaml's default safe load() and avoid custom types/schemas. 2) Implement strict schema validation for all YAML inputs. 3) Whitelist allowed YAML keys and structures. 4) Add input validation before YAML parsing.
  • Medium · Missing Security Headers in Web UI — Frontend configuration (Vite setup not shown, but inferred from package.json and docs/user_guide/en/web_ui_guide.md). The Vite-based Vue frontend (frontend/public) lacks documented security headers (CSP, X-Frame-Options, X-Content-Type-Options, etc.). This increases XSS and clickjacking risks. Fix: 1) Configure Vite to set security headers. 2) Implement Content-Security-Policy header. 3) Add X-Frame-Options: DENY. 4) Add X-Content-Type-Options: nosniff. 5) Configure CORS properly for REST API calls. 6) Document security headers in deployment guides.
  • Medium · Unrestricted REST API Caller Skill — .agents/skills/rest-api-caller/. The project includes a 'rest-api-caller' skill (.agents/skills/rest-api-caller/) that allows agents to make arbitrary HTTP requests. Without proper validation and sandboxing, this could be exploited to perform SSRF attacks, access internal services, or leak sensitive data. Fix: 1) Implement URL whitelist/blacklist validation (prevent localhost, internal IPs). 2) Validate HTTP methods (GET/POST only if possible). 3) Add rate limiting. 4) Log all REST API calls. 5) Implement timeouts and size limits. 6) Disable access to internal network ranges. 7) Review examples.md for unsafe usage patterns.
  • Medium · Python Code Execution in 'python-scratchpad' Skill — .agents/skills/python-scratchpad/, docs/user_guide/en/nodes/python.. The 'python-scratchpad' skill (.agents/skills/python-scratchpad/) allows agents to execute arbitrary Python code. Without proper sandboxing, this could lead to arbitrary code execution, file system access, or information disclosure. Fix: 1) Execute scratchpad code in an isolated sandbox (container or locked-down subprocess). 2) Restrict filesystem and network access from the sandbox. 3) Enforce CPU, memory, and wall-clock limits. 4) Log all executed code for audit. 5) Require human approval before execution in sensitive environments.
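The SSRF mitigation suggested for rest-api-caller can be sketched as a URL guard. The blocked hostnames and policy details here are illustrative, not from the ChatDev source, and a production guard must also resolve DNS and re-check the resolved address (an attacker-controlled hostname can point at an internal IP).

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative denylist; extend for your environment.
BLOCKED_HOSTS = {"localhost", "metadata.google.internal"}


def is_url_allowed(url: str) -> bool:
    """Reject non-HTTP schemes, known-internal hostnames, and literal
    private/loopback/link-local/reserved IPs before making a request.
    """
    parts = urlparse(url)
    if parts.scheme not in ("http", "https"):
        return False
    host = (parts.hostname or "").lower()
    if host in BLOCKED_HOSTS:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname, not a literal IP: in production, resolve it and
        # re-run the IP checks on every resolved address.
        return True
    return not (
        ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved
    )
```

The link-local check is what blocks the classic cloud metadata endpoint at 169.254.169.254.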

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
