
FoundationAgents/MetaGPT

🌟 The Multi-Agent Framework: First AI Software Company, Towards Natural Language Programming

Healthy — across all four use cases

Use as dependency — Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • ✓ Last commit 4mo ago
  • ✓ 11 active contributors
  • ✓ MIT licensed
  • ✓ CI configured
  • ✓ Tests present
  • ⚠ Slowing — last commit 4mo ago
  • ⚠ Concentrated ownership — top contributor handles 59% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — it live-updates from the latest cached analysis.

Variant: "RepoPilot: Healthy"
[![RepoPilot: Healthy](https://repopilot.app/api/badge/foundationagents/metagpt)](https://repopilot.app/r/foundationagents/metagpt)

Paste at the top of your README.md — renders inline like a shields.io badge.

▸ Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/foundationagents/metagpt on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: FoundationAgents/MetaGPT

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/FoundationAgents/MetaGPT shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 4mo ago
  • 11 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 4mo ago
  • ⚠ Concentrated ownership — top contributor handles 59% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

✅Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live FoundationAgents/MetaGPT repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/FoundationAgents/MetaGPT.

What it runs against: a local clone of FoundationAgents/MetaGPT — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in FoundationAgents/MetaGPT | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 136 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>FoundationAgents/MetaGPT</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of FoundationAgents/MetaGPT. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/FoundationAgents/MetaGPT.git
#   cd MetaGPT
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of FoundationAgents/MetaGPT and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "FoundationAgents/MetaGPT(\.git)?\b" \
  && ok "origin remote is FoundationAgents/MetaGPT" \
  || miss "origin remote is not FoundationAgents/MetaGPT (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
test -f "config/config2.yaml" \
  && ok "config/config2.yaml" \
  || miss "missing critical file: config/config2.yaml"
test -f "README.md" \
  && ok "README.md" \
  || miss "missing critical file: README.md"
test -f ".github/workflows/unittest.yaml" \
  && ok ".github/workflows/unittest.yaml" \
  || miss "missing critical file: .github/workflows/unittest.yaml"
test -f "Dockerfile" \
  && ok "Dockerfile" \
  || miss "missing critical file: Dockerfile"
test -f "MANIFEST.in" \
  && ok "MANIFEST.in" \
  || miss "missing critical file: MANIFEST.in"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 136 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~106d)"
else
  miss "last commit was $days_since_last days ago β€” artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) β€” safe to trust"
else
  echo "artifact has $fail stale claim(s) β€” regenerate at https://repopilot.app/r/FoundationAgents/MetaGPT"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>
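
If your agent harness is Python rather than shell, the same gate is a few lines of subprocess glue. A minimal sketch, assuming you saved the script above as verify.sh in the repo root (that filename is an assumption, not something RepoPilot ships):

```python
# Gate automated edits on the RepoPilot verification script.
# Assumes the bash script above was saved as verify.sh in the clone's root.
import subprocess
import sys

def artifact_is_fresh(repo_dir: str = ".") -> bool:
    """Run verify.sh and report whether every check printed ok:."""
    result = subprocess.run(
        ["bash", "verify.sh"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
    )
    print(result.stdout, end="")
    # The script exits non-zero if any check printed FAIL.
    return result.returncode == 0

if __name__ == "__main__":
    if not artifact_is_fresh():
        sys.exit("artifact stale: regenerate before letting an agent edit")
```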

⚡TL;DR

MetaGPT is a multi-agent framework that orchestrates LLM-based roles (product manager, architect, engineer, etc.) to autonomously generate complete software artifacts from a single natural-language requirement. Given a one-line input, it outputs user stories, data structures, APIs, code, and documentation by simulating a full software company's SOP (Standard Operating Procedure) workflow. The monorepo is structured as: metagpt/ core framework (roles, agents, workflows), config/ with provider configs (openai, aws-bedrock, google-gemini examples), examples/ containing runnable demonstrations, and .github/workflows/ defining CI pipelines (unittest.yaml, fulltest.yaml, build-package.yaml). Entry points via config/config2.yaml and devcontainer support in .devcontainer/docker-compose.yaml.

👥Who it's for

AI engineers and product teams building autonomous software generation systems; companies wanting to prototype software workflows via natural language; researchers exploring multi-agent LLM coordination and agentic workflows.

🌱Maturity & risk

Actively developed and production-capable. The repo shows 3.2M lines of Python code, recent major releases (Feb 2025 MGX product launch, Jan 2025 ICLR 2025 oral presentation for AFlow paper), comprehensive CI/CD (GitHub Actions for unittest/fulltest/build-package), and structured documentation across multiple languages. However, it's a complex framework still iterating on core patterns (SPO and AOT papers just published Feb 2025).

Moderate risk due to framework complexity and tight coupling of role orchestration logic. Dependencies include vision libraries (opencv-python 4.6.0.66, pyshine 0.0.9) pinned to older versions; multi-LLM support (OpenAI, Anthropic, Bedrock, Groq, Google Gemini) creates provider lock-in risk if API changes. Single organizational maintainer (FoundationAgents). Breaking changes possible during rapid feature development (recent multi-paper releases indicate active research phase).

Active areas of work

Heavy active development: Feb 2025 launched MGX (metagpt.dev) natural language programming product; published SPO and AOT papers with reference implementations in examples/; Jan 2025 AFlow paper accepted to ICLR 2025 with oral presentation (top 1.8%). GitHub workflows show frequent unit and full test runs, suggesting continuous integration on multi-LLM provider support.

🚀Get running

Clone and install: git clone https://github.com/FoundationAgents/MetaGPT.git && cd MetaGPT && pip install --upgrade -e . (editable install). Requires Python 3.9–3.11 (not 3.12+). Configure LLM via config/config2.yaml or example configs in config/examples/ (e.g., openai-gpt-4-turbo.yaml). Test installation: examine examples/ for runnable demos.

Daily commands: No traditional 'dev server' — this is a framework for generating artifacts. Run examples via Python: python examples/<script_name>.py after installing. Configure the LLM endpoint in config/config2.yaml. Docker dev environment: devcontainer setup in .devcontainer/docker-compose.yaml with postCreateCommand.sh initialization.
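
For a first end-to-end run, the upstream README documents a one-call library entry point. A minimal sketch; verify the import path against your installed version, and make sure config/config2.yaml already holds a valid LLM key:

```python
# One natural-language requirement in, a generated project out.
# Requires a configured LLM (config/config2.yaml or ~/.metagpt/config2.yaml);
# check the import path against the MetaGPT version you installed.
from metagpt.software_company import generate_repo

repo = generate_repo("Create a 2048 game")  # writes artifacts to a workspace dir
print(repo)  # inspect what was generated
```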

πŸ—ΊοΈMap of the codebase

  • config/config2.yaml — Primary configuration file for MetaGPT runtime settings, LLM providers, and agent parameters — every contributor must understand the config schema
  • README.md — Main entry point documenting the Multi-Agent Framework philosophy, getting started, and high-level architecture
  • .github/workflows/unittest.yaml — Test harness defining quality gates and test execution strategy for all pull requests
  • Dockerfile — Container image definition showing all runtime dependencies and entry point for the framework
  • MANIFEST.in — Package distribution manifest specifying which files are included in PyPI releases
  • pyproject.toml — Python package metadata and dependency specifications (inferred as critical for any Python project)

🧩Components & responsibilities

  • CLI Entry Point (Python argparse, config loader) — Parse user input, load framework configuration, initialize the multi-agent team, and orchestrate the execution flow
    • Failure mode: Malformed config → initialization fails; missing LLM API key → graceful error with setup instructions
  • Configuration Manager (YAML parser, environment variables, config schema validation) — Load YAML configs from the config/ directory, validate provider settings, inject credentials from environment variables (see the sketch after this list)
    • Failure mode: Invalid YAML → parse error; missing provider API key → runtime error on first LLM call
  • Multi-Agent Team (Python classes, role definition DSL, prompt templating) — Instantiate role-based agents, assign responsibilities, coordinate prompt generation and response processing
    • Failure mode: Role conflict → undefined behavior; LLM timeout → agent task incomplete; invalid response format → parsing error
  • LLM Provider Abstraction (Provider adapters, HTTP clients, streaming response handlers) — Abstract away provider-specific API differences (OpenAI vs. Anthropic vs. AWS Bedrock) behind a unified interface
    • Failure mode: Provider API rate limit → exponential backoff retry; provider outage → all agents blocked
  • Workspace Manager (File I/O, directory creation, artifact writing) — Create isolated project directories, persist agent-generated code/docs/diagrams, manage workspace cleanup
    • Failure mode: Disk full → write fails; permission denied → unable to create workspace; concurrent …
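
To make the Configuration Manager's load-validate-inject behavior concrete, here is a minimal sketch of the pattern. The function and key names are illustrative assumptions, not MetaGPT's actual config API; the real schema lives in config/config2.yaml:

```python
# Illustrative config loading: YAML first, environment variable as fallback.
# load_llm_config and the key names are hypothetical, not MetaGPT's API.
import os

import yaml  # pip install pyyaml

def load_llm_config(path: str = "config/config2.yaml") -> dict:
    with open(path, encoding="utf-8") as f:
        cfg = yaml.safe_load(f) or {}
    llm = cfg.get("llm", {})
    # Inject the credential from the environment when the YAML omits it.
    llm.setdefault("api_key", os.environ.get("OPENAI_API_KEY", ""))
    if not llm["api_key"]:
        # Fail fast, mirroring the "missing provider API key" failure mode above.
        raise RuntimeError("no LLM api_key found in YAML or environment")
    return llm
```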

πŸ› οΈHow to make changes

Add Support for a New LLM Provider

  1. Create a new YAML configuration file following the pattern of existing providers (config/examples/openai-gpt-4-turbo.yaml)
  2. Add provider-specific environment variables and authentication details to config schema (config/config2.yaml)
  3. Document the provider setup in the appropriate language README (docs/README_CN.md)
  4. Add integration tests for the new provider in the test suite (.github/workflows/unittest.yaml)

Create a New Multi-Agent Use Case Example

  1. Create a new workspace directory under docs/resources/workspace/ (e.g., docs/resources/workspace/my_use_case/) (docs/resources/workspace/content_rec_sys)
  2. Add generated output artifacts (PDFs, PNGs, SVGs) in resources/ subdirectory (docs/resources/workspace/content_rec_sys/resources/seq_flow.png)
  3. Document the use case and results in the main ROADMAP or README (docs/ROADMAP.md)
  4. Include configuration example showing role assignments and agent interactions (config/config2.yaml)

Add a New CI/CD Test Pipeline

  1. Create a new workflow file in .github/workflows/ following existing patterns (.github/workflows/unittest.yaml)
  2. Define test triggers (on push, PR, schedule) and job matrix for Python versions (.github/workflows/unittest.yaml)
  3. Add pre-commit hooks if new linting rules are needed (.pre-commit-config.yaml)
  4. Update coverage configuration to track new test modules (.coveragerc)

🔧Why these technologies

  • Python — Primary language for agent logic, LLM integrations, and cross-platform support
  • YAML Configuration — Declarative provider and agent configuration without code changes
  • Docker & Docker Compose — Reproducible development and deployment environments across machines
  • GitHub Actions — CI/CD automation for testing, building, and releasing packages to PyPI
  • Multiple LLM Providers — Pluggable architecture supporting OpenAI, Anthropic, Google, AWS Bedrock, and open-source models

βš–οΈTrade-offs already made

  • Multi-language documentation (EN, CN, FR, JA)

    • Why: Maximize global accessibility and community adoption
    • Consequence: Higher maintenance burden for docs updates across all languages
  • Extensive LLM provider examples in config/examples/

    • Why: Lower barrier to entry for new providers
    • Consequence: Config directory grows with each new provider; requires docs for each
  • Workspace-based artifact generation

    • Why: Agents generate code, docs, diagrams as outputs
    • Consequence: Large disk footprint; requires cleanup mechanisms for old workspaces

🚫Non-goals (don't propose these)

  • Real-time collaboration across distributed teams (asynchronous task execution only)
  • Support for non-Python agent implementations
  • Windows native support (Docker recommended)
  • Local-only LLM inference without provider APIs
  • Built-in database persistence (agents generate files, not query databases)

🪤Traps & gotchas

Python version constraint: 3.9–3.11 only (3.12+ breaks dependencies, likely due to the pinned opencv-python 4.6.0.66). LLM API keys are required in config/config2.yaml or the environment (no fallback dummy provider). The devcontainer's PostgreSQL/Redis setup in docker-compose suggests some examples need external services — check each example's requirements. Multi-agent orchestration has ordering dependencies; workflow execution order matters and may cause silent failures if SOP rules conflict.
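
If you script around the version constraint above, a small guard fails fast instead of surfacing confusing dependency errors later. A minimal sketch; the bounds mirror the constraint stated here, so confirm them against setup.py/pyproject.toml:

```python
# Fail fast on the documented Python 3.9-3.11 constraint (3.12+ breaks pinned deps).
import sys

if not ((3, 9) <= sys.version_info[:2] <= (3, 11)):
    raise SystemExit(
        f"MetaGPT expects Python 3.9-3.11; this interpreter is {sys.version.split()[0]}"
    )
```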

πŸ—οΈArchitecture

💡Concepts to learn

  • Software Company Simulation via LLM Roles — Core philosophy of MetaGPT — understanding how roles (PM, architect, engineer) are modeled as LLM agents and orchestrated to execute SOPs is fundamental to extending or debugging the framework
  • Standard Operating Procedure (SOP) as Executable Code — MetaGPT materializes workflows as SOP rules; grasping how business processes become prompt instructions and agent constraints is critical for workflow design
  • Multi-Agent Message Passing and Workflow DAGs — Agents communicate asynchronously via message queues and tasks form directed acyclic graphs; understanding task dependencies and message ordering prevents deadlocks and orchestration bugs (see the toy sketch after this list)
  • Provider-Agnostic LLM Abstraction — MetaGPT abstracts OpenAI, Anthropic, Bedrock, Groq behind a unified interface; knowing how provider configs map to client implementations aids adding new LLM sources
  • Agentic Workflow Orchestration (AFlow) — AFlow automates workflow generation from natural language; its paper was accepted to ICLR 2025 with an oral presentation, representing the cutting edge of MetaGPT's research direction
  • Role-Based Prompt Engineering and Context Windows — Each role (PM, architect) has specialized prompt templates and context constraints; optimizing prompts per role and managing context-window overflow across multi-turn agent conversations is non-trivial
  • Configuration-Driven Agent Instantiation — YAML configs in config/examples/ declaratively specify provider, model, temperature, and role behavior; understanding the config schema prevents runtime initialization errors and enables rapid experimentation
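
The message-passing model is easier to grasp with a toy example. The sketch below is not MetaGPT's API; it only shows the asynchronous, queue-mediated shape described above, with hypothetical role names:

```python
# Toy role-to-role message passing; illustrative only, not MetaGPT's API.
import asyncio

async def product_manager(outbox: asyncio.Queue) -> None:
    # The upstream role publishes its artifact as a message.
    await outbox.put("PRD: build a 2048 game")

async def engineer(inbox: asyncio.Queue) -> None:
    # The downstream role blocks until its dependency's message arrives;
    # this is how task ordering falls out of the workflow DAG.
    prd = await inbox.get()
    print(f"engineer received: {prd}")

async def main() -> None:
    channel: asyncio.Queue = asyncio.Queue()
    await asyncio.gather(product_manager(channel), engineer(channel))

asyncio.run(main())
```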
🔗Related projects

  • openai/swarm — Lightweight multi-agent orchestration framework from OpenAI; a simpler alternative for basic agent coordination without full SOP simulation
  • langchain-ai/langgraph — LangChain's agentic workflow framework; overlapping multi-agent coordination patterns but with tighter LangChain ecosystem integration
  • microsoft/autogen — Microsoft's AutoGen for multi-agent conversation patterns; similar problem space (agent conversation) but less SOP/role-focused than MetaGPT
  • anthropics/anthropic-sdk-python — Companion SDK used by MetaGPT for Anthropic Claude integration; required for the config/examples/anthropic-claude-3-5-sonnet.yaml provider
  • FoundationAgents/mgx — Commercial product built on the MetaGPT framework (mgx.dev, launched Feb 2025); a reference implementation of a production MetaGPT deployment

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for multi-agent LLM provider configs

The repo has 9+ example LLM provider configs (openai, anthropic, aws-bedrock, google-gemini, groq, etc.) in config/examples/, but no automated tests validating that each provider's configuration is correctly parsed and functional. Adding them would prevent configuration drift and ensure new providers are properly integrated; a minimal test sketch follows the checklist below.

  • [ ] Create tests/integration/test_llm_provider_configs.py to validate each YAML in config/examples/
  • [ ] Add parametrized tests for config parsing using pytest.mark.parametrize across all provider configs
  • [ ] Add mock LLM calls to verify provider-specific authentication and model initialization
  • [ ] Reference .github/workflows/unittest.yaml to integrate into existing test suite
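
A hedged starting point for the first two checklist items. The path glob and the llm/api_type keys are assumptions to confirm against the real config schema:

```python
# tests/integration/test_llm_provider_configs.py (sketch)
# Assumes each example config is a YAML mapping with llm.api_type; verify
# against the real schema in config/config2.example.yaml before relying on it.
from pathlib import Path

import pytest
import yaml

EXAMPLE_CONFIGS = sorted(Path("config/examples").glob("*.yaml"))

@pytest.mark.parametrize("config_path", EXAMPLE_CONFIGS, ids=lambda p: p.name)
def test_example_config_parses(config_path: Path) -> None:
    cfg = yaml.safe_load(config_path.read_text(encoding="utf-8"))
    assert isinstance(cfg, dict), f"{config_path} is not a YAML mapping"
    # Every provider example should at least declare which provider it targets.
    assert cfg.get("llm", {}).get("api_type"), f"{config_path} missing llm.api_type"
```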

Add security scanning workflow for dependencies

While .github/workflows/ contains build, test, and pre-commit workflows, there's no automated dependency vulnerability scanning. Given the project's reliance on external LLM APIs and multiple dependencies (pyshine, opencv-python), a security scanning workflow would catch supply chain risks early.

  • [ ] Create .github/workflows/security-scan.yaml with GitHub's dependency scanning or pip-audit
  • [ ] Add SBOM (Software Bill of Materials) generation using tools like cyclonedx-python or syft
  • [ ] Configure to run on schedule (weekly) and on each PR to requirements.txt/setup.py changes
  • [ ] Reference SECURITY.md to document the scanning policy and reporting process

Add CLI validation tests for config parsing edge cases

The docs/install/ directory documents CLI installation, and config/config2.example.yaml exists, but there are no tests validating CLI argument parsing, config merging, or handling of invalid configs. This is critical for a framework where users interact heavily via the CLI. A sketch of the edge-case tests follows the checklist below.

  • [ ] Create tests/cli/test_config_validation.py to test config2.yaml parsing with pytest
  • [ ] Add tests for edge cases: missing required fields, invalid API keys, conflicting provider settings
  • [ ] Add tests for config merging logic (config2.yaml overriding defaults, environment variable precedence)
  • [ ] Reference .github/workflows/unittest.yaml to ensure tests run in CI pipeline
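
A sketch of the shape these edge-case tests could take. The load_config helper is a hypothetical stand-in; wire the assertions to the project's real config entry point:

```python
# tests/cli/test_config_validation.py (sketch)
# load_config is a hypothetical stand-in for the project's real loader.
import pytest
import yaml

def load_config(text: str) -> dict:
    cfg = yaml.safe_load(text) or {}
    if "llm" not in cfg:
        raise ValueError("missing required section: llm")
    return cfg

def test_missing_required_section_is_rejected() -> None:
    with pytest.raises(ValueError, match="llm"):
        load_config("logging: {level: INFO}")

def test_invalid_yaml_is_rejected() -> None:
    with pytest.raises(yaml.YAMLError):
        load_config("llm: [unclosed")
```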

🌿Good first issues

  • Add type hints and docstring coverage to the agent communication message-passing code in the metagpt/ core — improves maintainability and IDE support for contributors
  • Expand config/examples/ with missing provider templates (e.g., Replicate, Mistral API) and write validation tests in .github/workflows/ to catch config drift
  • Write integration tests for the SPO and AOT papers' reference implementations in examples/, which currently lack automated test coverage — see .github/workflows/unittest.yaml for the test harness

⭐Top contributors


πŸ“Recent commits

  • 11cdf46 — Merge pull request #1897 from Ruyuan37/windows_terminal_adaptation (better629)
  • de17c62 — [terminal.py] Add Windows Terminal support to terminal.py (Ruyuan37)
  • fc6e843 — Update README.md (better629)
  • 1dfce07 — Merge pull request #1786 from garylin2099/simplify_rz (better629)
  • 5aae56e — Merge pull request #1820 from cmodi-meta/llama-api-support (better629)
  • 2f0c7fb — Merge pull request #1849 from GasolSun36/feature/bugfix-config-model (better629)
  • cfb578f — pre-commit fix (GasolSun36)
  • a05eed2 — fix bugs for test (GasolSun36)
  • 46feec4 — fix_bug_for_config_model (GasolSun36)
  • a855e66 — Merge pull request #1848 from better629/main (better629)

🔒Security observations

  • High · Outdated OpenCV Dependency with Known Vulnerabilities — Dependencies/package file, opencv-python==4.6.0.66. opencv-python 4.6.0.66 is significantly outdated (released in 2022). This version contains multiple known CVEs, including memory corruption issues, denial-of-service vulnerabilities, and potential code-execution flaws. Fix: update to the latest stable version of opencv-python (4.8.x or newer). Run 'pip install --upgrade opencv-python' and test thoroughly with your codebase.
  • High · Obsolete pyshine Dependency — Dependencies/package file, pyshine==0.0.9. pyshine 0.0.9 is an extremely old version with minimal maintenance history. The package has limited security scrutiny and may contain unpatched vulnerabilities. Fix: evaluate whether pyshine is still necessary. If required, upgrade to the latest version or consider an alternative, well-maintained library.
  • Medium · No Hash Verification in pip Installation — Dockerfile, RUN pip install. The Dockerfile uses 'pip install --no-cache-dir -r requirements.txt' without hash verification. While dependencies are pinned in requirements.txt, there is no cryptographic validation of package integrity during installation. Fix: use pip's --require-hashes flag with hash-pinned requirements for supply-chain security: 'pip install --no-cache-dir --require-hashes -r requirements.txt'.
  • Medium · Chromium Installation Without Verification — Dockerfile, apt install chromium. The Dockerfile installs chromium via apt without verifying package signatures or versions. This could be vulnerable to man-in-the-middle attacks or compromised packages. Fix: specify explicit package versions ('apt install -y chromium=VERSION') and consider using a specific base-image digest instead of a tag for reproducibility.
  • Medium · Docker Image Runs with Default User Permissions — Dockerfile, missing USER directive. The Dockerfile does not create a non-root user for running the application. Running containers as root increases the blast radius of potential container escapes. Fix: add a non-root user ('RUN useradd -m metagpt' and 'USER metagpt') before the CMD instruction.
  • Medium · Base Image Uses 'slim' Tag Without Digest — Dockerfile, FROM statement. The base image 'nikolaik/python-nodejs:python3.9-nodejs20-slim' uses a tag without a SHA256 digest, making it vulnerable to tag-mutation attacks where the tag could be reassigned to a malicious image. Fix: use image digests ('FROM nikolaik/python-nodejs:python3.9-nodejs20-slim@sha256:DIGEST') for reproducible and verified builds.
  • Medium · Unpinned npm Package Installation — Dockerfile, npm install -g @mermaid-js/mermaid-cli. npm install without a lock file can resolve different dependency versions across builds; the mermaid-cli package and its dependencies are not pinned. Fix: pin the CLI version explicitly ('npm install -g @mermaid-js/mermaid-cli@VERSION'); note that npm ci does not support global installs.
  • Low · No Security Headers or Content Security Policy Documented — config/examples/ and docs/. While the project is a framework, no documented security headers or CSP policies are visible in the configuration examples. Fix: document security best practices for deployments and provide example configurations with appropriate security headers.
  • Low · Limited Version Support Window — SECURITY.md. SECURITY.md indicates only the current major version is supported, a very narrow support window for a framework used by multiple parties. Fix: consider maintaining security patches for at least 2-3 previous minor versions to allow reasonable upgrade timelines for users.
  • Low · Config Files in Repository Without Encryption — The repository contains example config files, including 'config/config2.yaml' and 'config/vault.example.yaml'.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
