RepoPilot

affaan-m/everything-claude-code

The agent harness performance optimization system. Skills, instincts, memory, security, and research-first development for Claude Code, Codex, Opencode, Cursor and beyond.

WAIT

Single-maintainer risk — review before adopting

  • Last commit 2d ago
  • 4 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Small team — 4 top contributors
  • Single-maintainer risk — top contributor 96% of commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Embed this verdict

[![RepoPilot: WAIT](https://repopilot.app/api/badge/affaan-m/everything-claude-code)](https://repopilot.app/r/affaan-m/everything-claude-code)

The badge live-updates from the latest cached analysis.

Onboarding doc

Onboarding: affaan-m/everything-claude-code

Generated by RepoPilot · 2026-05-04 · Source

Verdict

WAIT — Single-maintainer risk — review before adopting

  • Last commit 2d ago
  • 4 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Small team — 4 top contributors
  • ⚠ Single-maintainer risk — top contributor 96% of commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

TL;DR

Everything Claude Code (ECC) is a performance optimization system for AI agent harnesses — specifically Claude Code, Codex, Cursor, OpenCode, and Gemini. It layers structured skills (reusable capability modules stored under .agents/skills/), instincts, memory optimization, security scanning via ecc-agentshield, and MCP configurations on top of AI coding agents to improve their output quality and consistency. It ships as two npm packages (ecc-universal and ecc-agentshield) and a GitHub App with 150+ installs. The repo is a monorepo: the reusable agent layer lives under .agents/, where each skill is a self-contained directory containing a SKILL.md and an agents/openai.yaml spec; the npm packages compile TypeScript source to dist/; and .agents/plugins/marketplace.json is a registry cataloging the available skills. A commands/, prompts/, and instructions/ directory structure (referenced in the package.json files) holds the runtime content delivered to agent harnesses.

Who it's for

Developers who use AI agent harnesses (Claude Code, Cursor, OpenCode, Codex) daily to build real products and want production-grade, repeatable agent behavior — specifically engineers who are frustrated by inconsistent LLM output and want to inject structured skills, memory, and security guardrails without writing prompt engineering from scratch every time.

Maturity & risk

The repo claims 140K+ stars, 21K+ forks, and 170+ contributors, with v2.0.0-rc.1 currently in release-candidate stage, indicating active but still pre-stable development. It has been in intensive daily use for 10+ months per the README, and the package.json shows TypeScript build infrastructure and a prepublishOnly pipeline. Verdict: actively developed and battle-tested in real product workflows, but the rc.1 version tag means breaking changes are still possible.

Single primary maintainer (affaan-m) is the dominant risk — 170 contributors exist but the npm packages are under one author account. The ecc-universal peerDependency on @opencode-ai/plugin >=1.0.0 ties it to a third-party package (@opencode-ai/plugin ^1.4.3) that is itself relatively new and could introduce breaking changes. The v2.0.0-rc.1 tag means the public Hermes operator layer is not yet stable, so production use of the newest features carries real upgrade risk.

Active areas of work

The active focus is the v2.0.0-rc.1 release, which introduces the 'Hermes operator story' — a named public persona/workflow layer built on top of the reusable ECC core. The marketplace.json plugin registry under .agents/plugins/ and the growing skills library (visible across 50+ skill directories) suggest ongoing skill authoring is the primary development activity.

Get running

git clone https://github.com/affaan-m/everything-claude-code.git
cd everything-claude-code
npm install
npm run build

To use ecc-universal as a plugin in OpenCode:

npm install ecc-universal

Or install agentshield:

npm install ecc-agentshield

Daily commands:

npm run build   # compiles TypeScript src → dist/
npm run clean   # rm -rf dist/

No dev server — this is a library/plugin package. To test a skill locally, point your Claude Code or OpenCode harness at the local .agents/skills/ directory.

Map of the codebase

  • .agents/plugins/marketplace.json — Central registry of all available agent plugins; defines what skills/tools are discoverable and installable across the harness ecosystem.
  • .agents/skills/everything-claude-code/SKILL.md — The meta-skill that documents ECC's own conventions, serving as the canonical reference for how all other skills must be structured.
  • .agents/skills/security-review/SKILL.md — Defines the security review skill that gates code changes, making it load-bearing for the agent harness's safety model.
  • .agents/skills/deep-research/SKILL.md — Specifies the research-first development workflow that underpins ECC's core philosophy of consulting docs before coding.
  • .agents/skills/coding-standards/SKILL.md — Establishes shared coding conventions enforced across all agent-generated code; every contributor must align to these standards.
  • .agents/skills/eval-harness/SKILL.md — Defines the evaluation harness skill used to benchmark and validate agent output quality and performance regressions.
  • .agents/skills/mcp-server-patterns/SKILL.md — Documents the Model Context Protocol server integration patterns that allow ECC to connect to external tool providers.

How to make changes

Add a New Skill

  1. Create a new directory under .agents/skills/ named after your skill using kebab-case (e.g., my-new-skill/). (.agents/skills/my-new-skill)
  2. Write SKILL.md following the conventions established in the meta-skill, including purpose, trigger conditions, step-by-step workflow, and output format. (.agents/skills/my-new-skill/SKILL.md)
  3. Create an agents/ subdirectory and add an openai.yaml manifest declaring model, system_prompt, tools, and any required env variables, mirroring the pattern in deep-research. (.agents/skills/my-new-skill/agents/openai.yaml)
  4. Register the new skill in the marketplace manifest with its name, description, path, and tags so it is discoverable. (.agents/plugins/marketplace.json)
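The four steps above can be sketched as a small scaffolding script. This is illustrative only: the SKILL.md section names, the openai.yaml fields, and the marketplace.json entry shape are assumptions modeled on this doc's description, not the repo's actual schema.

```typescript
// scaffold-skill.ts — hypothetical sketch of the four steps above.
import * as fs from "node:fs";
import * as path from "node:path";

export function scaffoldSkill(repoRoot: string, name: string): void {
  // Step 1: kebab-case directory under .agents/skills/
  const skillDir = path.join(repoRoot, ".agents", "skills", name);
  fs.mkdirSync(path.join(skillDir, "agents"), { recursive: true });

  // Step 2: SKILL.md stub with the sections the meta-skill calls for.
  fs.writeFileSync(
    path.join(skillDir, "SKILL.md"),
    `# ${name}\n\n## Purpose\n\n## Trigger conditions\n\n## Workflow\n\n## Output format\n`
  );

  // Step 3: agents/openai.yaml stub (field names are illustrative).
  fs.writeFileSync(
    path.join(skillDir, "agents", "openai.yaml"),
    `name: ${name}\ndescription: TODO\nmodel: gpt-4o\n`
  );

  // Step 4: register in the marketplace manifest (entry shape assumed).
  const marketplacePath = path.join(repoRoot, ".agents", "plugins", "marketplace.json");
  fs.mkdirSync(path.dirname(marketplacePath), { recursive: true });
  const marketplace = fs.existsSync(marketplacePath)
    ? JSON.parse(fs.readFileSync(marketplacePath, "utf8"))
    : { skills: [] };
  marketplace.skills.push({ name, path: `.agents/skills/${name}`, tags: [] });
  fs.writeFileSync(marketplacePath, JSON.stringify(marketplace, null, 2));
}
```

Before opening a PR, diff the generated stubs against an existing skill such as deep-research to confirm the real conventions.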

Add a New Agent Runtime Target

  1. Choose an existing skill directory as a reference (e.g., .agents/skills/deep-research/agents/) and create a new YAML file for the target runtime (e.g., claude.yaml, cursor.yaml). (.agents/skills/deep-research/agents/openai.yaml)
  2. Define runtime-specific fields such as model identifiers, context window limits, tool-call format, and any harness-specific headers in the new YAML. (.agents/skills/deep-research/agents/claude.yaml)
  3. Update the marketplace.json entry for the skill to list the new runtime target under its supported_agents or runtimes array. (.agents/plugins/marketplace.json)
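A harness-side resolver for the per-runtime YAML layout described above might look like the following. The fallback-to-openai.yaml behavior is an assumption about how a runtime could pick a manifest, not documented ECC behavior.

```typescript
// select-runtime.ts — illustrative resolver for per-runtime manifests
// under <skill>/agents/ (e.g. claude.yaml, cursor.yaml, openai.yaml).
import * as fs from "node:fs";
import * as path from "node:path";

export function resolveManifest(skillDir: string, runtime: string): string | null {
  const agentsDir = path.join(skillDir, "agents");
  const preferred = path.join(agentsDir, `${runtime}.yaml`);
  if (fs.existsSync(preferred)) return preferred;
  // Assumed fallback: openai.yaml as the default target every skill ships.
  const fallback = path.join(agentsDir, "openai.yaml");
  return fs.existsSync(fallback) ? fallback : null;
}
```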

Add a Reference Document to a Skill

  1. Create a references/ subdirectory inside the target skill folder, following the brand-voice skill as the established pattern. (.agents/skills/brand-voice/references)
  2. Add the reference document (Markdown or JSON schema) to the references/ directory with a descriptive filename. (.agents/skills/brand-voice/references/voice-profile-schema.md)
  3. Update SKILL.md to explicitly instruct the agent to load and consult the reference document during execution, citing its relative path. (.agents/skills/brand-voice/SKILL.md)

Add a New Workflow Orchestration Skill

  1. Study the dmux-workflows skill as the canonical multi-step orchestration example to understand sequencing and sub-agent delegation patterns. (.agents/skills/dmux-workflows/SKILL.md)
  2. Create the new skill directory and SKILL.md with explicit phase definitions, sub-agent spawn instructions, and output aggregation steps. (.agents/skills/my-workflow/SKILL.md)
  3. Add the openai.yaml agent manifest with tool definitions for each sub-agent call or external API interaction in the workflow. (.agents/skills/my-workflow/agents/openai.yaml)
  4. Register in the marketplace with workflow category tag so orchestrators can discover and chain it with other skills. (.agents/plugins/marketplace.json)

Why these technologies

  • Markdown (SKILL.md) — Human-readable, git-diffable skill definitions that LLMs consume natively without parsing overhead; no build step required.
  • YAML (openai.yaml agent manifests) — Declarative, concise format for model configuration and tool schemas that maps directly to OpenAI Assistants API and compatible runtimes.
  • JSON (marketplace.json) — Machine-parseable skill registry enabling programmatic discovery, filtering, and installation by agent orchestrators and CLI tooling.
  • TypeScript / ESM (ecc-universal npm package) — Type-safe plugin API with explicit ESM exports enables tree-shaking and clean integration into modern JS runtimes including Bun and Node 18+.
  • Multi-harness design (Claude Code, Codex, Cursor, OpenCode) — Targeting multiple agent runtimes through per-skill agent manifests keeps one skill definition portable across harnesses rather than tied to a single vendor.

Traps & gotchas

The ecc-universal package has a peerDependency on @opencode-ai/plugin >=1.0.0 which must be installed in the consuming project — loading will fail if it is missing. The agents/openai.yaml skill specs are written for the OpenAI agent spec format, so they require adaptation if you are using Claude Code's native CLAUDE.md format directly. Node.js >=18.0.0 is required per the engines field in package.json. The package uses ESM ("type": "module"), so CommonJS require() interop will break.
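The ESM interop trap has a standard workaround: on Node 18+, require() of an ESM-only package such as ecc-universal throws ERR_REQUIRE_ESM, but dynamic import() works from both CommonJS and ESM callers. A minimal sketch (the export shape of ecc-universal is not assumed here):

```typescript
// From CommonJS code, require("ecc-universal") throws ERR_REQUIRE_ESM
// because the package is ESM-only ("type": "module"). Dynamic import()
// is async but works from either module system.
async function importEsmOnly(specifier: string): Promise<Record<string, unknown>> {
  return (await import(specifier)) as Record<string, unknown>;
}

// e.g. const ecc = await importEsmOnly("ecc-universal");
```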

Architecture

Concepts to learn

  • Agent Harness — ECC's entire purpose is to sit on top of agent harnesses — understanding what a harness is (the runtime shell that wraps an LLM with tools, context, and hooks) is foundational to understanding what ECC optimizes.
  • Model Context Protocol (MCP) — ECC ships MCP configurations that connect agent harnesses to external tool servers — MCP is the protocol that standardizes how LLMs call external tools.
  • OpenAI Agent Spec (YAML) — Every skill in .agents/skills/*/agents/openai.yaml uses this format to declare sub-agent behavior — contributors must understand this schema to author new skills.
  • Prompt Injection / Agent Security — The ecc-agentshield package exists specifically to scan for prompt injection attacks targeting agents — a non-obvious attack surface that becomes critical when agents have tool access.
  • Skill Composition in Multi-Agent Systems — ECC's architecture decomposes agent capability into discrete, composable skills (each a self-contained SKILL.md + YAML) rather than monolithic system prompts — this pattern determines how the entire .agents/skills/ directory is structured.
  • ESM-only npm packages — The package.json sets "type": "module" making ecc-universal pure ESM — this causes hard-to-debug interop failures in CommonJS projects and requires Node 18+.

Related repos

  • paul-gauthier/aider — Aider is a direct alternative AI coding agent harness with its own conventions and prompt engineering system, targeting the same developer audience.
  • continuedev/continue — Continue is an open-source AI coding assistant (VS Code/JetBrains) that also uses a skills/context injection model for improving agent consistency.
  • modelcontextprotocol/servers — ECC ships MCP configurations — this is the official MCP server registry that those configurations connect to.
  • anthropics/claude-code — The primary target harness ECC is built to optimize — understanding Claude Code's native hook and CLAUDE.md system is essential context.
  • opencode-ai/opencode — ECC v2 ships ecc-universal explicitly as an OpenCode plugin with a peer dependency on @opencode-ai/plugin, making this the direct companion runtime.

PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add a unified skill validation test suite for all SKILL.md files under .agents/skills/

The repo contains 20+ skill directories (e.g. .agents/skills/deep-research, .agents/skills/exa-search, .agents/skills/fal-ai-media), each with a SKILL.md and a paired openai.yaml. There are currently no automated tests to verify that every skill directory has both required files, that the YAML is well-formed, or that SKILL.md follows a consistent schema. A broken or malformed skill silently fails at runtime, which is especially risky for a marketplace-published plugin. A test suite here would catch regressions when contributors add new skills.

  • [ ] Create src/tests/validate-skills.test.ts (or test/validate-skills.spec.ts) using Node's built-in test runner or Bun test, since the project already targets Bun/Node >=18
  • [ ] For each directory under .agents/skills/, assert that SKILL.md exists and is non-empty
  • [ ] For each directory under .agents/skills/, assert that agents/openai.yaml exists and parses as valid YAML (use 'js-yaml' or the built-in YAML parser available in Bun)
  • [ ] Assert that openai.yaml contains required top-level keys (e.g. 'name', 'description', 'model') by cross-referencing the schema implied by existing files like .agents/skills/deep-research/agents/openai.yaml
  • [ ] Add a 'test' script to package.json pointing at the new suite
  • [ ] Verify .agents/plugins/marketplace.json lists every skill directory — fail if a skill folder exists but is absent from the marketplace index
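The existence checks from the checklist above can start from a sketch like this, using only node:fs. The YAML well-formedness and required-key assertions are omitted here because they need a YAML parser such as js-yaml.

```typescript
// validate-skills.ts — sketch of the file-existence checks only.
import * as fs from "node:fs";
import * as path from "node:path";

export function validateSkills(skillsRoot: string): string[] {
  const problems: string[] = [];
  for (const entry of fs.readdirSync(skillsRoot, { withFileTypes: true })) {
    if (!entry.isDirectory()) continue;
    const dir = path.join(skillsRoot, entry.name);
    const skillMd = path.join(dir, "SKILL.md");
    if (!fs.existsSync(skillMd) || fs.statSync(skillMd).size === 0) {
      problems.push(`${entry.name}: missing or empty SKILL.md`);
    }
    if (!fs.existsSync(path.join(dir, "agents", "openai.yaml"))) {
      problems.push(`${entry.name}: missing agents/openai.yaml`);
    }
  }
  return problems; // empty array means every skill passed
}
```

Wiring this into `npm test` via Node's built-in test runner would make the checks run on every contribution.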

Add a GitHub Actions CI workflow that builds the TypeScript package and validates the skill corpus on every PR

The package.json has a 'build': 'tsc' script and a 'prepublishOnly' hook, but there is no .github/workflows/ directory visible in the file listing, meaning no automated build or lint runs on pull requests. For a multi-contributor open-source plugin that is published to npm as 'ecc-universal', a broken dist/ or a TypeScript error introduced by a contributor would only surface at publish time. A targeted CI workflow would catch both TS compilation failures and the skill-file integrity issues described in the first PR idea.

  • [ ] Create .github/workflows/ci.yml
  • [ ] Set it to trigger on 'pull_request' and 'push' to main/master
  • [ ] Add a 'build' job: checkout → setup Node 20 (engines field specifies >=18) → npm ci → npm run build → assert dist/ is non-empty
  • [ ] Add a 'validate-skills' job that runs the new test suite from PR #1 (or a lightweight shell script using yq/python -c yaml.safe_load if the test suite is not yet merged)
  • [ ] Add a 'lint-json' step that validates .agents/plugins/marketplace.json with 'node -e "JSON.parse(require('fs').readFileSync('.agents/plugins/marketplace.json','utf8'))"' to prevent malformed marketplace entries
  • [ ] Cache node_modules using actions/cache keyed on package-lock.json to keep CI fast

Generate and commit per-skill README documentation from each SKILL.md + openai.yaml pair, and wire them into a top-level skills index

The repo's README links to docs/ translations but there is no browsable index of the 20+ skills in .agents/skills/. A new contributor or user evaluating the marketplace cannot easily discover what 'eval-harness', 'dmux-workflows', 'content-engine', or 'brand-voice' actually do without cloning the repo and reading individual SKILL.md files. Adding a codegen script that reads every SKILL.md and the 'description' field from the corresponding openai.yaml and emits a docs/skills/README.md index table would make the project significantly more approachable.
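The codegen described above could start from a sketch like this. The `description:` extraction is a naive regex over the YAML text; a real implementation would parse the YAML properly and handle multi-line values.

```typescript
// skills-index.ts — hypothetical generator for a docs/skills index table.
import * as fs from "node:fs";
import * as path from "node:path";

export function buildSkillsIndex(skillsRoot: string): string {
  const rows: string[] = ["| Skill | Description |", "| --- | --- |"];
  for (const name of fs.readdirSync(skillsRoot).sort()) {
    const yamlPath = path.join(skillsRoot, name, "agents", "openai.yaml");
    if (!fs.existsSync(yamlPath)) continue; // skip non-skill entries
    // Naive single-line extraction; swap for a YAML parser in real use.
    const match = fs.readFileSync(yamlPath, "utf8").match(/^description:\s*(.+)$/m);
    rows.push(`| ${name} | ${match ? match[1].trim() : ""} |`);
  }
  return rows.join("\n") + "\n";
}
```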

Good first issues

  1. Add missing test coverage: there are no visible test files (*.test.ts or *.spec.ts) in the file structure — adding unit tests for the TypeScript plugin loader in dist/plugins/index.js would be a concrete, high-value contribution.
  2. The brand-voice skill has a references/voice-profile-schema.md but no other skills have reference docs — documenting a schema for the deep-research or e2e-testing skills' reference inputs would improve consistency.
  3. The marketplace.json plugin registry likely lacks automated validation — adding a JSON Schema and a CI lint step to validate new skill registrations against it would prevent malformed entries.

Recent commits

  • 841beea — fix: handle dotted reserved snapshot names (affaan-m)
  • 61992f7 — fix: harden loop-status snapshot writes (affaan-m)
  • 2715315 — fix: avoid loop-status index snapshot collision (affaan-m)
  • 7627926 — fix: preserve loop-status output on snapshot errors (affaan-m)
  • 20154dd — feat: write loop-status snapshots (affaan-m)
  • bb40978 — fix: show correct gateguard hook recovery id (affaan-m)
  • 7c5452f — fix: keep gateguard destructive gate strict (affaan-m)
  • cfe770a — fix: add gateguard recovery escape hatch (affaan-m)
  • 4c8499d — docs: clarify loop-status exit-code watch constraint (affaan-m)
  • 85dfb5e — test: isolate loop-status missing transcript fixture (affaan-m)

Security observations

  • High · Incomplete .env.example — GITHUB_USER value partially exposed — .env.example. The .env.example file contains a truncated but partially filled GITHUB_USER value ('your-git...'), suggesting the actual .env file may have been accidentally committed or that a real value was copy-pasted into the example template. This pattern increases the risk of credential leakage if the full .env is ever committed. Fix: Ensure .env.example contains only placeholder values (e.g., GITHUB_USER=your-github-username). Audit git history with tools like truffleHog or git-secrets to confirm no real credentials were ever committed. Add .env to .gitignore and enforce pre-commit hooks.
  • High · API Keys Defined Without Rotation or Vault Strategy — .env.example. ANTHROPIC_API_KEY and GITHUB_TOKEN are defined as plain environment variables with no mention of secret rotation, vault integration (e.g., HashiCorp Vault, AWS Secrets Manager), or minimum-privilege scoping. In an AI agent harness context, these keys are likely consumed at runtime by automated agents, making key compromise especially impactful. Fix: Use a secrets management solution (e.g., AWS Secrets Manager, Doppler, Vault). Enforce short-lived tokens where possible (e.g., GitHub fine-grained PATs with expiry). Document required token scopes to enforce least-privilege. Never pass secrets as build args in Docker.
  • High · Marketplace Plugin Loading Without Integrity Verification — .agents/plugins/marketplace.json. The file .agents/plugins/marketplace.json implies dynamic plugin loading from a marketplace. If plugins are fetched and executed at runtime without cryptographic integrity checks (e.g., SHA-256 hash pinning, signature verification), a supply chain attack could inject malicious agent behavior — especially dangerous in an AI coding agent that has filesystem and shell access. Fix: Implement content hash pinning for all marketplace plugin entries. Verify signatures before loading. Sandbox plugin execution. Maintain an allowlist of approved plugin sources and versions. Consider a code review gate before any new plugin is added to the marketplace manifest.
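The hash-pinning mitigation suggested above can be sketched in a few lines. Note the `pinnedSha256` value would have to come from a marketplace entry field that does not currently exist — it is a proposed addition, not part of the repo's actual marketplace.json schema.

```typescript
// Sketch of content-hash pinning: refuse to load a plugin file whose
// SHA-256 digest does not match the digest pinned at review time.
import { createHash } from "node:crypto";
import * as fs from "node:fs";

export function verifyPlugin(filePath: string, pinnedSha256: string): boolean {
  const digest = createHash("sha256")
    .update(fs.readFileSync(filePath))
    .digest("hex");
  return digest === pinnedSha256; // load only on exact match
}
```

Signature verification and sandboxed execution would layer on top of this; hash pinning alone only guarantees the loaded bytes match what was reviewed.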
  • High · Agent YAML Configurations May Enable Arbitrary Tool/Command Execution — .agents/skills/*/agents/openai.yaml. Numerous openai.yaml files under .agents/skills/*/agents/ likely define tool schemas or system prompts for AI agents. Without strict input validation and output sandboxing, these agent definitions could be exploited via prompt injection to execute unintended commands, exfiltrate data, or abuse integrated tools (e.g., exa-search, fal-ai-media, mcp-server-patterns). Fix: Apply strict input/output validation for all agent tool calls. Implement prompt injection defenses (e.g., delimiters, instructional hierarchy). Run agents in sandboxed environments with restricted filesystem and network access. Log all tool invocations for auditability. Review each SKILL.md for overly broad tool permissions.
  • Medium · Outdated or Unpinned Peer Dependency — package.json. The peerDependency '@opencode-ai/plugin': '>=1.0.0' uses an unbounded version range. This means any future major version (including potentially breaking or vulnerable versions) of the plugin will satisfy the dependency constraint. Combined with devDependency pinned to '^1.4.3', there is a mismatch that could introduce unexpected behavior in production environments. Fix: Pin peerDependencies to a specific compatible range (e.g., '>=1.4.0 <2.0.0'). Regularly audit dependencies with 'npm audit'. Consider using lockfile pinning and Dependabot/Renovate for automated dependency updates with security scanning.
  • Medium · No Docker Security Hardening Visible — Docker configuration (not fully visible). The .env.example references a DOCKER_PLATFORM override, implying Docker is used in the workflow. However, no Dockerfile or docker-compose configuration is provided for review. Common Docker misconfigurations in AI agent environments include running as root, mounting sensitive host directories, exposing unnecessary ports, and missing resource limits — all of which are especially risky for an autonomous coding agent. Fix: Ensure containers run as non-root users. Avoid mounting host directories like /var/run/docker.sock unless strictly necessary. Use read-only filesystems where possible. Apply seccomp/AppArmor profiles. Set memory and CPU limits. Use multi-stage builds

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
