RepoPilot

alchaincyf/nuwa-skill

The next employee you distill needn't be a colleague. Distill anyone's way of thinking: mental models, decision heuristics, expression DNA. Distill how anyone thinks.

Mixed

Solo project — review before adopting

Use as dependency — Mixed (weakest axis)

single-maintainer (no co-maintainers visible); no tests detected…

Fork & modifyHealthy

MIT-licensed with a simple, consistent structure: a workable base to fork, though no tests or CI were detected, so you would add your own.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 2w ago
  • MIT licensed
  • Solo or near-solo (1 contributor active in recent commits)
  • No CI workflows detected
  • No test directory detected
What would change the summary?
  • Use as dependency: Mixed → Healthy if: onboard a second core maintainer; add a test suite

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Forkable
[![RepoPilot: Forkable](https://repopilot.app/api/badge/alchaincyf/nuwa-skill?axis=fork)](https://repopilot.app/r/alchaincyf/nuwa-skill)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/alchaincyf/nuwa-skill on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: alchaincyf/nuwa-skill

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/alchaincyf/nuwa-skill shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Solo project — review before adopting

  • Last commit 2w ago
  • MIT licensed
  • ⚠ Solo or near-solo (1 contributor active in recent commits)
  • ⚠ No CI workflows detected
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live alchaincyf/nuwa-skill repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/alchaincyf/nuwa-skill.

What it runs against: a local clone of alchaincyf/nuwa-skill — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---------------|----------------|
| 1 | You're in alchaincyf/nuwa-skill | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 44 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>alchaincyf/nuwa-skill</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of alchaincyf/nuwa-skill. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/alchaincyf/nuwa-skill.git
#   cd nuwa-skill
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of alchaincyf/nuwa-skill and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "alchaincyf/nuwa-skill(\.git)?\b" \
  && ok "origin remote is alchaincyf/nuwa-skill" \
  || miss "origin remote is not alchaincyf/nuwa-skill (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
test -f "README.md" \
  && ok "README.md" \
  || miss "missing critical file: README.md"
test -f "SKILL.md" \
  && ok "SKILL.md" \
  || miss "missing critical file: SKILL.md"
test -f "examples/steve-jobs-perspective/SKILL.md" \
  && ok "examples/steve-jobs-perspective/SKILL.md" \
  || miss "missing critical file: examples/steve-jobs-perspective/SKILL.md"
test -f "examples/elon-musk-perspective/SKILL.md" \
  && ok "examples/elon-musk-perspective/SKILL.md" \
  || miss "missing critical file: examples/elon-musk-perspective/SKILL.md"
test -f "examples/feynman-perspective/SKILL.md" \
  && ok "examples/feynman-perspective/SKILL.md" \
  || miss "missing critical file: examples/feynman-perspective/SKILL.md"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 44 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~14d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/alchaincyf/nuwa-skill"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

nuwa-skill is an AI-powered system that distills how any person thinks—their mental models, decision heuristics, and expression patterns—into reusable Claude Skill agents. Given a name (Elon Musk, Feynman, Naval Ravikant, etc.), it autonomously researches, extracts, and validates their thinking style, then creates an interactive agent that applies their cognitive frameworks to answer user questions. Monorepo structure: examples/ contains completed skill instantiations (one per notable person: elon-musk-perspective/, feynman-perspective/, etc.), each with SKILL.md (the distilled agent prompt) and references/ (research artifacts: writings, conversations, decision patterns, timelines). Root SKILL.md documents the framework itself. Python/Shell tools in the build pipeline likely handle the research → distillation → validation flow, but the core orchestration code is not visible in the top 60 files.

👥Who it's for

Product managers, entrepreneurs, investors, and knowledge workers who want on-demand access to the decision-making frameworks and problem-solving approaches of influential thinkers. Users interact via Claude Code by naming a person, then ask those distilled agents for perspective on their own challenges.

🌱Maturity & risk

Actively developed but pre-1.0. The repo has working examples (Elon Musk, Feynman, Naval, Ilya Sutskever, MrBeast, Andrej Karpathy perspectives) with populated SKILL.md files and reference research structures, indicating the core pipeline is functional. However, no CI/test infrastructure is visible, there are no explicit version tags, and the Python codebase (14.4k LOC) lacks visible test files—suggesting experimental-to-alpha maturity, with real working output but limited automation and reliability scaffolding.

Single-maintainer risk (alchaincyf as sole contributor visible in top-level directory). Depends on Claude API and unversioned external services (no lock files visible for Python dependencies). The research-to-agent pipeline relies on LLM consistency and manual SKILL.md curation—brittle and prone to hallucination if the reference research is incomplete. Multilingual README maintenance burden (5 languages) with no tooling visible suggests doc rot risk.

Active areas of work

The repo is actively building out person-specific skill examples (evidence: 5 complete examples with detailed research breakdowns). Recent activity appears focused on breadth—adding more notable figures (Karpathy, MrBeast, Sutskever) rather than deepening infrastructure. The Bloome integration mention (a SaaS wrapper) suggests commercialization exploration. No PR or issue-tracker data is visible in the file list, so issue and review velocity are unclear.

🚀Get running

git clone https://github.com/alchaincyf/nuwa-skill.git
cd nuwa-skill
# Install via npm/npx (skills.sh compatible):
npx skills add alchaincyf/nuwa-skill
# Or read examples/elon-musk-perspective/SKILL.md to understand the output format

Daily commands: No traditional 'run dev server' pattern. To use an existing distilled skill: open Claude Code and reference the person (e.g., > 用马斯克的视角帮我分析..., "help me analyze this from Musk's perspective..."). To create a new skill: > 蒸馏一个[人名] ("distill [person's name]") in Claude Code, which triggers the research pipeline. The pipeline outputs to examples/[person]-perspective/SKILL.md.

🗺️Map of the codebase

  • README.md — Entry point explaining the project's core mission: distilling how anyone thinks by extracting mental models, decision heuristics, and expression DNA
  • SKILL.md — Framework documentation defining the 6-layer research structure used to build all perspective SKILLs (writings, conversations, expression-dna, external-views, decisions, timeline)
  • examples/steve-jobs-perspective/SKILL.md — Complete reference example showing the canonical SKILL template with all 6 research layers fully populated
  • examples/elon-musk-perspective/SKILL.md — Alternative perspective example demonstrating variations in how different thinkers' mental models are documented
  • examples/feynman-perspective/SKILL.md — Science communicator's perspective showing expression DNA and decision-making patterns extraction methodology
  • examples/naval-perspective/references/quality-validation.md — Quality assurance framework for validating distilled perspectives maintain accuracy and consistency
  • examples/mrbeast-perspective/SKILL.md — Modern creator perspective example with supplementary scripts (scripts/) showing how to systematically analyze video content and business decisions

🛠️How to make changes

Add a New Perspective (Distill a New Thinker)

  1. Create a new directory under examples/ following the naming convention: examples/{person-name}-perspective/ (examples/steve-jobs-perspective)
  2. Copy the 6-layer research structure from SKILL.md into your new perspective's SKILL.md file (examples/{person-name}-perspective/SKILL.md)
  3. Create references/ subdirectory and populate research/ folder with 6 markdown files: 01-writings.md, 02-conversations.md, 03-expression-dna.md, 04-external-views.md, 05-decisions.md, 06-timeline.md (examples/{person-name}-perspective/references/research/)
  4. Fill each research file with curated sources and extracted patterns following the framework in SKILL.md (examples/{person-name}-perspective/references/research/01-writings.md)
  5. Write the final SKILL.md summary synthesizing all 6 research layers into coherent mental models, decision heuristics, and expression DNA (examples/{person-name}-perspective/SKILL.md)
  6. If applicable, add validation notes by creating a quality-validation.md in references/, following the pattern in naval-perspective (examples/{person-name}-perspective/references/quality-validation.md)
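The scaffolding steps above can be sketched as a small helper. This is hypothetical tooling the repo does not ship; the script name and placeholder file contents are illustrative, but the directory layout follows the 6-layer convention described here:

```python
#!/usr/bin/env python3
"""Sketch: scaffold a new examples/{person}-perspective/ directory.

Hypothetical helper (not shipped by the repo); file names mirror the
6-layer research structure documented in the root SKILL.md.
"""
from pathlib import Path
import sys

RESEARCH_FILES = [
    "01-writings.md",
    "02-conversations.md",
    "03-expression-dna.md",
    "04-external-views.md",
    "05-decisions.md",
    "06-timeline.md",
]

def scaffold(person: str, root: Path = Path("examples")) -> Path:
    """Create the perspective skeleton and return its base directory."""
    base = root / f"{person}-perspective"
    research = base / "references" / "research"
    research.mkdir(parents=True, exist_ok=True)
    for name in RESEARCH_FILES:
        # "03-expression-dna.md" -> heading "expression-dna"
        layer = name.split("-", 1)[1].removesuffix(".md")
        (research / name).write_text(f"# {layer}\n\n<!-- curated sources go here -->\n")
    (base / "SKILL.md").write_text(f"# {person} perspective\n\n<!-- synthesis of the 6 layers -->\n")
    return base

if __name__ == "__main__" and len(sys.argv) > 1:
    scaffold(sys.argv[1])
```

Running `python scaffold_perspective.py feynman` (the filename is an assumption) would leave you with empty research files to populate in steps 4–5.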

Extract Patterns from Media (Video/Content Analysis)

  1. For content creators, create a scripts/ directory alongside references/ (examples/mrbeast-perspective/scripts/)
  2. Add shell script to fetch transcripts/subtitles from source media (e.g., fetch_youtube_subtitles.sh) (examples/mrbeast-perspective/scripts/fetch_youtube_subtitles.sh)
  3. Create Python scripts for pattern analysis (titles, engagement, visual design) targeting your analysis domain (examples/mrbeast-perspective/scripts/analyze_titles.py)
  4. Feed extracted patterns into the research markdown files, organizing by the 6-layer framework (examples/{person-name}-perspective/references/research/03-expression-dna.md)
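A minimal sketch of what such a title-analysis step might look like. The repo's actual analyze_titles.py was not inspected, so the pattern categories below are illustrative assumptions, not its real logic:

```python
"""Sketch: tally surface patterns across video titles so they can be
cited as evidence in 03-expression-dna.md.

Assumption: the pattern categories (dollar amounts, first-person
openers, emphatic endings) are examples, not the repo's taxonomy.
"""
from collections import Counter
import re

def title_patterns(titles: list[str]) -> Counter:
    """Count simple recurring surface patterns in a list of titles."""
    patterns: Counter = Counter()
    for t in titles:
        if re.search(r"\$[\d,]+", t):          # e.g. "$1,000,000"
            patterns["dollar-amount"] += 1
        if re.match(r"\s*I\b", t):             # title opens in first person
            patterns["first-person-opener"] += 1
        if t.endswith(("!", "?")):             # emphatic punctuation
            patterns["emphatic-ending"] += 1
    return patterns
```

The resulting counts give the research file concrete frequencies ("7 of 10 recent titles open in first person") instead of impressions.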

Extend with Domain-Specific Analysis

  1. For specialized perspectives (e.g., cognitive frameworks), add custom reference files like munger-perspective's 25-biases.md (examples/munger-perspective/references/25-biases.md)
  2. Create additional markdown files under references/ for domain-specific deep dives (e.g., system thinking, financial models) (examples/munger-perspective/references/查理芒格思想体系深度调研-20260404.md)
  3. Link these custom analyses from the main SKILL.md to provide readers with entry points to specialized content (examples/munger-perspective/SKILL.md)

🔧Why these technologies

  • Markdown Documentation — Human-readable format enables both AI and human readers to understand and extend perspectives; supports version control via Git; language-agnostic for easy translation
  • 6-Layer Research Structure — Systematic decomposition of thinking into: primary works, real-time dialogue, communication style, external validation, decision patterns, and temporal evolution—comprehensively captures mental models
  • GitHub Repository — Distributed collaboration, community contributions, version history, and accessibility for both manual and AI-driven skill extraction
  • Python Scripts (for media analysis) — Enables automated extraction of patterns from structured data (video metadata, transcripts, engagement metrics)

⚖️Trade-offs already made

  • Manual research-layer population rather than fully automated extraction

    • Why: Nuanced thought patterns, hidden assumptions, and subtle heuristics require human judgment and contextual understanding to accurately capture
    • Consequence: Higher quality and interpretability, but slower scaling; not suitable for real-time perspective generation
  • Open-ended markdown format rather than structured JSON schema

    • Why: Maximizes flexibility for researchers to document unique aspects of each person's thinking without schema constraints
    • Consequence: Harder to programmatically parse and compare across perspectives, but more expressive and human-friendly
  • Emphasis on capturing 'expression DNA' (how they communicate) alongside mental models

    • Why: Enables AI agents to not just think like the person but also communicate with their characteristic style, tone, and metaphors
    • Consequence: Requires additional analysis layer (03-expression-dna.md) but creates richer, more authentic perspective distillations
  • Timeline layer (06-timeline.md) to track evolution of thinking

    • Why: Mental models and heuristics shift over time; capturing this evolution keeps the distilled agent from freezing a thinker at a single point in their career
    • Consequence: One more research file to maintain per perspective, but answers can distinguish early positions from later revisions

🪤Traps & gotchas

  1. Reference research freshness: The distilled agent is only as good as the research in examples/[person]-perspective/references/. If research is sparse or outdated, the agent will hallucinate or regress to generic persona mimicry; no validation pipeline is visible.
  2. No explicit test suite for consistency: Nothing verifies that the agent actually uses the claimed mental models rather than just matching surface-level tone. The 'verification' step mentioned in the README is not visible in the file tree.
  3. Language assumption: Python codebase; some scripts likely require UTF-8 and may break on Windows without proper encoding config. References heavily use Chinese; if contributing non-Chinese perspectives, documentation is sparse.
  4. Claude API rate limits: No visible queueing, retry, or circuit-breaker logic in the accessible files; bulk research requests could hit rate limits.
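For the rate-limit gotcha, a generic jittered-backoff wrapper is one mitigation. This is a sketch, not the Anthropic SDK's retry API; the exception type and delays are placeholders:

```python
"""Sketch: retry a flaky call with jittered exponential backoff.

Assumptions: `call` is any zero-argument function that raises on a
rate-limit error; `exc` should be narrowed to the real SDK's
rate-limit exception in practice.
"""
import random
import time

def with_backoff(call, *, retries=5, base_delay=1.0, exc=Exception):
    """Retry call() up to `retries` times, sleeping between attempts."""
    for attempt in range(retries):
        try:
            return call()
        except exc:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            # sleep ~1s, ~2s, ~4s, ... with jitter to avoid thundering herd
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

Bulk research loops would wrap each API request in `with_backoff(lambda: client.request(...), exc=RateLimitError)` style, with the real exception class substituted in.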

💡Concepts to learn

  • Mental Model Extraction — The core of nuwa-skill: identifying and codifying how a person thinks (e.g., Musk's 'first-principles reasoning,' Naval's 'opportunity cost thinking') from unstructured references, then embedding those models as LLM system prompts. Without understanding this, you can't improve the distillation quality.
  • System Prompt Engineering for Persona Consistency — The SKILL.md files are not generic Q&A responders—they're carefully crafted Claude system prompts that instruct the model to reason like a specific person. This requires understanding how prompt structure affects reasoning patterns and authenticity.
  • Expression DNA (Communication Patterns) — nuwa-skill distinguishes between what a person thinks (mental models) and how they express it (tone, metaphors, rhetorical style). This distinction is critical for authentic persona imitation; the 'expression DNA' research files capture this.
  • Few-Shot Prompt Learning — The examples in README.md (Naval on desires, Jobs on strategy) serve as few-shot demonstrations for the distilled agent. Understanding why these examples are placed in SKILL.md and how they ground the agent's behavior is essential for improving distillation.
  • Research-Driven AI Customization — Rather than fine-tuning or RAG, nuwa-skill uses a pipeline of qualitative research → pattern extraction → prompt synthesis. This is a data-efficient alternative to traditional model adaptation and scales across different figures without retraining.
  • Decision Pattern Analysis (Heuristics) — Files like examples/elon-musk-perspective/references/马斯克决策模式与行为分析.md catalog recurring decision heuristics (shortcuts used under uncertainty). Extracting these patterns is harder than collecting quotes but yields more robust agent behavior.
  • Claude Skills / Tool Use Integration — nuwa-skill operates within the Claude Skills ecosystem (npm install via skills.sh). Understanding how Claude Code exposes and invokes skills is necessary to deploy and iterate on distilled agents in practice.
Related repositories

  • titanwings/colleague-skill — Direct predecessor mentioned in the README: proves that distilling a single person's thinking into a Claude skill is feasible. nuwa-skill extends this from 'one colleague' to 'any notable person' with systematic research pipelines.
  • openai/gpt-4-system-prompts — Similar use of system prompts to embed cognitive frameworks; useful reference for how to structure SKILL.md files and reasoning chains in LLM prompts.
  • anthropics/constitutional-ai — Shares the goal of encoding values and decision patterns into an AI system; constitutional AI's principle-based approach parallels how nuwa-skill extracts 'mental models' from research.
  • dair-ai/Prompt-Engineering-Guide — Comprehensive guide to prompt engineering; critical resource for understanding why SKILL.md structure matters and how to improve agent consistency via prompt design.
  • lgrammel/ai-sdk — Multi-LLM abstraction layer; if nuwa-skill wanted to support GPT-4, Gemini, or Llama alongside Claude, this is the ecosystem tool to integrate.

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Create automated validation pipeline for SKILL.md files across all examples

The repo has several example perspectives (Karpathy, Musk, Feynman, Sutskever, MrBeast, Munger, Naval, Paul Graham), each with a SKILL.md file. Currently there's no automated validation to ensure consistency in structure, required sections, or quality gates. Adding a Python/Node script with GitHub Actions would catch missing research files, incomplete reference sections, and inconsistent formatting before merge.

  • [ ] Create scripts/validate-skills.py to parse each examples/*/SKILL.md and verify: presence of all 6 research categories, non-empty reference files, and required metadata
  • [ ] Add GitHub Action .github/workflows/validate-skills.yml to run on PRs touching examples/ directory
  • [ ] Document validation rules in SKILL.md with a 'Validation Checklist' section
  • [ ] Test against current examples to ensure they pass (fix any that don't)
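The proposed validate-skills.py could start as small as this sketch. The rule set (non-empty SKILL.md, six numbered research files) is an assumption derived from the 6-layer structure described above and would need maintainer sign-off:

```python
#!/usr/bin/env python3
"""Sketch of the proposed scripts/validate-skills.py (not existing repo
code). Rules assumed: SKILL.md present and non-empty; research files
01- through 06- present under references/research/."""
from pathlib import Path
import sys

REQUIRED_PREFIXES = [f"{i:02d}-" for i in range(1, 7)]  # "01-" .. "06-"

def validate(example: Path) -> list[str]:
    """Return a list of problems for one examples/*-perspective/ dir."""
    problems = []
    skill = example / "SKILL.md"
    if not skill.is_file() or not skill.read_text().strip():
        problems.append(f"{example.name}: SKILL.md missing or empty")
    research = example / "references" / "research"
    found = {p.name[:3] for p in research.glob("*.md")} if research.is_dir() else set()
    for prefix in REQUIRED_PREFIXES:
        if prefix not in found:
            problems.append(f"{example.name}: no research file with prefix {prefix}")
    return problems

if __name__ == "__main__":
    all_problems = []
    for example in sorted(Path("examples").glob("*-perspective")):
        all_problems += validate(example)
    for p in all_problems:
        print("FAIL:", p)
    sys.exit(1 if all_problems else 0)
```

The non-zero exit makes it drop straight into the proposed GitHub Action as a single `python scripts/validate-skills.py` step.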

Build research methodology guide with verification templates in examples/

Each example has different research file naming and structure (some use numbered files like 01-writings.md, others use descriptions like 25-biases.md, some are in Chinese). New contributors cannot easily understand what constitutes complete research for a new perspective. Creating a standardized template with examples would dramatically improve contribution quality.

  • [ ] Create examples/RESEARCH_TEMPLATE.md with 6 required research categories: 1) Writings & Publications, 2) Conversations & Interviews, 3) Expression DNA, 4) External Views & Criticism, 5) Key Decisions & Actions, 6) Timeline & Evolution
  • [ ] Add examples/template-perspective/ directory with skeleton structure and markdown templates
  • [ ] Reference this template in main README.md and create examples/README.md with contribution guidelines
  • [ ] Update examples/*/SKILL.md files to link to RESEARCH_TEMPLATE.md for clarity

Add integration tests for perspective distillation accuracy using examples as test cases

The repo mentions examples like Karpathy, Musk, Feynman etc. are distilled perspectives, but there's no test suite validating that the SKILL.md output accurately reflects the research data. Creating unit/integration tests would catch regressions and ensure the distillation quality remains high as the methodology evolves.

  • [ ] Create tests/test_skill_accuracy.py with test cases that: read examples/*/SKILL.md content, verify key claims are backed by files in references/, check for citations or evidence trails
  • [ ] Add tests/fixtures/ with sample research data to test distillation logic without external dependencies
  • [ ] Create GitHub Action .github/workflows/test-skills.yml to run accuracy tests on PR changes to examples/
  • [ ] Document expected accuracy metrics in SKILL.md (e.g. '80% of SKILL.md claims must be traceable to reference files')
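Full claim-level accuracy checking needs an LLM in the loop, but a cheap first approximation is verifying that every relative link in a SKILL.md resolves to a real file. This hypothetical helper (not existing repo code) could seed tests/test_skill_accuracy.py:

```python
"""Sketch: link-traceability check for SKILL.md files.

Assumption: 'claims backed by references' is approximated by checking
that relative markdown links point at files that exist; anchored links
(#section) and external URLs are out of scope for this sketch.
"""
import re
from pathlib import Path

# captures the target of [text](target), skipping ")" and "#" in targets
LINK = re.compile(r"\[[^\]]*\]\(([^)#]+)\)")

def broken_links(skill_md: Path) -> list[str]:
    """Relative links in SKILL.md whose targets are missing on disk."""
    broken = []
    for target in LINK.findall(skill_md.read_text()):
        if target.startswith(("http://", "https://")):
            continue  # external links are out of scope here
        if not (skill_md.parent / target).exists():
            broken.append(target)
    return broken

def test_all_examples():
    for skill in Path("examples").glob("*-perspective/SKILL.md"):
        assert broken_links(skill) == [], f"{skill}: dead reference links"
```

Run under pytest, each perspective with a dangling references/ link fails with the file named, which is the evidence trail the checklist above asks for.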

🌿Good first issues

  • Add a 'conversational examples' reference template to examples/[person]-perspective/references/ with 3-5 scripted dialogues showing how the person's mental models apply to common domain-specific problems (e.g., a product manager asking a distilled PM how to prioritize features). This will ground the distillation in concrete patterns rather than abstract reasoning.
  • Create a test/validation harness: write 5-10 'ground truth' Q&A pairs for an existing distilled skill (e.g., examples/feynman-perspective/) where we know the correct answer from his writings, then measure whether the current SKILL.md agent answers correctly. Document the pass rate and identify gaps in the research.
  • Build a reference checklist Markdown template (e.g., RESEARCH_TEMPLATE.md) that lists the 6 required research categories (writings, conversations, expression-dna, external-views, decisions, timeline) with guiding questions ('What are 3 pivotal decisions this person made?' 'How do they typically reject ideas?'). Place it in examples/ as documentation for future skill creators.


📝Recent commits

  • ea4b9ab — docs: use Bloome original copy verbatim (alchaincyf)
  • f27bacd — docs: add Bloome partner showcase module in 效果示例 section (alchaincyf)
  • 25bb2de — add: hero banner (alchaincyf)
  • fedfebf — docs: add hero animation to README (alchaincyf)
  • 26cc17e — add: 达尔文.skill promotion with banner and install command (alchaincyf)
  • 5a1e1e2 — fix: add missing research files to x-mastery-mentor example (alchaincyf)
  • f6eec0f — docs: move Star History chart above examples for visibility (alchaincyf)
  • c32587c — fix: update 小红书 display name from 花生 to 花叔 (alchaincyf)
  • 7ed0ab4 — chore: rename brand from AI进化论-花生 to 花叔 (alchaincyf)
  • 14a0191 — fix SKILL.md: add scripts/ to directory structure diagram (alchaincyf)

🔒Security observations

This repository is primarily a documentation and research project with minimal direct security risks. The main concerns are the absence of a formal dependency manifest file, unreviewed Python scripts in the examples directory, and potential XSS risks if markdown content is rendered in web contexts. The codebase lacks executable code that handles sensitive data, user input validation, or authentication mechanisms. No hardcoded secrets, SQL injection vectors, or exposed credentials were detected. For a documentation/research repository, the security posture is reasonable, but adding dependency tracking and reviewing the utility scripts would improve overall security hygiene.

  • Low · Missing Dependency Manifest — Repository root. No package.json, requirements.txt, or dependency manifest file is present in the repository. This makes it impossible to verify if any dependencies have known security vulnerabilities. The codebase appears to be documentation-heavy with Python scripts but lacks dependency tracking. Fix: Create appropriate dependency manifest files (package.json for Node.js projects, requirements.txt for Python projects, etc.) and maintain them with pinned versions. Use tools like npm audit, pip-audit, or Dependabot to monitor for vulnerabilities.
  • Low · Python Scripts Without Security Analysis — examples/mrbeast-perspective/scripts/. Several Python scripts are present (analyze_titles.py, fetch_youtube_subtitles.sh, retention_curve_checker.py, thumbnail_audit.py) but no security review is available. These scripts may contain unvalidated inputs or unsafe operations. Fix: Review all scripts for input validation, use argparse with type checking, avoid shell execution (subprocess.run), and consider using secure libraries. Add unit tests and security linting (bandit, pylint).
  • Low · Shell Script Without Input Validation — examples/mrbeast-perspective/scripts/fetch_youtube_subtitles.sh. The fetch_youtube_subtitles.sh script may execute external commands or download content without proper validation or error handling. Fix: Add input validation, error handling, and ensure URLs are properly escaped. Consider rewriting in Python with requests library for better security control.
  • Low · Markdown Files May Contain Unvalidated Content — examples/*/references/. Multiple markdown files contain research data and references that could potentially be rendered without sanitization if used in web contexts (e.g., Andrej Karpathy perspective, Elon Musk research files). Fix: If these markdown files are ever rendered in a web application, use a secure markdown parser with XSS protection enabled (e.g., markdown-it with DOMPurify for JavaScript).

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
