langgenius/dify
Production-ready platform for agentic workflow development.
Mixed signals — read the receipts
- ✓ Last commit today
- ✓ 5 active contributors
- ✓ Distributed ownership (top contributor 34%)
- ✓ Licensed (Other)
- ✓ CI configured
- ✓ Tests present
- ⚠ Small team — 5 top contributors
- ⚠ Non-standard license (Other) — review terms
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Embed this verdict
[](https://repopilot.app/r/langgenius/dify)

Paste into your README — the badge live-updates from the latest cached analysis.
Onboarding doc
Onboarding: langgenius/dify
Generated by RepoPilot · 2026-05-04 · Source
Verdict
WAIT — Mixed signals — read the receipts
- Last commit today
- 5 active contributors
- Distributed ownership (top contributor 34%)
- Licensed (Other)
- CI configured
- Tests present
- ⚠ Small team — 5 top contributors
- ⚠ Non-standard license (Other) — review terms
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
TL;DR
Dify is a production-ready LLM application development platform: a visual workflow builder for constructing agentic pipelines, a prompt IDE, RAG (Retrieval-Augmented Generation) pipeline management, and a model provider abstraction layer supporting 100+ LLMs. It solves the problem of operationalizing AI applications by combining a Python/Flask backend with a Next.js frontend into a single deployable platform for building chatbots, agents, and automated workflows without gluing together disparate SDKs. The monorepo is split into a Python backend (likely under api/) and a TypeScript/Next.js frontend (web/), with agent skill definitions under .agents/skills/ covering backend-code-review, component-refactoring, e2e-cucumber-playwright, frontend-code-review, frontend-query-mutation, and frontend-testing. The .claude/skills/ directory mirrors .agents/skills/, suggesting Claude-based AI assistance is wired into the development workflow itself.
Who it's for
AI engineers and product teams at companies who need to prototype, deploy, and manage LLM-powered applications in production — specifically developers who want visual workflow orchestration over raw API calls, and MLOps teams needing observability, versioning, and multi-model management without building infrastructure from scratch.
Maturity & risk
Dify is highly mature: it has millions of Docker Hub pulls (visible in the README badge), active daily commits, a structured monorepo with dedicated skill agents under .agents/, Gherkin-based E2E tests, and a .coveragerc indicating backend test coverage tracking. Verdict: production-ready and actively developed with a large contributor base tracked via Linux Foundation Insights.
The primary risks are complexity and surface area — the codebase is enormous (~58M total language bytes across TypeScript and Python) with many external LLM provider integrations, each a potential point of API breakage. The .devcontainer setup and multi-service Docker architecture mean local development requires significant resource overhead. No single-maintainer risk — the repo is backed by LangGenius as an organization with Linux Foundation health tracking.
Active areas of work
Active investment in developer tooling: the .agents/ skill system for AI-assisted code review and refactoring is being built out with dedicated SKILL.md definitions and reference docs for frontend testing patterns, query/mutation contracts, and component splitting. The frontend-testing skill has template assets (component-test.template.tsx, hook-test.template.ts, utility-test.template.ts) suggesting a push to standardize test coverage.
Get running
```shell
# Clone and start the full stack via Docker Compose (recommended)
git clone https://github.com/langgenius/dify.git && cd dify
cp api/.env.example api/.env
docker compose up -d        # starts all services
```

Daily commands:

```shell
docker compose up -d        # full stack via Docker Compose (recommended)
```

Or for isolated frontend development:

```shell
cd web && npm install
npm run dev                 # Next.js dev server on localhost:3000
```

Or for isolated backend development:

```shell
cd api && pip install -r requirements.txt
flask run --debug           # Flask API on localhost:5001
```
Map of the codebase
- .agents/skills/backend-code-review/references/architecture-rule.md: Defines the enforced architectural constraints for the Python backend — critical reading before any backend contribution.
- .agents/skills/backend-code-review/references/db-schema-rule.md: Documents database schema conventions and rules for SQLAlchemy model changes.
- .agents/skills/frontend-query-mutation/references/contract-patterns.md: Specifies how frontend data fetching (React Query) contracts must be structured — prevents API integration bugs.
- .agents/skills/frontend-testing/assets/component-test.template.tsx: Canonical template for new React component tests — use this as the starting point for any new test file.
- .agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md: Repo-specific Playwright conventions for E2E tests — deviating causes flaky tests in CI.
- .agents/skills/component-refactoring/references/component-splitting.md: Documents when and how to split React components — enforced during code review via the AI agent skill.
- .devcontainer/Dockerfile: Defines the exact dev environment; mismatches between this and local setup are the #1 source of onboarding friction.
- .coveragerc: Controls which Python files are included/excluded in coverage reports — check this before adding new backend modules.
How to make changes
- New LLM provider: add under api/core/model_runtime/model_providers/.
- New workflow node type: add under api/core/workflow/nodes/ with corresponding UI under web/app/components/workflow/nodes/.
- Frontend component changes: web/app/components/.
- New E2E test: follow .agents/skills/e2e-cucumber-playwright/references/ and add feature files.
- Backend DB schema changes: api/models/ with SQLAlchemy, following .agents/skills/backend-code-review/references/db-schema-rule.md.
Traps & gotchas
- Multiple external services must be running simultaneously (PostgreSQL, Redis, at minimum one vector DB, and at least one LLM API key configured in api/.env) — the app will silently fail or partially load without them.
- The .agents/ and .claude/skills/ directories are AI-assisted development tooling that expect specific agent runtimes (an OpenAI API key or Claude) — they are not part of the app runtime but are used in the development workflow.
- The Next.js frontend expects specific environment variables (NEXT_PUBLIC_API_PREFIX etc.) that must match the backend URL; misconfiguration causes all API calls to fail with no obvious error.
- Node and Python version constraints are strict — check .devcontainer/Dockerfile for the exact pinned versions.
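To illustrate the frontend/backend URL coupling mentioned above: a minimal web env file might look like the sketch below. The variable name comes from the text above, but the file path and value are assumptions, so confirm them against the env examples shipped in the repo.

```shell
# Hypothetical web/.env.local — illustrative only; confirm the variable names
# and the backend port against the repo's own env examples.
NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api   # must match the running Flask backend
```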
Concepts to learn
- Retrieval-Augmented Generation (RAG) — Dify's knowledge base feature is built on RAG — understanding how documents are chunked, embedded, and retrieved is essential for debugging any knowledge pipeline behavior.
- Agentic Workflow DAG — Dify's workflow engine executes nodes as a directed acyclic graph — knowing how DAG execution ordering and branching works explains why node outputs flow the way they do.
- Server-Sent Events (SSE) — Dify streams LLM token output to the frontend using SSE rather than WebSockets — understanding SSE is required to debug streaming response issues or add new streaming endpoints.
- Repository Pattern (SQLAlchemy) — The backend enforces a strict repository pattern for database access (documented in .agents/skills/backend-code-review/references/repositories-rule.md) — violating it will fail code review.
- Vector Embedding and ANN Search — All RAG and semantic search in Dify relies on approximate nearest-neighbor search over vector embeddings — necessary to understand why different vector DB backends (Qdrant, Weaviate, Pinecone) produce different retrieval results.
- Celery Distributed Task Queue — Long-running operations (document indexing, async workflow steps) are offloaded to Celery workers backed by Redis — misunderstanding this causes confusion about why some operations appear to complete instantly while others are async.
- Cucumber/Gherkin BDD — E2E tests in Dify use Gherkin feature files executed via Playwright (see .agents/skills/e2e-cucumber-playwright/) — contributors adding new features must write Gherkin scenarios, not raw Playwright scripts.
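To make the Cucumber/Gherkin bullet concrete, a feature file might take the following shape. This is an illustrative sketch, not a file from the repo; real scenarios and step vocabulary must follow cucumber-best-practices.md.

```gherkin
Feature: Workflow creation
  Scenario: Save a minimal two-node workflow
    Given I am logged into the Dify console
    When I create a workflow with a Start node connected to an LLM node
    And I save the workflow as "demo"
    Then "demo" appears in my workflow list
```

Contributors write scenarios at this level of abstraction; the Playwright step definitions behind each Given/When/Then line do the actual browser automation.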
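The RAG flow from the concepts above can be sketched end to end. This is a toy illustration, not Dify's pipeline: the chunker is character-based and the "embedding" is a word-count vector standing in for a real embedding model.

```python
# Toy end-to-end sketch of RAG: chunk documents, embed the chunks, then
# retrieve by similarity to the query embedding.
import math
import re
from collections import Counter

def chunk(text, size=40):
    """Fixed-size character chunks; real chunkers split on tokens or sentences."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text):
    """Stand-in embedding: a word-count vector instead of a learned model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=1):
    """Rank stored chunks by similarity to the query embedding."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

docs = chunk("Dify supports workflows. Knowledge bases use vector retrieval. Agents call tools.")
print(retrieve("vector retrieval knowledge", docs))  # the "vector retrieval" chunk ranks first
```

Debugging any knowledge-pipeline behavior comes down to inspecting these three stages: how a document was chunked, what each chunk embedded to, and how the ranking scored it.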
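The DAG execution concept can be shown with a minimal engine: run each node only after its upstream dependencies, feeding it their outputs. This is a hedged sketch of the idea; the real engine (under api/core/workflow/) adds branching, streaming, and error handling on top.

```python
# Minimal DAG-ordered node execution using the stdlib topological sorter.
from graphlib import TopologicalSorter

def run_workflow(nodes, edges, inputs):
    """nodes: {name: fn(upstream_outputs_dict)}; edges: {name: set_of_dependencies}."""
    outputs = dict(inputs)
    for name in TopologicalSorter(edges).static_order():
        if name in nodes:
            upstream = {dep: outputs[dep] for dep in edges.get(name, ())}
            outputs[name] = nodes[name](upstream)
    return outputs

# Two-node toy workflow: an "llm" node feeding a "format" node.
edges = {"llm": {"start"}, "format": {"llm"}}
nodes = {"llm": lambda up: up["start"].upper(),
         "format": lambda up: "[" + up["llm"] + "]"}
print(run_workflow(nodes, edges, {"start": "hi"})["format"])  # [HI]
```

The topological order is exactly why node outputs "flow the way they do": a node can only ever see outputs of nodes that sorted before it.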
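The SSE concept is mostly about the wire format: each event is a "data: ..." line followed by a blank line. The framing below is the standard format; the Flask endpoint shape in the comment is an assumption about how such a generator would typically be wired up.

```python
# Sketch of Server-Sent Events framing for token streaming.
from typing import Iterable, Iterator

def sse_format(chunks: Iterable[str]) -> Iterator[str]:
    """Wrap each token/chunk in an SSE frame: 'data: ...' plus a blank line."""
    for chunk in chunks:
        yield "data: " + chunk + "\n\n"
    yield "data: [DONE]\n\n"

# In a Flask view, a generator like this would back a streaming response, e.g.:
#   return Response(sse_format(token_stream), mimetype="text/event-stream")
print("".join(sse_format(["Hel", "lo"])))
```

When a streaming response looks broken in the browser, the first thing to check is whether every frame ends with the blank-line terminator, since the client buffers until it sees one.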
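The repository pattern mentioned above can be sketched abstractly: callers depend on an interface, never on raw session queries. Class and method names here are illustrative, not Dify's actual API; see repositories-rule.md for the enforced rules.

```python
# Hedged sketch of the repository pattern: an interface plus a test double.
from __future__ import annotations

from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Workflow:
    id: str
    name: str

class WorkflowRepository(ABC):
    """All database access for workflows funnels through this interface."""

    @abstractmethod
    def get(self, workflow_id: str) -> Workflow | None: ...

    @abstractmethod
    def save(self, workflow: Workflow) -> None: ...

class InMemoryWorkflowRepository(WorkflowRepository):
    """Test double; a production implementation would wrap a SQLAlchemy session."""

    def __init__(self) -> None:
        self._rows: dict[str, Workflow] = {}

    def get(self, workflow_id: str) -> Workflow | None:
        return self._rows.get(workflow_id)

    def save(self, workflow: Workflow) -> None:
        self._rows[workflow.id] = workflow
```

The payoff of the pattern is visible in the test double: business logic can be exercised without a database, and the SQLAlchemy details stay behind one seam.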
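The exact-vs-approximate distinction behind ANN search can be shown with a crude LSH-style bucketing, illustrative only: real vector DBs (Qdrant, Weaviate, Pinecone) use HNSW or IVF indexes, and their differing index structures are one reason retrieval results differ across backends.

```python
# Exact nearest-neighbor scan vs. a toy LSH-style approximation.
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def exact_nn(query, vectors):
    """Scan everything: always correct, O(n) per query."""
    return max(range(len(vectors)), key=lambda i: dot(query, vectors[i]))

def lsh_buckets(vectors, planes):
    """Hash each vector by the sign pattern of random hyperplane projections."""
    buckets = {}
    for i, v in enumerate(vectors):
        key = tuple(dot(v, p) >= 0 for p in planes)
        buckets.setdefault(key, []).append(i)
    return buckets

def approx_nn(query, vectors, buckets, planes):
    """Scan only the query's bucket: fast, but can miss the true neighbor."""
    key = tuple(dot(query, p) >= 0 for p in planes)
    candidates = buckets.get(key, range(len(vectors)))
    return max(candidates, key=lambda i: dot(query, vectors[i]))

random.seed(0)
planes = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
vectors = [(1.0, 0.0), (0.0, 1.0), (0.9, 0.1)]
buckets = lsh_buckets(vectors, planes)
print(exact_nn((1.0, 0.0), vectors))  # 0
```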
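The Celery pattern above can be simulated with the stdlib (this is NOT Dify's actual Celery setup): the producer enqueues work and returns immediately, while a background worker drains the queue, the way Celery workers drain Redis.

```python
# Stdlib simulation of the task-queue pattern: enqueue-and-return vs. sync work.
import queue
import threading

jobs = queue.Queue()
results = {}

def worker():
    while True:
        doc_id, payload = jobs.get()
        # Stand-in for a slow job such as document indexing.
        results[doc_id] = "indexed:" + payload["text"][:10]
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def index_document_async(doc_id, text):
    """Enqueue and return immediately, like calling task.delay() in Celery."""
    jobs.put((doc_id, {"text": text}))

index_document_async("doc-1", "hello world")
jobs.join()  # real Celery code would poll an AsyncResult instead of joining
print(results["doc-1"])  # indexed:hello worl
```

This is the source of the "why did that return instantly?" confusion: the enqueue call succeeds long before the work is done, and the result only materializes once a worker picks the job up.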
Related repos
- FlowiseAI/Flowise — Direct alternative: also provides a visual LLM workflow builder with a Node.js backend, solving the same no-code AI pipeline problem.
- langchain-ai/langchain — Core ecosystem dependency: Dify's backend agent and chain execution layers build on LangChain primitives.
- lobehub/lobe-chat — Companion/alternative in the same space: a production chat UI with multi-model support that some teams use alongside or instead of Dify's frontend.
- run-llama/llama_index — Ecosystem companion: LlamaIndex powers many RAG pipeline patterns that Dify's knowledge base feature implements; understanding it helps debug retrieval behavior.
- microsoft/promptflow — Close alternative from Microsoft: also targets agentic workflow orchestration for LLMs with a DAG-based execution model.
PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add frontend unit tests for query/mutation hooks using the existing frontend-testing skill templates
The repo has detailed testing infrastructure under .agents/skills/frontend-testing/assets/ with templates for component, hook, and utility tests, plus references covering async testing, mocking, and domain components. However, the presence of .claude/skills/frontend-query-mutation/ and .agents/skills/frontend-query-mutation/references/ suggests there are query/mutation hooks that lack test coverage. Adding tests using the provided hook-test.template.ts and component-test.template.tsx would directly leverage the existing skill scaffolding and improve the reliability of API-interacting hooks.
- [ ] Review .agents/skills/frontend-query-mutation/references/runtime-rules.md and contract-patterns.md to identify all custom query/mutation hooks in the frontend codebase
- [ ] Use .agents/skills/frontend-testing/assets/hook-test.template.ts as the base template for each hook test file
- [ ] Follow .agents/skills/frontend-testing/references/async-testing.md for handling async query/mutation states (loading, error, success)
- [ ] Follow .agents/skills/frontend-testing/references/mocking.md to mock API calls and external dependencies consistently
- [ ] Follow .agents/skills/frontend-testing/references/checklist.md to ensure full coverage per hook (idle, loading, success, error states)
- [ ] Verify tests pass and update .coveragerc if needed to include the new test paths
Add E2E Cucumber+Playwright tests for agentic workflow creation and execution flows
The repo has a fully configured E2E skill under .agents/skills/e2e-cucumber-playwright/ with an OpenAI agent config at .agents/skills/e2e-cucumber-playwright/agents/openai.yaml and best-practice references for both Cucumber and Playwright. Since Dify bills itself as a "production-ready platform for agentic workflow development", the core workflow creation and execution user journeys are its highest-risk paths, yet they likely lack structured BDD E2E coverage. Adding Cucumber feature files and Playwright step definitions for these flows would directly validate the platform's primary value proposition.
- [ ] Review .agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md to understand the expected feature file structure and tagging conventions for this repo
- [ ] Review .agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md for page object patterns and selector strategies used in this project
- [ ] Write a workflow_creation.feature Cucumber file covering: creating a new workflow, adding nodes, connecting nodes, saving the workflow
- [ ] Write a workflow_execution.feature Cucumber file covering: triggering a workflow, inspecting run logs, handling errors in execution
- [ ] Implement Playwright step definitions for each scenario step, following the page object model described in the best-practices reference
- [ ] Register the new feature files with the test runner config referenced in .agents/skills/e2e-cucumber-playwright/agents/openai.yaml
Refactor large frontend domain components using the component-splitting and hook-extraction skill references
The repo contains .agents/skills/component-refactoring/references/component-splitting.md, hook-extraction.md, and complexity-patterns.md, which are skill references specifically authored for this codebase — indicating maintainers have already identified that some components are too large and complex. A new contributor can apply these documented patterns to split high-complexity workflow editor or agent configuration components into smaller, focused sub-components and extracted custom hooks, making the codebase easier to test and maintain.
- [ ] Read .agents/skills/component-refactoring/references/complexity-patterns.md to understand the complexity thresholds and anti-patterns identified for this repo
- [ ] Use the patterns from complexity-patterns.md to identify the top 2-3 largest/most complex components in the workflow editor or agent configuration UI directories
- [ ] Apply
Good first issues
- Add missing unit tests for frontend utility functions following the template at .agents/skills/frontend-testing/assets/utility-test.template.ts — the checklist at .agents/skills/frontend-testing/references/checklist.md identifies specific gaps.
- Write Gherkin feature files for workflow node interaction scenarios using the conventions in .agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md — E2E coverage for new node types is typically sparse.
- Improve inline documentation for the backend architecture rules in .agents/skills/backend-code-review/references/repositories-rule.md by adding concrete before/after code examples for the repository pattern.
Top contributors
- @lyzno1 — 19 commits
- @hjlarry — 15 commits
- @dependabot[bot] — 10 commits
- @asukaminato0721 — 8 commits
- @CodingOnStar — 4 commits
Recent commits
- b43ebf5 — fix: preserve single-run input variable types (#35710) (Jingyi-Dify)
- 853b859 — chore(deps-dev): bump the dev group in /api with 6 updates (#35782) (dependabot[bot])
- 8f3e42e — refactor(web): migrate workflow node actions menu (#35785) (lyzno1)
- 1359c03 — refactor(web): migrate legacy tooltip to infotip (#35774) (lyzno1)
- 4b7dc17 — fix: skip empty documents before vector embedding (#35763) (princepal9120)
- 81090ef — refactor(web): convert ValidatedStatus enum to as-const in key-valida… (#35749) (guangyang1206)
- d92c336 — fix(web): secure external form help links (#35751) (aliworksx08)
- cd9daef — chore(deps): bump anthropics/claude-code-action from 1.0.110 to 1.0.111 in the github-actions-dependencies group (#35781) (dependabot[bot])
- 2876839 — chore(deps): bump the google group in /api with 2 updates (#35779) (dependabot[bot])
- 7ba408e — fix: IDOR on console GET /account/avatar (#35771) (NeatGuyCoding)
Security observations
- High · Potential Hardcoded Secrets in Devcontainer Configuration — .devcontainer/post_create_command.sh, .devcontainer/post_start_command.sh. The .devcontainer directory contains setup scripts that may embed or reference credentials, API keys, or environment variables directly in the container initialization scripts. Devcontainer configurations are often checked into source control and can inadvertently expose secrets. Fix: ensure no hardcoded secrets exist in devcontainer scripts; use environment variable injection from a secrets manager or .env files excluded from version control; audit these files and rotate any exposed credentials.
- High · SQL Injection Risk via Raw Query Patterns — .agents/skills/backend-code-review/references/sqlalchemy-rule.md, db-schema-rule.md. The presence of SQLAlchemy-specific review rules suggests the codebase uses the SQLAlchemy ORM; production platforms often include raw SQL fallbacks or text() clauses that, if improperly parameterized, can lead to SQL injection. An agentic workflow platform likely processes dynamic user inputs that could be interpolated into queries. Fix: enforce parameterized queries and SQLAlchemy ORM models exclusively; never interpolate user input directly into SQL strings; use SQLAlchemy's text() with bound parameters only; add automated SAST scanning for raw SQL usage to the CI/CD pipeline.
- High · XSS Risk via dangerouslySetInnerHTML in Frontend Components — .agents/skills/frontend-code-review/references/business-logic.md, frontend components. The frontend code review references and the agentic nature of the platform suggest user-generated content is rendered in the UI. Platforms that render LLM outputs, markdown, or HTML are particularly susceptible to XSS if dangerouslySetInnerHTML is used without sanitization of AI-generated or user-supplied content. Fix: audit all uses of dangerouslySetInnerHTML in React components; apply DOMPurify or an equivalent sanitization library before rendering any user-supplied or LLM-generated HTML; implement a strict Content-Security-Policy header.
- High · Missing Security Headers Configuration — infrastructure/web server configuration. No explicit security header configuration (e.g., nginx.conf or security middleware config) is visible in the provided file structure. Production platforms must set headers such as Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Strict-Transport-Security, and Referrer-Policy to prevent clickjacking, MIME sniffing, and other client-side attacks. Fix: configure Content-Security-Policy, X-Frame-Options: DENY, X-Content-Type-Options: nosniff, Strict-Transport-Security (HSTS), Referrer-Policy: strict-origin-when-cross-origin, and Permissions-Policy at the reverse proxy or application layer; validate with a tool like securityheaders.com.
- Medium · Devcontainer Dockerfile May Contain Insecure Base Image or Configurations — .devcontainer/Dockerfile. The dev container may use a base image that is not regularly updated and could contain known CVEs; development containers sometimes disable security features (running as root, disabling certificate verification) for convenience, and these patterns can leak into production builds. Fix: use minimal, regularly updated base images (distroless or official slim variants); do not run the container as root; scan images regularly with Trivy, Snyk, or Docker Scout; keep dev and prod Dockerfiles clearly separated.
- Medium · CI/CD Workflow Files May Expose Sensitive Environment Variables — .github/workflows/. The workflows directory contains multiple files (api-tests.yml, build-push.yml, db-migration-test.yml, autofix.yml). GitHub Actions workflows that use secrets improperly (printing env variables, using the pull_request_target trigger with untrusted code, or overly permissive GITHUB_TOKEN scopes) can lead to secret exfiltration. Fix: avoid pull_request_target with untrusted checkouts, scope GITHUB_TOKEN permissions minimally, and never echo secrets in logs.
LLM-derived; treat as a starting point, not a security audit.
Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.