apache/dolphinscheduler
Apache DolphinScheduler is a modern data orchestration platform, built for agile creation of high-performance, low-code workflows.
Healthy across the board
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license and CI — clean foundation to fork and modify.
- Documented and popular — useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 1d ago
- ✓16 active contributors
- ✓Distributed ownership (top contributor 44% of recent commits)
- ✓Apache-2.0 licensed
- ✓CI configured
- ⚠No test directory detected
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
Paste at the top of your README.md — it renders inline like a shields.io badge and links to https://repopilot.app/r/apache/dolphinscheduler.
Social card (1200×630): this card auto-renders when someone shares https://repopilot.app/r/apache/dolphinscheduler on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: apache/dolphinscheduler
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/apache/dolphinscheduler shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 1d ago
- 16 active contributors
- Distributed ownership (top contributor 44% of recent commits)
- Apache-2.0 licensed
- CI configured
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live apache/dolphinscheduler
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/apache/dolphinscheduler.
What it runs against: a local clone of apache/dolphinscheduler — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in apache/dolphinscheduler | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch dev exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 31 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of apache/dolphinscheduler. If you don't
# have one yet, run these first:
#
# git clone https://github.com/apache/dolphinscheduler.git
# cd dolphinscheduler
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of apache/dolphinscheduler and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "apache/dolphinscheduler(\.git)?\b" \
  && ok "origin remote is apache/dolphinscheduler" \
  || miss "origin remote is not apache/dolphinscheduler (artifact may be from a fork)"
# 2. License matches what RepoPilot saw. The LICENSE file contains the full Apache
#    text rather than the SPDX id "Apache-2.0", so match the standard header instead.
( (grep -qiE "Apache License" LICENSE 2>/dev/null && grep -qiE "Version 2\.0" LICENSE 2>/dev/null) \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null ) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify dev >/dev/null 2>&1 \
  && ok "default branch dev exists" \
  || miss "default branch dev no longer exists"
# 4. Critical files exist
test -f ".github/workflows/backend.yml" \\
&& ok ".github/workflows/backend.yml" \\
|| miss "missing critical file: .github/workflows/backend.yml"
test -f ".github/workflows/frontend.yml" \\
&& ok ".github/workflows/frontend.yml" \\
|| miss "missing critical file: .github/workflows/frontend.yml"
test -f "deploy/docker/docker-compose.yml" \\
&& ok "deploy/docker/docker-compose.yml" \\
|| miss "missing critical file: deploy/docker/docker-compose.yml"
test -f "deploy/kubernetes/dolphinscheduler/Chart.yaml" \\
&& ok "deploy/kubernetes/dolphinscheduler/Chart.yaml" \\
|| miss "missing critical file: deploy/kubernetes/dolphinscheduler/Chart.yaml"
test -f ".licenserc.yaml" \\
&& ok ".licenserc.yaml" \\
|| miss "missing critical file: .licenserc.yaml"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 31 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~1d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/apache/dolphinscheduler"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
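For example, a minimal gate, assuming the block above is saved as verify.sh at the clone root:

```bash
# Gate any automated edit on a fresh verification pass.
bash verify.sh \
  || { echo "stale — regenerate at https://repopilot.app/r/apache/dolphinscheduler"; exit 1; }
echo "artifact verified — proceeding"
```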
⚡TL;DR
Apache DolphinScheduler is a distributed workflow orchestration platform written primarily in Java (10.2M LOC) with a TypeScript/Vue frontend, designed to manage complex task dependencies and data pipelines at scale. It provides a low-code web UI, a Python SDK, and a REST API to define, execute, and monitor workflows across multiple machines, supporting tens of millions of tasks per day with native Kubernetes and multi-cloud deployment. Monorepo structure: the core Java engine in dolphinscheduler-server handles task scheduling and execution; dolphinscheduler-ui provides the web UI; a separate task-plugin architecture supports extensibility; and cluster deployment templates live under .github/workflows/cluster-test, with Docker Compose configurations for MySQL/PostgreSQL plus ZooKeeper/registry backends.
👥Who it's for
Data engineers and platform teams who need to orchestrate complex ETL/data pipeline workflows across distributed infrastructure. Contributors are maintainers of an Apache project working on scheduling engine, task execution, UI, and workflow state management.
🌱Maturity & risk
Production-ready and actively maintained: it is an Apache Software Foundation project with extensive GitHub Actions CI/CD (backend.yml, e2e.yml, the cluster-test workflows), test coverage tracked in SonarCloud, and active development reflected in the multiple deployment configuration templates under .github/workflows. The codebase is mature, with clear governance (CODEOWNERS, issue templates) and a steady stream of recent commits.
Large Java monorepo (10M+ LOC) with a complex distributed architecture requiring careful coordination across multiple components (master/worker nodes, registry systems). Risk factors include dependency-management complexity across the Java/TypeScript stack, multi-database support (MySQL/PostgreSQL) enlarging the testing surface, and the operational complexity of cluster deployments, visible in the separate test configurations for ZooKeeper vs. MySQL registry modes.
Active areas of work
Active development on cluster testing infrastructure (multiple Docker Compose configurations for different database/registry combinations in cluster-test/), API testing workflows (.github/workflows/api-test.yml), and frontend/backend CI pipelines. The existence of K8s (e2e-k8s.yml) and standard e2e.yml workflows suggests ongoing cloud-native and end-to-end testing expansion.
🚀Get running
Clone and run: git clone https://github.com/apache/dolphinscheduler.git && cd dolphinscheduler. Review the Standalone, Docker, or Kubernetes quickstart instructions in the README. For backend: mvn clean install (Java). For frontend (TypeScript): cd dolphinscheduler-ui && npm install && npm start. For detailed setup, see deployment guides in the official documentation.
Daily commands (consolidated in the sketch below):
- Standalone (fastest for local dev): download a release or build from source, then ./bin/start.sh
- Docker: docker-compose up (see the cluster-test templates)
- Cluster: use the deploy.sh scripts under .github/workflows/cluster-test
- Frontend dev: cd dolphinscheduler-ui && npm install && npm run serve
- Backend dev: mvn clean package -DskipTests, then start the specific modules (master, worker, API)
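A runnable consolidation of the commands above, hedged where this artifact is internally inconsistent (it cites both npm start and npm run serve for the frontend):

```bash
# Quickstart sketch — paths and script names are taken from this artifact;
# verify against the README and dolphinscheduler-ui/package.json before relying on them.
git clone https://github.com/apache/dolphinscheduler.git
cd dolphinscheduler

# Backend: build all modules, skipping tests for a faster first pass
mvn clean package -DskipTests

# Frontend dev server (the exact script name may differ across versions)
cd dolphinscheduler-ui && npm install && npm run serve

# Standalone mode runs from a release tarball rather than this clone:
# ./bin/start.sh
```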
🗺️Map of the codebase
- .github/workflows/backend.yml — CI/CD pipeline for backend builds; defines the test and deployment automation that all contributions must pass
- .github/workflows/frontend.yml — CI/CD pipeline for frontend builds; enforces code quality and build standards for UI contributions
- deploy/docker/docker-compose.yml — primary local development environment setup; required reading for anyone running DolphinScheduler locally
- deploy/kubernetes/dolphinscheduler/Chart.yaml — Helm chart definition for Kubernetes deployment; essential for understanding the production deployment architecture
- .licenserc.yaml — license compliance configuration; enforces the Apache 2.0 license headers that all contributions must carry
- CONTRIBUTING.md — contribution guidelines and development workflow; mandatory first read for all new contributors
- .pre-commit-config.yaml — pre-commit hooks configuration; ensures code quality and consistency before commits
🛠️How to make changes
Add a new Workflow Task Type
- Define the task plugin interface in the plugin SDK, typically in a new module under config/plugins_config (see the check after this list)
- Implement the task executor class extending the base plugin interface (config/plugins_config)
- Register the new plugin in the plugin configuration system (config/plugins_config)
- Add an integration test in the backend workflow, following the pattern in .github/workflows/backend.yml
- Create a Docker test image to validate the plugin in a cluster environment (.github/workflows/cluster-test/mysql_with_mysql_registry/Dockerfile)
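Since config/plugins_config is one of the AI-derived claims the agent protocol says to verify, a quick check before following the steps above might look like:

```bash
# Hedged: confirm the plugin registration point assumed above actually exists;
# if it doesn't, search the tree for the real one before proceeding.
test -e config/plugins_config && echo "found config/plugins_config" \
  || grep -rl "plugins_config" --include="*.java" --include="*.yml" . | head
```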
Deploy DolphinScheduler to a new environment
- For Docker: customize docker-compose.yml with your environment variables and services (deploy/docker/docker-compose.yml) — see the sketch after this list
- For Kubernetes: update the Helm values in Chart.yaml and create environment-specific template overrides (deploy/kubernetes/dolphinscheduler/Chart.yaml)
- Add deployment configuration templates following the pattern in the templates/ directory (deploy/kubernetes/dolphinscheduler/templates/configmap-dolphinscheduler-common.yaml)
- Create a CI/CD workflow in .github/workflows/ to test the new deployment mode (.github/workflows/backend.yml)
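A minimal command sketch for the two entry points above; my-values.yaml is a hypothetical environment-specific override file, and the release name is arbitrary:

```bash
# Docker: bring up the stack from the checked-in compose file
docker compose -f deploy/docker/docker-compose.yml up -d

# Kubernetes: install the checked-in Helm chart with your overrides
helm install dolphinscheduler deploy/kubernetes/dolphinscheduler -f my-values.yaml
```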
Fix a failing test or add a new test case
- Identify the test scope (unit, integration, e2e) and locate the corresponding job in backend.yml (.github/workflows/backend.yml)
- For integration/cluster tests: add a test script to the appropriate cluster-test environment (.github/workflows/cluster-test/mysql_with_mysql_registry/running_test.sh)
- Ensure code changes pass the pre-commit hooks defined in .pre-commit-config.yaml — see the sketch after this list
- Run locally using docker-compose.yml to validate before pushing (deploy/docker/docker-compose.yml)
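A sketch of that local loop, assuming pre-commit is installed via pip:

```bash
# Run the hooks defined in .pre-commit-config.yaml against the whole tree
pip install pre-commit
pre-commit run --all-files

# Spin up a local stack to validate the change before pushing
docker compose -f deploy/docker/docker-compose.yml up -d
```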
Contribute code changes with proper licensing and style
- Review CONTRIBUTING.md for development setup and code style requirements (CONTRIBUTING.md)
- Run the pre-commit hooks to ensure license headers and formatting (.pre-commit-config.yaml)
- Verify the code meets the quality gates (license, linting, tests) checked via .licenserc.yaml — see the sketch after this list
- Submit a PR using the template, which links to CONTRIBUTING.md (.github/PULL_REQUEST_TEMPLATE.md)
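.licenserc.yaml is the configuration format of Apache SkyWalking Eyes (license-eye); a hedged sketch of running the header check locally — the install path follows that project's docs, so verify it before use:

```bash
# license-eye reads .licenserc.yaml from the repo root
go install github.com/apache/skywalking-eyes/cmd/license-eye@latest
license-eye header check
```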
🔧Why these technologies
- Apache DolphinScheduler (Workflow Orchestration Platform) — Core platform for DAG-based workflow execution with distributed task scheduling, dependency resolution, and multi-tenant support
- Docker & Docker Compose — Provides containerized local development environment and simplifies deployment across different infrastructure without dependency hell
- Kubernetes & Helm — Enables cloud-native, scalable production deployments with declarative infrastructure-as-code and multi-region support
- GitHub Actions — Integrated CI/CD platform for automated testing, building, and validating contributions at scale
- Maven — Standard Java build tool providing dependency management, plugin ecosystem, and reproducible builds
⚖️Trade-offs already made
- Multi-deployment mode support (Standalone, Cluster, Docker, Kubernetes)
  - Why: Maximizes accessibility for users with different infrastructure constraints and operational maturity levels
  - Consequence: Increased complexity in configuration management, deployment testing, and documentation maintenance across 4+ deployment paths
- Plugin-based task type system (config/plugins_config)
  - Why: Allows extensibility without modifying the core platform, supporting diverse integration needs across data platforms
  - Consequence: Requires contributors to understand the plugin SDK and deployment mechanisms; higher barrier to entry for task type contributions
- Comprehensive cluster testing (mysql_with_mysql_registry, postgresql_with_zookeeper_registry, etc.)
  - Why: Validates distributed behavior, failover scenarios, and multi-database compatibility before release
  - Consequence: Slow CI/CD feedback loops; cluster tests add 15-30 minutes per workflow run
- Apache license enforcement via .licenserc.yaml
  - Why: Ensures legal compliance and protects the project under Apache 2.0 foundation requirements
  - Consequence: All contributors must include license headers; pre-commit hooks slow down the local workflow
🚫Non-goals (don't propose these)
- Real-time streaming job orchestration (designed for batch/scheduled workflows, not low-latency stream processing)
- Multi-language native support (primarily Java-based; SDKs for Python/other languages are secondary)
- Embedded database support (requires external MySQL/PostgreSQL for production)
🪤Traps & gotchas
1. Cluster deployments require an external registry (ZooKeeper or the embedded MySQL registry) for service discovery — standalone mode bypasses this, but cluster mode will silently fail without it (see the preflight sketch after this list).
2. Database schema migrations are version-locked; mismatched database and application versions cause startup failures.
3. Worker nodes must have network access to the master and the database; firewall/security-group misconfiguration is a common deployment gotcha.
4. Task execution isolation varies by task type; some tasks run in-process on the worker while others spawn subprocesses — this affects resource limits and failure isolation.
5. The Python SDK (referenced in the README) lives in a separate repo (dolphinscheduler-sdk-python) — local Python examples won't work without it installed separately.
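A preflight sketch for gotchas 1-3; zookeeper-host and mysql-host are placeholders for your deployment, with default ports shown:

```bash
# From a worker host: confirm registry and database reachability before
# chasing silent cluster-mode failures (gotchas 1 and 3).
nc -zv zookeeper-host 2181   # registry (ZooKeeper default port)
nc -zv mysql-host 3306       # metadata database (MySQL default port)
# Gotcha 2: also confirm the deployed application version matches the schema version.
```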
🏗️Architecture
💡Concepts to learn
- Decentralized Master-Worker Architecture — DolphinScheduler's core strength for horizontal scaling—understanding how multiple masters achieve consensus via ZooKeeper or registry, how workers register and heartbeat, and how task assignments distribute is essential to grasp why it handles millions of tasks daily without bottlenecks
- Workflow DAG Execution & Backfill — DolphinScheduler natively supports backfilling (re-running historical date ranges in bulk) which requires understanding how task dependencies form DAGs, how execution order respects edges, and how state machines track progress—this is a first-class feature unlike many competitors
- Task Plugin Architecture — The codebase extensibility model relies on plugins for heterogeneous task types (Spark, Shell, Python, K8s, etc.)—contributors must understand how tasks are discovered, lifecycle methods (init, execute, cancel), and resource isolation to add new capabilities
- Service Registry & Health Checks (ZooKeeper) — DolphinScheduler uses ZooKeeper (or embedded registry) for master/worker discovery and failover—understanding ephemeral nodes, watches, and heartbeat mechanisms is critical for debugging cluster formation and node disappearance issues
- Distributed State Machine & Event-Driven Execution — Workflows transition through states (SUBMITTED → RUNNING → SUCCESS/FAILED) via events; understanding how state transitions are persisted in the database, how idempotence is maintained, and how pause/resume work at any point is key to understanding failure recovery
- Multi-Tenancy & Permission Control — DolphinScheduler isolates workflows, data sources, and resources per tenant via database-level partitioning and RBAC; understanding project isolation, data source credentials per tenant, and execution quotas prevents privilege escalation bugs and data leakage
- Database-Backed State Persistence — Unlike in-memory schedulers, DolphinScheduler persists all workflow/task state to MySQL/PostgreSQL, enabling recovery after crashes—understanding transaction isolation, locking strategies for concurrent updates, and schema evolution across versions is crucial for operational reliability (see the sketch after this list)
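A hedged peek at the persisted state behind the last concept — the table and column names are assumptions based on DolphinScheduler's published schema, so verify them against the SQL files in dolphinscheduler-dao first:

```bash
# Workflow instances and their states live in the metadata database, not in memory;
# a crash-recovered master rebuilds its view from rows like these.
mysql -u <user> -p dolphinscheduler \
  -e "SELECT id, name, state FROM t_ds_process_instance ORDER BY id DESC LIMIT 5;"
```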
🔗Related repos
- apache/airflow — direct competitor in workflow orchestration; Airflow is DAG-based and Python-native, while DolphinScheduler is distributed, Java-native, and web-UI-first — understanding both reveals the architectural trade-offs
- prefecthq/prefect — modern workflow orchestration platform with an emphasis on cloud-native deployment and a Pythonic API; a different design philosophy on execution model vs. DolphinScheduler's decentralized master-worker
- apache/dolphinscheduler-sdk-python — official Python SDK for DolphinScheduler, allowing programmatic workflow definition — the essential companion repo for developers defining workflows in Python
- uber/cadence — distributed workflow engine with a fault-tolerance focus; complements understanding of how DolphinScheduler approaches task retry, state recovery, and distributed consensus
- dbt-labs/dbt-core — data transformation framework that integrates with orchestrators like DolphinScheduler via task plugins — dbt tasks are common in real DolphinScheduler deployments
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive CI workflow for schema validation across all supported databases
The repo has schema-check directories for MySQL and PostgreSQL in .github/workflows/schema-check/, but no automated GitHub Actions workflow file orchestrates these checks: the docker-compose files and test scripts exist but are never triggered by CI. Automating them would catch schema migration issues early and ensure consistency across all database backends. (A local dry-run sketch follows the checklist.)
- [ ] Create .github/workflows/schema-check.yml that triggers on PR changes to database schema files
- [ ] Reference existing .github/workflows/schema-check/mysql/docker-compose-base.yaml and running-test.sh
- [ ] Reference existing .github/workflows/schema-check/postgresql/docker-compose-base.yaml and running-test.sh
- [ ] Add matrix strategy to test both MySQL and PostgreSQL in parallel
- [ ] Ensure workflow fails if schema validation scripts return non-zero exit codes
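A local dry run of the matrix the proposed workflow would encode, using the paths from the checklist above:

```bash
# Exercise both schema-check environments the way the CI job would.
for db in mysql postgresql; do
  dir=".github/workflows/schema-check/$db"
  docker compose -f "$dir/docker-compose-base.yaml" up -d
  bash "$dir/running-test.sh" || { echo "FAIL: $db schema check"; exit 1; }
  docker compose -f "$dir/docker-compose-base.yaml" down -v
done
```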
Create issue template validation GitHub Action
The repo has 6 issue templates (.github/ISSUE_TEMPLATE/*.yml) but no CI validation ensuring they are properly formatted or that issues follow the expected structure. A sanity-check action would catch malformed templates and keep issue metadata consistent across bug reports, feature requests, DSIP requests, etc. (A validation sketch follows the checklist.)
- [ ] Extend or enhance .github/actions/sanity-check/action.yml to validate YAML syntax in .github/ISSUE_TEMPLATE/
- [ ] Add checks to ensure required fields (title, description, labels) are present in each template
- [ ] Create a workflow job that runs on changes to .github/ISSUE_TEMPLATE/ files
- [ ] Test that the action properly validates all 6 existing templates (bug-report.yml, feature-request.yml, dsip-request.yml, etc.)
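A minimal check the action could wrap — assumes Python 3 with PyYAML available on the runner:

```bash
# Syntax-check every issue template; extend with required-field checks as needed.
for f in .github/ISSUE_TEMPLATE/*.yml; do
  python3 -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" "$f" \
    && echo "ok: $f" || { echo "FAIL: $f"; exit 1; }
done
```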
Add end-to-end test coverage for cluster deployment with multiple registry backends
The repo has 4 different cluster-test configurations (.github/workflows/cluster-test/) covering MySQL+MySQL registry, MySQL+ZooKeeper, PostgreSQL+PostgreSQL, and PostgreSQL+ZooKeeper, but no unified CI workflow orchestrates them. A workflow would ensure all deployment combinations work correctly and catch environment-specific issues. (A local matrix sketch follows the checklist.)
- [ ] Create .github/workflows/cluster-deploy-test.yml that runs cluster-test scenarios on PR or schedule
- [ ] Use matrix strategy to test all 4 combinations: mysql_with_mysql_registry, mysql_with_zookeeper_registry, postgresql_with_postgresql_registry, postgresql_with_zookeeper_registry
- [ ] Reference existing docker-compose-cluster.yaml, deploy.sh, and running_test.sh in each directory
- [ ] Add conditional logic to run on changes to deployment or registry configuration files
- [ ] Ensure workflow validates successful job execution using existing start-job.sh scripts
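A sequential local sketch of the four-way matrix, using the file names from the checklist; the real workflow would run these as parallel jobs:

```bash
for combo in mysql_with_mysql_registry mysql_with_zookeeper_registry \
             postgresql_with_postgresql_registry postgresql_with_zookeeper_registry; do
  dir=".github/workflows/cluster-test/$combo"
  docker compose -f "$dir/docker-compose-cluster.yaml" up -d
  bash "$dir/deploy.sh" && bash "$dir/running_test.sh" \
    || { echo "FAIL: $combo"; exit 1; }
  docker compose -f "$dir/docker-compose-cluster.yaml" down -v
done
```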
🌿Good first issues
- Add missing unit-test coverage for the dolphinscheduler-dao SQL migration modules — the database layer shows coverage gaps, suggested by the PLpgSQL footprint (22K LOC) and multiple database-variant configs with minimal test parity across the MySQL/PostgreSQL/HCL provider modules
- Expand cluster-test coverage: add test configuration for other supported databases (currently only MySQL and PostgreSQL variants exist in .github/workflows/cluster-test/)—would improve reliability across database backends
- Document the task plugin development workflow with a concrete example: the codebase supports custom task types but there's no clear entry point in the file structure—adding a reference implementation (e.g., dolphinscheduler-task-example/) with comments would help contributors extend the platform
⭐Top contributors
- @SbloodyS — 44 commits
- @ruanwenjun — 20 commits
- @njnu-seafish — 20 commits
- @leocook — 3 commits
- @Mrhs121 — 2 commits
📝Recent commits
- 9c5808c — [Chore][API] Remove deprecated ProjectService#checkProjectAndAuth (#18218) (ruanwenjun)
- fe36805 — [Chore] Hotfix ut ci (#18221) (SbloodyS)
- 0055001 — [Chore] Unit-Test optimize (#18214) (SbloodyS)
- 756f2f2 — [Fix-18201][TaskPlugin] Fix RemoteShell task NullPointerException and… (#18210) (leocook)
- 5f11cc2 — [Fix][API] Add missing project authorization on view-gantt/view-variables and trigger workflow APIs (#18212) (ruanwenjun)
- 7611941 — [Fix-18177][Task Plugin] Fix AliyunServerlessSpark plugin dependency conflicts and improve exception handling (#18180) (includetts)
- dd8857b — [Chore] Recover python e2e test in ci (#18209) (SbloodyS)
- 5c85e15 — [Chore] Unit-Test performance optimize (#18213) (SbloodyS)
- 9563c66 — [Chore] Fix UT will be skip at CI (#18205) (ruanwenjun)
- 710236b — [Fix-18197][Master] Fix master failover lock leak (#18207) (ruanwenjun)
🔒Security observations
- High · Potential Hardcoded Credentials in Configuration Files — .github/workflows/cluster-test/*, .github/workflows/schema-check/*. The many Docker Compose files and .sh configuration scripts in the codebase may contain hardcoded database credentials, API keys, or other sensitive information. Files like dolphinscheduler_env.sh and docker-compose-*.yaml in the cluster-test and schema-check directories are common locations for credential exposure. Fix: audit all .sh and .yaml configuration files for hardcoded secrets; use environment-variable injection, secret managers (HashiCorp Vault, AWS Secrets Manager), or Docker secrets instead; add pre-commit hooks to detect and block credential commits.
- High · Docker Images Built from Untrusted Base Images — .github/workflows/cluster-test/*/Dockerfile, .github/workflows/schema-check/*/Dockerfile. Multiple Dockerfiles in the workflow directories show no visible base-image pinning or verification. Using 'latest' tags or unpinned base images increases supply-chain risk and may pull vulnerable images automatically. Fix: pin base images to specific versions (e.g., 'ubuntu:22.04' instead of 'ubuntu:latest'); run image scanners (Trivy, Snyk) in CI/CD; consider minimal base images (Alpine, distroless); verify image signatures and SBOMs where available.
- Medium · Missing Security Headers Configuration — frontend/API configuration files (not fully visible in the structure). No evidence of security-header configuration (CSP, HSTS, X-Frame-Options, etc.) is visible, which matters for a web-based orchestration platform handling sensitive workflow data. Fix: implement comprehensive security headers in API responses and frontend servers (Content-Security-Policy, Strict-Transport-Security, X-Content-Type-Options, X-Frame-Options) and configure CORS with a whitelist-based approach.
- Medium · Potential SQL Injection Risks in Workflow Engine — core database query layer (not fully visible; likely in backend source). As a data orchestration platform with deep database integration, the codebase likely constructs dynamic SQL. Without visible parameterized-query enforcement, SQL injection is a risk. Fix: enforce parameterized queries and prepared statements throughout; use ORM frameworks with built-in injection protection; validate and sanitize all user-supplied parameters; review all database interaction layers.
- Medium · Exposed Docker Ports in Docker Compose Files — .github/workflows/cluster-test/*/docker-compose-*.yaml, .github/workflows/schema-check/*/docker-compose-base.yaml. Test configurations may expose internal services on ports without network segmentation or authentication, patterns that can leak from development/CI into production. Fix: review all exposed ports; use internal networks for inter-service communication; apply firewall rules and network policies; never expose admin/management ports externally; restrict port mappings (127.0.0.1:port).
- Medium · Missing Dependency Vulnerability Scanning — project root (dependency files not provided). No dependency lock files (package-lock.json, pom.xml contents, requirements.txt) were available for analysis. An OWASP dependency-check workflow exists, but actual dependency versions and vulnerability status are not visible. Fix: ensure the existing owasp-dependency-check.yaml is properly configured; add Dependabot, Snyk, or a Maven security scanner; update dependencies regularly and address high/critical vulnerabilities promptly.
- Low · Excessive Workflow Permissions in GitHub Actions — .github/workflows/*.yml, .github/workflows/*.yaml. Many GitHub Actions workflows are present; without reviewing their contents, CI/CD jobs may hold broader permissions than they need, which compromised workflows could exploit. Fix: apply least privilege to all Actions; declare minimal permissions per workflow; prefer OIDC token-based authentication over long-lived credentials; audit workflow permissions regularly.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.