RepoPilot

kubernetes-retired/dashboard

General-purpose web UI for Kubernetes clusters

Healthy

Healthy across all four use cases

Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 4mo ago
  • 6 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 4mo ago
  • ⚠ Single-maintainer risk — top contributor 88% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — the badge live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/kubernetes-retired/dashboard)](https://repopilot.app/r/kubernetes-retired/dashboard)

Paste at the top of your README.md — renders inline like a shields.io badge.
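The same URL pattern works for any repository. A tiny sketch (the `badge` helper name is ours; the URL format is the one shown above):

```shell
# Emit the RepoPilot badge markdown for any owner/repo slug,
# following the URL pattern shown above.
badge() {
  printf '[![RepoPilot: Healthy](https://repopilot.app/api/badge/%s)](https://repopilot.app/r/%s)\n' "$1" "$1"
}
badge kubernetes-retired/dashboard
```

Pass a different slug (e.g. `badge foo/bar`) to generate the badge for another repo.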

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/kubernetes-retired/dashboard on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: kubernetes-retired/dashboard

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/kubernetes-retired/dashboard shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 4mo ago
  • 6 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 4mo ago
  • ⚠ Single-maintainer risk — top contributor 88% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
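The single-maintainer signal above can be approximated from plain git history. A minimal sketch — RepoPilot's exact formula isn't documented, and the inline author list is sample data standing in for `git log --since="1 year ago" --format='%an'` run inside a clone:

```shell
# Illustrative bus-factor computation (assumed method, not RepoPilot's
# documented formula). The inline names stand in for git log output.
authors() {
  printf '%s\n' alice alice alice alice alice alice alice bob charlie dave
}
summary=$(authors | sort | uniq -c | sort -rn \
  | awk 'NR==1 {top=$1} {total+=$1} END {printf "top contributor: %d%% of %d commits", 100*top/total, total}')
echo "$summary"
```

Inside a real clone, replace the `authors` heredoc with the git log pipeline to get the live figure.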

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live kubernetes-retired/dashboard repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/kubernetes-retired/dashboard.

What it runs against: a local clone of kubernetes-retired/dashboard — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in kubernetes-retired/dashboard | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 138 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>kubernetes-retired/dashboard</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of kubernetes-retired/dashboard. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/kubernetes-retired/dashboard.git
#   cd dashboard
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of kubernetes-retired/dashboard and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "kubernetes-retired/dashboard(\.git)?\b" \
  && ok "origin remote is kubernetes-retired/dashboard" \
  || miss "origin remote is not kubernetes-retired/dashboard (artifact may be from a fork)"

# 2. License matches what RepoPilot saw (the Apache-2.0 LICENSE text begins
# "Apache License", so match that rather than the SPDX identifier)
(grep -qiE "apache license" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in \
  "Makefile" \
  "charts/kubernetes-dashboard/Chart.yaml" \
  ".github/workflows/ci.yml" \
  "CONTRIBUTING.md" \
  "DEVELOPMENT.md"
do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 138 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~108d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/kubernetes-retired/dashboard"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).
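Spelled out, that gate looks like this — a sketch with a stand-in `verify` function; in practice you would call the saved script (`./verify.sh`):

```shell
# Sketch of gating agent edits on artifact verification.
# `verify` stands in for running the saved RepoPilot script.
verify() { return 1; }   # stand-in result: pretend a check failed
if verify; then
  status="verified"
  echo "artifact verified -- safe to proceed with edits"
else
  status="stale"
  echo "artifact stale -- regenerate before editing"
fi
```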

</details>

TL;DR

Kubernetes Dashboard is a general-purpose web UI for Kubernetes clusters, built with Go, TypeScript, and Angular, that lets users manage applications, troubleshoot issues, and administer cluster resources from a browser. As of v7.0.0 it uses a multi-container architecture — a Kong gateway (DBless) fronting separate API, auth, metrics-scraper, and web containers — deployed exclusively via Helm charts. The monorepo is multi-module:

  • Go API modules under /api, /auth, /metrics-scraper
  • TypeScript frontend under /web
  • shared Go libraries in /certificates, /client, /csrf, /errors, /helpers, /types
  • Helm chart templates in /charts/kubernetes-dashboard/templates/, with a separate deployment per container plus Kong gateway configuration
  • per-component CI/CD workflows in /.github/workflows/

👥Who it's for

Kubernetes cluster operators and developers who need a visual interface to inspect and manage workloads, deployments, pods, services, and cluster settings without kubectl; DevOps teams managing multi-tenant or production Kubernetes environments.

🌱Maturity & risk

The project is archived and no longer maintained (see README). While it was production-ready with extensive CI/CD pipelines (.github/workflows/), Helm charts, and Prometheus metrics integration, active development has ceased due to lack of maintainers. The maintainers recommend using Headlamp (kubernetes-sigs/headlamp) instead.

High risk: the codebase is archived with no active maintenance, so security patches and bug fixes will not be released. The multi-container Kong gateway dependency (hard requirement since v7.0.0) introduces operational complexity and potential single points of failure. Pinned Kubernetes API versions (k8s.io/* v0.32.0) will eventually become incompatible with newer clusters.

Active areas of work

Nothing—the project is archived. The last activity was related to migrating away from manifest-based installation to Helm-only, dropping support for appVersion in favor of separate versioning per module, and standardizing on Kong gateway as the API proxy.

🚀Get running

Clone the repo: git clone https://github.com/kubernetes-retired/dashboard.git. Build with the Makefile: make (targets are defined in ./Makefile). Or deploy directly via Helm: helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/ && helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard.

Daily commands:

  • Development: make build to compile the Go modules and frontend, or make serve if a serve target exists in the Makefile.
  • Production: deploy via Helm with helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --namespace kubernetes-dashboard --values charts/kubernetes-dashboard/values.yaml. Requires a running Kubernetes cluster and a configured kubectl context.

🗺️Map of the codebase

  • Makefile — Build orchestration and local development entry point; defines all compilation, testing, and deployment targets
  • charts/kubernetes-dashboard/Chart.yaml — Helm chart metadata; defines the primary deployment mechanism and version management for the dashboard
  • .github/workflows/ci.yml — Main CI/CD pipeline definition; controls automated testing, building, and validation of all contributions
  • CONTRIBUTING.md — Contributor guidelines; essential reading for understanding the project's workflow, code standards, and governance
  • DEVELOPMENT.md — Local development setup and architecture overview; required for understanding how to build and test the codebase
  • go.mod — Go module dependencies specification; defines all external libraries and Kubernetes API versions the dashboard depends on
  • charts/kubernetes-dashboard/values.yaml — Helm default configuration values; contains configurable parameters for all dashboard components in production

🛠️How to make changes

Add a New Dashboard Component Deployment

  1. Create a new deployment template in charts/kubernetes-dashboard/templates/deployments/ (charts/kubernetes-dashboard/templates/deployments/api.yaml)
  2. Create corresponding service template in charts/kubernetes-dashboard/templates/services/ (charts/kubernetes-dashboard/templates/services/api.yaml)
  3. Add RBAC resources (ServiceAccount, Role, RoleBinding) in charts/kubernetes-dashboard/templates/rbac/ (charts/kubernetes-dashboard/templates/rbac/api/serviceaccount.yaml)
  4. Add configuration parameters to charts/kubernetes-dashboard/values.yaml (charts/kubernetes-dashboard/values.yaml)
  5. Update Chart.yaml with new version and dependencies if needed (charts/kubernetes-dashboard/Chart.yaml)
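The version bump in step 5 can be scripted. A hypothetical helper (`bump_minor` is ours, demonstrated on inline sample input; in a clone you would run it over charts/kubernetes-dashboard/Chart.yaml):

```shell
# Hypothetical helper for step 5: bump the minor version in a
# Chart.yaml `version:` field. Shown on inline sample input.
bump_minor() {
  awk '/^version:/ {split($2, v, "."); printf "version: %d.%d.0\n", v[1], v[2] + 1; next} {print}'
}
bumped=$(printf 'apiVersion: v2\nversion: 7.14.0\n' | bump_minor)
echo "$bumped"
```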

Add a New REST API Endpoint

  1. Review the API module structure and go-restful patterns in go.mod dependencies (go.mod)
  2. Create endpoint handler using go-restful/v3 framework (referenced in dependencies) (charts/kubernetes-dashboard/templates/config/gateway.yaml)
  3. Register route in gateway configuration (charts/kubernetes-dashboard/templates/config/gateway.yaml)
  4. Add OpenAPI/Swagger documentation using emicklei/go-restful-openapi/v2 (go.mod)

Enable New Monitoring or Security Feature

  1. Create or update security/monitoring template in charts/kubernetes-dashboard/templates/ (charts/kubernetes-dashboard/templates/security/networkpolicy.yaml)
  2. If metrics-related, update metrics-scraper deployment (charts/kubernetes-dashboard/templates/deployments/metrics-scraper.yaml)
  3. Add Prometheus integration if monitoring: update servicemonitor (charts/kubernetes-dashboard/templates/extras/servicemonitor.yaml)
  4. Update Helm values.yaml with new configuration options (charts/kubernetes-dashboard/values.yaml)
  5. Document in CONTRIBUTING.md and update SECURITY.md if security-related (CONTRIBUTING.md)

Update Go Dependencies or API Version

  1. Update go.mod with new k8s.io/api or k8s.io/client-go versions (go.mod)
  2. Run make target to regenerate code and verify compatibility (Makefile)
  3. Update Chart.yaml apiVersion if Kubernetes versions are incremented (charts/kubernetes-dashboard/Chart.yaml)
  4. Add CI/CD workflow updates if needed for testing (.github/workflows/ci.yml)
  5. Document breaking changes in DEVELOPMENT.md (DEVELOPMENT.md)

🔧Why these technologies

  • Kubernetes client-go (v0.32.0) — Native, official SDK for interacting with Kubernetes API servers and managing watch/informer patterns for real-time updates
  • Helm (Chart templating) — Industry-standard package manager for Kubernetes deployments; enables parameterized multi-environment deployments and version management
  • Go + go-restful/v3 — Statically typed, performant language with REST framework for building the API server component; integrated with Kubernetes ecosystem
  • Prometheus + metrics-scraper — Observability standard in Kubernetes; metrics scraper provides node and cluster metrics without requiring kubelet modifications
  • SockJS (sockjs-go v2.1.0) — WebSocket fallback support for real-time updates in browsers without native WebSocket support; essential for live log streaming and event updates

⚖️Trade-offs already made

  • Archived/retired status with recommendation to use Headlamp instead

    • Why: Lack of active maintainers and community contributors made ongoing support unsustainable
    • Consequence: No new features or security patches; users must fork or migrate to maintained alternatives like Headlamp
  • Monolithic Helm chart containing all components (API, Web, Auth, Metrics)

    • Why: Simplifies deployment and version management; all components released together
    • Consequence: Limited flexibility for independent scaling or customization; requires forking or advanced Helm overlays for partial deployments
  • Metrics via separate scraper component rather than kubelet metrics API

    • Why: Reduces kubelet load and provides dashboard-specific metric collection without modifying cluster infrastructure
    • Consequence: Additional component to deploy and manage; metrics disappear from the UI whenever the scraper pod is unavailable

🪤Traps & gotchas

  • Kong gateway dependency — As of v7.0.0, Kong gateway is a hard runtime requirement: the UI, API, auth, and metrics containers are separate and only work when fronted by Kong (DBless).
  • Kubernetes API pinning — Code is locked to k8s.io v0.32.0; running against newer or older clusters may cause compatibility issues.
  • No manifest installation — Attempting to deploy via kubectl manifests directly will fail; Helm charts are the only supported method.
  • Service account / RBAC — The Dashboard requires extensive ClusterRole/ClusterRoleBinding setup to access cluster resources; incomplete RBAC will cause 403 errors.
  • No active maintenance — Security vulnerabilities, bug fixes, and upstream Kubernetes API changes will not be patched.

🏗️Architecture

💡Concepts to learn

  • Kong API Gateway (DBless mode) — As of v7.0.0, Dashboard uses Kong gateway as a required reverse proxy to route requests from the single web UI to multiple backend containers; understanding its configuration in gateway.yaml is essential for deployment and troubleshooting
  • Kubernetes client-go patterns — Dashboard heavily uses k8s.io/client-go to interact with the Kubernetes API server; understanding informers, clients, and watches is needed to modify API handlers or add new resource types
  • OpenAPI 2.0 / Swagger — Dashboard's API is documented via emicklei/go-restful-openapi/v2; specs are auto-generated from Go struct tags and served at /api/v1/openapi.json, used by the frontend to discover endpoints
  • RBAC (Role-Based Access Control) — Dashboard's access to cluster resources is controlled by Kubernetes RBAC; the Helm chart creates ClusterRoles and ServiceAccounts with minimal necessary permissions, and users see only resources they can access
  • Helm templating with Kustomize overlays — The charts/kubernetes-dashboard directory uses Helm templates and supports value overrides; understanding template functions, conditionals, and the ci/ default-values.yaml is needed to customize deployments
  • Multi-container application architecture — Dashboard comprises four separate containers (web, api, auth, metrics-scraper) coordinated by Kong; each is versioned, built, and deployed independently, requiring understanding of container communication and service discovery
  • Certificate management and TLS termination — Dashboard includes a k8s.io/dashboard/certificates module for handling TLS; the k8s.io/dashboard/csrf module prevents CSRF attacks on the API; both are security-critical for production deployments
  • kubernetes-sigs/headlamp — Official recommended successor to Kubernetes Dashboard; actively maintained, simpler architecture, same use case of providing a web UI for Kubernetes cluster management
  • kubernetes/kubernetes — Core Kubernetes project; Dashboard depends on k8s.io/client-go, k8s.io/api, and k8s.io/kubectl from this repo and must track breaking changes in the Kubernetes API
  • kubernetes-sigs/metrics-server — Companion tool that Dashboard's metrics-scraper container queries for node and pod resource usage metrics; part of the observability stack
  • prometheus/prometheus — Dashboard integrates prometheus/client_golang for metrics export; users typically scrape Dashboard metrics with Prometheus for monitoring and alerting
  • Kong/kong — The API gateway used as the reverse proxy connecting Dashboard's four containers (web, API, auth, metrics-scraper); critical runtime dependency since v7.0.0

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add security policy enforcement workflow for container images

The repo has multiple CD workflows (cd-api.yml, cd-web.yml, cd-metrics-scraper.yml, cd-auth.yml, cd-helm.yml) that build Docker images, but there's no evidence of image scanning or policy enforcement in .github/workflows/. Given the security-sensitive nature of a Kubernetes dashboard handling cluster credentials, adding a workflow to scan images with tools like Trivy or Anchore would prevent shipping vulnerable images. This is critical for a security-focused project with SECURITY.md and SECURITY_CONTACTS files.

  • [ ] Create .github/workflows/security-scan-images.yml with Trivy/Anchore scanning step
  • [ ] Configure scan to run on all CD workflows after image builds (api, web, metrics-scraper, auth)
  • [ ] Add policy to fail on HIGH/CRITICAL vulnerabilities in .github/workflows/security-scan-images.yml
  • [ ] Update CONTRIBUTING.md with instructions on running local image scans before submitting PRs

Add missing RBAC test coverage for dashboard service accounts across multiple namespaces

The Helm chart templates create multiple service accounts (charts/kubernetes-dashboard/templates/rbac/*/serviceaccount.yaml for api, web, metrics-scraper) with distinct roles. However, there's no visible test file validating that these RBAC configurations work correctly together, especially for multi-namespace deployments. Adding integration tests to verify RBAC bindings is essential for a tool that manages cluster access.

  • [ ] Create test/rbac_integration_test.go to validate service account creation and role bindings
  • [ ] Test that api, web, and metrics-scraper service accounts have correct permissions independently
  • [ ] Validate ClusterRoleBinding for metrics-scraper doesn't grant excessive cluster-wide permissions
  • [ ] Add test to chart (charts/kubernetes-dashboard/ci/) to verify rbac templates render correctly with different RBAC configurations

Implement changelog validation in PR workflow to enforce release notes documentation

The repo has comprehensive CI/CD workflows (ci.yml, pr.yml) but no validation that user-facing changes include changelog entries. Given that this is an archived but previously maintained project with structured charts and multiple components, enforcing changelog entries prevents missing release notes. This is especially important for the Helm chart users who need clear upgrade paths.

  • [ ] Create .github/workflows/changelog-check.yml that validates CHANGELOG.md or changelog entry for each PR
  • [ ] Configure to require either modification to CHANGELOG.md or a changelog/ directory entry (following common patterns like go-changelog)
  • [ ] Exempt workflow-only and docs-only changes using path filters in the workflow
  • [ ] Add instructions to CONTRIBUTING.md documenting the changelog format and exemption rules
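The core of that check might look like this — hypothetical logic with inline sample data; the real workflow would list files changed against the PR base (e.g. via `git diff --name-only origin/master...HEAD`):

```shell
# Sketch of the changelog gate. The inline file list stands in for
# the PR's changed files; real CI would diff against the base branch.
changed_files() {
  printf '%s\n' modules/api/main.go CHANGELOG.md
}
if changed_files | grep -qx 'CHANGELOG.md'; then
  verdict="changelog entry present"
else
  verdict="missing changelog entry"
fi
echo "$verdict"
```

Path-filter exemptions for docs-only changes would be layered on top of this in the workflow definition.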

🌿Good first issues

  • The project is archived and unmaintained, so traditional contributions are not accepted. However, a potential fork-friendly task: audit and document which Go module versions (k8s.io/api, k8s.io/client-go) are compatible with Kubernetes 1.30+ and update go.mod accordingly to enable compatibility with newer clusters.
  • Add missing unit test coverage for the auth module (k8s.io/dashboard/auth); inspect api/auth/ for untested code paths and write test files following the pattern found in other modules.
  • Document the Kong gateway routing configuration in detail (currently only in charts/kubernetes-dashboard/templates/config/gateway.yaml); create a new docs/gateway-routing.md explaining how requests flow from the web UI through Kong to each backend service and why this architecture was chosen.

Top contributors


📝Recent commits

  • 0ba796d — Add archived status and recommend Headlamp (floreks)
  • 4940626 — chore(chart): release 7.14.0 (#10411) (floreks)
  • 7bb0900 — chore(deps): bump codecov/codecov-action from 5.4.2 to 5.5.1 (#10345) (dependabot[bot])
  • ca3ca61 — chore(deps): bump github.com/spf13/pflag in /modules/web (#10342) (dependabot[bot])
  • 4a2912c — chore(deps): bump github.com/go-viper/mapstructure/v2 (#10327) (dependabot[bot])
  • a519e64 — chore(deps): bump azure/setup-helm from 4.3.0 to 4.3.1 (#10321) (dependabot[bot])
  • 8ec571d — chore(deps): bump github.com/emicklei/go-restful/v3 in /modules/api (#10315) (dependabot[bot])
  • b17bd6e — chore(deps): bump amannn/action-semantic-pull-request from 5 to 6 (#10311) (dependabot[bot])
  • 3139cee — chore(deps): bump actions/checkout from 4 to 5 (#10300) (dependabot[bot])
  • 9ee997a — chore(deps-dev): bump eslint-config-prettier in /modules/web (#10293) (dependabot[bot])

🔒Security observations

The main security concern is the project's archived status: with no active maintenance, there is no security patching or dependency monitoring. The codebase includes well-structured security configurations (RBAC, NetworkPolicy, CSRF protection), but they are stale and unmaintained, making the project unsuitable for production use in security-sensitive environments. If continued use is necessary, establish internal security review and patching procedures and independently verify all dependencies; otherwise, migrate to an actively maintained alternative.

  • High · Archived Project - No Longer Maintained — README.md, Project Status. The Kubernetes Dashboard project is explicitly archived and no longer maintained due to lack of active maintainers and contributors. This means security vulnerabilities will not be patched, and the project poses significant risk in production environments. Fix: Migrate to actively maintained alternatives such as Headlamp (https://github.com/kubernetes-sigs/headlamp) or other supported Kubernetes UI projects. If continued use is necessary, establish internal security review and patching procedures.
  • Medium · Unmaintained Go Toolchain Pin — go.mod - go 1.24.0. The go.mod pins Go 1.24.0. Not inherently insecure, but an archived project will not track new Go releases, and only the two most recent minor Go versions receive security fixes, so this pin will fall out of support. Fix: Pin to a currently supported Go release; verify compatibility and test thoroughly before updating.
  • Medium · Potential Unvalidated Dependency Versions — go.mod - k8s.io/dashboard/* modules. Multiple internal module dependencies use placeholder versions (0.0.0-00010101000000-000000000000), indicating local-only or development-mode dependencies. This makes it difficult to verify the actual versions being used and could mask security issues. Fix: Replace placeholder versions with actual version tags from the module repository. Use go mod tidy and ensure all dependencies are properly resolved.
  • Medium · Third-Party Dependencies Without Recent Verification — go.mod - indirect dependencies. The project depends on multiple third-party packages (gin-gonic/gin, prometheus/client_golang, sockjs-go, etc.). As an archived project, these dependencies are not being actively monitored for newly discovered vulnerabilities. Fix: Conduct a security audit of all direct and transitive dependencies. Use tools like 'go list -json -m all | nancy' or Dependabot to identify known vulnerabilities. For production use, maintain a fork with security patches.
  • Low · SECURITY.md Appears Truncated — SECURITY.md. The SECURITY.md file appears to be truncated in the provided content, with the final link incomplete. This may indicate documentation issues or incomplete security guidance. Fix: Ensure SECURITY.md is complete and provides clear vulnerability reporting instructions. For archived projects, update it to direct users to maintained alternatives and explain the archived status.
  • Low · Helm Chart Security Configuration — charts/kubernetes-dashboard/templates/security/. While Helm templates for security (NetworkPolicy, PDB) exist, the archived status means these configurations may not reflect current security best practices for Kubernetes deployments. Fix: If using these Helm charts, review and update security policies (NetworkPolicy, RBAC rules, Pod Security Standards) to match current Kubernetes security best practices.
  • Low · No Recent Security Audits Visible — Repository root. No recent security audit reports, penetration test results, or documented security reviews are present in the repository structure, which is standard for archived projects. Fix: Before deploying to production, conduct independent security assessment. Consider engaging professional security auditors for critical deployments.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
