RepoPilot

kedacore/keda

KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event-driven scaling for any container running in Kubernetes.

Healthy

Healthy across the board

Use as dependency — Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit today
  • 31+ active contributors
  • Distributed ownership (top contributor 22% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/kedacore/keda)](https://repopilot.app/r/kedacore/keda)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/kedacore/keda on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: kedacore/keda

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/kedacore/keda shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit today
  • 31+ active contributors
  • Distributed ownership (top contributor 22% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live kedacore/keda repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/kedacore/keda.

What it runs against: a local clone of kedacore/keda — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in kedacore/keda | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | Last commit ≤ 30 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>kedacore/keda</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of kedacore/keda. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/kedacore/keda.git
#   cd keda
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of kedacore/keda and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "kedacore/keda(\.git)?\b" \
  && ok "origin remote is kedacore/keda" \
  || miss "origin remote is not kedacore/keda (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "Apache License|Apache-2\.0" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 30 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~0d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/kedacore/keda"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

KEDA (Kubernetes-based Event Driven Autoscaling) is a CNCF graduated project that enables fine-grained autoscaling of Kubernetes workloads based on external event sources, scaling pods to/from zero without external dependencies. It acts as a custom Kubernetes metrics server and integrates with the Horizontal Pod Autoscaler (HPA) to trigger scaling based on events from 50+ sources like Azure Service Bus, AWS SQS, Kafka, Prometheus, and HTTP endpoints. The repository is a monorepo: the apis/ directory contains the Kubernetes CRD definitions for ScaledObject/ScaledJob, pkg/scalers/ contains the 50+ event-source adapters, cmd/ holds the controller/adapter/webhook entrypoints, and pkg/ houses the core scaling logic, metrics servers, and authentication. The project follows standard Kubernetes controller patterns using controller-runtime (sigs.k8s.io/controller-runtime v0.22.4) with operator-sdk scaffolding visible in the PROJECT file.

👥Who it's for

Kubernetes cluster operators and platform engineers who need event-driven autoscaling for serverless/FaaS workloads, batch jobs, and microservices that should scale based on message queues, metrics, or custom event sources rather than just CPU/memory.

🌱Maturity & risk

Highly mature and production-ready. The project is a CNCF graduated project with comprehensive CI/CD (main-build, nightly-e2e, pr-e2e workflows), extensive test coverage, active maintenance, multiple Dockerfiles for adapters/webhooks, and public Helm chart distribution via ArtifactHub. The Go codebase is substantial (5M+ lines) with proper versioning (v2 module path) and established release processes.

Low risk for core functionality but moderate risk from breadth: 50+ scaler integrations create a large surface area of external dependencies (Azure SDKs, AWS clients, Kafka drivers, etc.), each with their own vulnerability lifecycle. Last activity appears recent based on active workflows and dependabot configuration, but contributors should verify current maintainer capacity given the CNCF graduation status and broad scope.

Active areas of work

Active development on multiple fronts: v1 and v2 parallel builds (v1-build.yml, main-build.yml workflows), nightly e2e testing across multiple architectures (s390x smoke tests), security scanning (CodeQL, Trivy, FOSSA license checks), and dependency management via Dependabot. The .github/workflows directory shows active PR validation, release automation, and community bot workflows (auto-add issues, welcome messages).

🚀Get running

git clone https://github.com/kedacore/keda.git
cd keda
make build
make deploy

Refer to BUILD.md and Makefile for detailed targets. Docker is required; devcontainer support available in .devcontainer/.

Daily commands: make build compiles the operator, make deploy installs to a Kubernetes cluster, and make test runs unit tests. See the Makefile for additional targets like docker-build, manifests, and generate. For local development, use .devcontainer/ for a containerized environment or follow the CONTRIBUTING.md setup.

🗺️Map of the codebase

🛠️How to make changes

To add a new scaler: create pkg/scalers/my_scaler.go implementing the Scaler interface, register in pkg/scalers/scaler_factory.go, add tests in pkg/scalers/my_scaler_test.go. For CRD changes, edit files under apis/ and run make manifests generate to update generated code. For authentication, extend pkg/authentication/. See CREATE-NEW-SCALER.md for detailed walkthrough.

🪤Traps & gotchas

  1. Kubernetes version pinning: go.mod replaces k8s.io/* and sigs.k8s.io/controller-runtime with specific versions; mismatches cause reconciliation failures.
  2. Cloud SDK authentication: each scaler (Azure, AWS, GCP) expects specific credential setup (AZURE_AUTH_METHOD, AWS_ROLE_ARN, etc.); integration tests require live credentials or mocking.
  3. CRD generation: modifying apis/ requires running make manifests generate, or the operator will reject invalid manifests.
  4. Metrics server registration: the operator registers itself as a metrics provider; conflicts with other custom metrics servers or Prometheus adapters can occur.
  5. TriggerAuthentication resolution: scalers reference TriggerAuthentication objects by name/namespace; missing or misconfigured auth secrets fail silently during query.

💡Concepts to learn

  • Kubernetes Custom Resource Definitions (CRDs) — ScaledObject and ScaledJob are custom resources KEDA defines; understanding CRDs is essential to extend KEDA and use it correctly
  • Metrics Server Pattern — KEDA implements the Kubernetes Metrics Server interface so the HPA can query event-based metrics; understanding this protocol is key to grasping how scaling decisions are made
  • Event Source Adapter Pattern — 50+ scalers in pkg/scalers/ all implement a common Scaler interface to abstract different event sources; this loose coupling is central to KEDA's extensibility
  • Horizontal Pod Autoscaler (HPA) Integration — KEDA creates and manages HPA objects on behalf of users; understanding how HPA reads metrics and scales replicas is essential to debug scaling behavior
  • Reconciliation Loop (Operator Pattern) — KEDA is a Kubernetes operator that continuously reconciles desired state (ScaledObject spec) with actual state; understanding this pattern is crucial for contributing controller logic
  • TriggerAuthentication & Secret Management — KEDA decouples credentials from scaling rules via TriggerAuthentication CRD; understanding Kubernetes secret handling and credential injection is required for adding cloud integrations
  • Controller-runtime Framework — KEDA uses sigs.k8s.io/controller-runtime v0.22.4 for reconciliation, caching, and event handling; familiarity with this framework speeds up understanding the codebase
  • kubernetes/autoscaler — Official Kubernetes autoscaling project; KEDA uses HPA and feeds into Kubernetes' metrics server ecosystem
  • kedacore/keda-docs — Official KEDA documentation repository; separate from main code, contains user guides and scaler docs
  • knative/serving — Serverless workload autoscaling on Kubernetes; Knative and KEDA often used together for event-driven functions
  • operator-framework/operator-sdk — Scaffolding and patterns KEDA uses for building the Kubernetes operator (referenced in PROJECT file)
  • prometheus-operator/prometheus-operator — KEDA can trigger scaling based on Prometheus metrics; these projects frequently integrate
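The HPA integration described above reduces to one formula worth internalizing: the HPA's documented scaling rule is desired = ceil(currentReplicas × currentMetric / targetMetric), and KEDA itself handles the 0↔1 transition via the scaler's activity flag while the HPA handles 1↔N. A minimal sketch of that arithmetic:

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas sketches the HPA scaling rule KEDA relies on:
// desired = ceil(currentReplicas * currentMetric / targetMetric).
// For an external metric with an AverageValue target this is
// equivalent to ceil(totalMetricValue / targetAveragePerReplica).
func desiredReplicas(currentReplicas int, currentMetric, targetMetric float64) int {
	return int(math.Ceil(float64(currentReplicas) * currentMetric / targetMetric))
}

func main() {
	// 3 replicas each seeing an average of 40 queued messages,
	// with a target of 20 messages per replica:
	fmt.Println(desiredReplicas(3, 40, 20)) // 6
}
```

The real controller adds tolerances, stabilization windows, and min/max replica clamping on top of this, so treat the function as the conceptual core, not the production behavior.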

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for scaler authentication (Azure/GCP/AWS SDK clients)

The repo has extensive scaler integrations (Azure Event Hubs, Azure Kusto, GCP Monitoring, AWS services) but lacks dedicated unit test files for authentication flows. Testing auth failures, credential rotation, and token refresh would prevent production incidents. The scalers directory structure suggests auth is scattered across scaler implementations.

  • [ ] Audit existing scalers in pkg/scalers/ for auth-related code paths (Azure SDK, GCP client initialization, AWS credential chains)
  • [ ] Create pkg/scalers/authentication_test.go with mocked credential providers for each cloud provider
  • [ ] Add tests for credential validation failures, expired tokens, and fallback authentication methods
  • [ ] Reference specific scalers in tests (e.g., azure_eventhub_scaler.go, gcp_stackdriver_scaler.go) to ensure coverage

Create E2E test workflow template for new scaler validation (template-new-scaler-validation.yml)

The repo has template workflows for smoke tests (template-smoke-tests.yml, template-s390x-smoke-tests.yml, template-versions-smoke-tests.yml) but no standardized pipeline for contributors adding new scalers. This would enforce consistent testing requirements when PRs add scalers to CREATE-NEW-SCALER.md workflow.

  • [ ] Create .github/workflows/template-new-scaler-validation.yml reusable workflow
  • [ ] Include steps: build scaler, run scaler unit tests, validate CRD schema (apis/keda/v1alpha1/)
  • [ ] Add optional conditional for cloud provider e2e tests (with skip flags for PR authors without credentials)
  • [ ] Document in CREATE-NEW-SCALER.md how contributors should invoke this workflow in their PRs

Add integration tests for ScalableObject webhook validation (apis/keda/v1alpha1/scalable_object_webhook_test.go)

The repo has webhook files (cloudeventsource_webhook.go, cloudeventsource_webhook_test.go) but scalable_object.go lacks a corresponding webhook_test.go. Given KEDA's critical role in production autoscaling, webhook validation tests are essential to prevent invalid ScalableObject specs from being accepted, which could cause scaling outages.

  • [ ] Create apis/keda/v1alpha1/scalable_object_webhook_test.go following the pattern of cloudeventsource_webhook_test.go
  • [ ] Test validation of required fields (ScaleTargetRef, triggers array)
  • [ ] Test invalid scaler configurations (missing properties, incompatible combinations)
  • [ ] Test edge cases: empty triggers, duplicate scaler names, invalid cooldown/pollingInterval values
  • [ ] Run with make test to ensure integration with existing test suite
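A table-driven shape for those webhook tests might look like the sketch below. The ScaledObjectSpec and validate names here are simplified stand-ins — the real types live in apis/keda/v1alpha1/ and validation runs through controller-runtime admission webhooks — but the pattern of enumerating valid and invalid specs carries over directly.

```go
package main

import (
	"errors"
	"fmt"
)

// ScaledObjectSpec is a hypothetical, pared-down spec; the real CRD
// type has many more fields.
type ScaledObjectSpec struct {
	ScaleTargetName string
	Triggers        []string
}

// validate sketches the kind of checks an admission webhook performs.
func validate(s ScaledObjectSpec) error {
	if s.ScaleTargetName == "" {
		return errors.New("scaleTargetRef is required")
	}
	if len(s.Triggers) == 0 {
		return errors.New("at least one trigger is required")
	}
	return nil
}

func main() {
	// Table-driven cases, one row per scenario from the checklist.
	cases := []struct {
		name    string
		spec    ScaledObjectSpec
		wantErr bool
	}{
		{"valid", ScaledObjectSpec{"my-deploy", []string{"kafka"}}, false},
		{"missing target", ScaledObjectSpec{"", []string{"kafka"}}, true},
		{"empty triggers", ScaledObjectSpec{"my-deploy", nil}, true},
	}
	for _, c := range cases {
		err := validate(c.spec)
		fmt.Printf("%s: gotErr=%v wantErr=%v\n", c.name, err != nil, c.wantErr)
	}
}
```

In the real test file each row would feed the webhook's validation entrypoint instead of a local helper, following cloudeventsource_webhook_test.go.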

🌿Good first issues

  • Add unit tests for uncovered scaler edge cases: many of the 50+ scalers in pkg/scalers/ likely lack comprehensive error handling tests (network timeouts, malformed responses, auth failures); pick one scaler and add tests for edge cases in a new *_test.go file.
  • Improve error messages in ScaledObject validation: the CRD validation in apis/keda/v1alpha1/scaledobject_validation.go (if it exists) or the controller could provide more actionable feedback when users misconfigure triggers (e.g., missing required fields per trigger type).
  • Document scaler-specific configuration requirements: CREATE-NEW-SCALER.md is thorough, but the repo lacks a 'scaler reference' doc listing all 50+ scalers with their required TriggerAuthentication fields and example manifests; create a doc in /docs/scalers/ or update README with a table.

Top contributors

Click to expand

📝Recent commits

Click to expand
  • 4a465f5 — feature: add custom HTTP client timeout to Metrics API scaler (#7551) (RokeshVS)
  • 11b592c — feat: deprecate authToken from triggerMetadata in InfluxDB scaler (#7723) (rickbrouwer)
  • 9e2332a — fix(pulsar): scope auth header re-application on redirects (#7692) (mateenali66)
  • cead548 — feat: deprecate subscriptionSize in GCP PubSub scaler (#7721) (rickbrouwer)
  • 7d7a8ff — rfac: handle paused scaling directly in reconciler (#7664) (rickbrouwer)
  • ecc8c28 — fix: use %w verb for proper error wrapping (#7515) (archy-rock3t-cloud)
  • 5adb59e — test: add unit tests for condition_types and gvkr_types (#7543) (archy-rock3t-cloud)
  • 5f89276 — feat: Handle rate limit errors by respecting X-RateLimit-Reset and Retry-After headers and returning cached queue length (andrewhibbert)
  • 814beb8 — feat(mssql): add Azure Workload Identity support (#7673) (MO2k4)
  • a1eeed3 — feat(aws-sqs): Add external ID support for cross-account access (#6916) (tangobango5)

🔒Security observations

  • High · Go Version Specification — go.mod. The go.mod file specifies 'go 1.26.0'; verify this corresponds to a released, currently supported Go toolchain. An unreleased or unsupported version can cause unexpected build behavior, dependency-resolution issues, and missed security patches if the build system falls back to an older toolchain. Fix: pin go.mod to a released, supported Go version and verify the build pipeline uses matching version constraints.
  • Medium · Incomplete Dependency Listing — go.mod. The provided go.mod file is truncated with 'github.com/Azure/azure-s' cut off mid-name in the require section. This makes it impossible to fully audit all dependencies for known vulnerabilities. Complete dependency analysis cannot be performed. Fix: Provide the complete and untruncated go.mod file for comprehensive dependency vulnerability scanning. Implement automated vulnerability scanning in CI/CD using tools like 'go list -json -m all' combined with vulnerability databases.
  • Medium · Reliance on Multiple Dependency Management Tools — SECURITY.md, .github/dependabot.yml, .fossa.yml. The SECURITY.md indicates reliance on Renovate, Dependabot, Snyk, Whitesource Bolt, and Trivy for vulnerability detection. While defense-in-depth is good, this creates operational complexity and potential gaps if any tool has configuration issues or if there are tool-specific blind spots. Fix: Consolidate to primary and secondary tools with clear ownership. Document fallback procedures if primary tools fail. Ensure all tools have consistent configuration and coverage of the entire dependency tree.
  • Medium · Replace Statements Without Clear Justification — go.mod (replace section). The go.mod file contains multiple 'replace' directives pinning versions of Kubernetes and Prometheus packages. While necessary for compatibility, missing explanatory comments could lead to accidental removal during maintenance, breaking the build or introducing vulnerabilities. Fix: Add inline comments explaining why each replace directive is necessary, referencing GitHub issues or compatibility matrices. Consider moving to a dedicated dependency policy document.
  • Medium · Distroless Image Used But Base Image Verification Missing — Dockerfile (final stage). The Dockerfile uses 'gcr.io/distroless/static:nonroot' as the final base image, which is good for security. However, there is no image signature verification, SBOM validation, or pinning to specific digests instead of tags. Fix: Pin the distroless image to a specific digest (e.g., 'gcr.io/distroless/static:nonroot@sha256:...'). Implement container image signing verification and SBOM validation in your CI/CD pipeline using tools like Cosign and SLSA framework.
  • Low · Multi-Stage Build Cache Exposure — Dockerfile (builder stage). The Dockerfile uses RUN --mount=type=cache for Go build cache. While this improves build performance, the cache directory (/root/.cache/go-build) could potentially expose sensitive build artifacts if the cache layer is not properly isolated. Fix: Ensure BuildKit secrets and caches are handled via secure mechanisms. Consider using 'RUN --mount=type=cache,sharing=locked' for concurrent builds. Verify cache is not exported in final image.
  • Low · Missing Security Configuration in DevContainer — .devcontainer/devcontainer.json. The .devcontainer/devcontainer.json file is present but contents not provided. DevContainers can introduce security issues if not properly configured (e.g., running as root, mounting sensitive volumes, using untrusted images). Fix: Ensure the devcontainer runs with non-root user, uses image digests rather than tags, implements proper secret handling, and restricts volume mounts to necessary directories only.
  • Low · Webhook Components Require Runtime Security Review — apis/keda/v1alpha1/ webhooks. Multiple webhook files (cloudeventsource_webhook.go, scaledjob_webhook.go, scaledobject_webhook.go, triggerauthentication_webhook.go) handle dynamic Kubernetes resource validation. Admission webhooks are a common attack surface for injection attacks and require careful security review.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
