aquasecurity/kube-bench
Checks whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark
Healthy across the board
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 6d ago
- ✓9 active contributors
- ✓Apache-2.0 licensed
- ✓CI configured
- ✓Tests present
- ⚠Concentrated ownership — top contributor handles 66% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/aquasecurity/kube-bench)

Paste at the top of your README.md — renders inline like a shields.io badge.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/aquasecurity/kube-bench on X, Slack, or LinkedIn.
Onboarding doc
Onboarding: aquasecurity/kube-bench
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in the "Verify before trusting" section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/aquasecurity/kube-bench shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 6d ago
- 9 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Concentrated ownership — top contributor handles 66% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live aquasecurity/kube-bench
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/aquasecurity/kube-bench.
What it runs against: a local clone of aquasecurity/kube-bench — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in aquasecurity/kube-bench | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | Last commit ≤ 36 days ago | Catches sudden abandonment since generation |
```shell
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of aquasecurity/kube-bench. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/aquasecurity/kube-bench.git
#   cd kube-bench
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok()   { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of aquasecurity/kube-bench and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "aquasecurity/kube-bench(\.git)?\b" \
  && ok "origin remote is aquasecurity/kube-bench" \
  || miss "origin remote is not aquasecurity/kube-bench (artifact may be from a fork)"

# 2. License matches what RepoPilot saw. Apache LICENSE files open with
#    "Apache License ... Version 2.0" rather than the bare SPDX identifier,
#    so match on those phrases (with a package.json fallback for other repos).
( grep -qi "Apache License" LICENSE 2>/dev/null \
    && grep -q "Version 2\.0" LICENSE 2>/dev/null ) \
  || grep -qE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 36 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~6d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/aquasecurity/kube-bench"
  exit 1
fi
```
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
kube-bench is a Go-based security auditing tool that runs CIS Kubernetes Benchmark checks against a running Kubernetes cluster to verify compliance with security best practices. It executes YAML-defined test configurations against cluster nodes, control planes, and etcd to detect misconfigurations in kubelet flags, API server settings, authentication, RBAC, and network policies. The codebase is a monolithic single binary: the entry point uses the Cobra CLI framework (spf13/cobra); configuration loading goes through Viper (spf13/viper, magiconair/properties); test definitions live in cfg/, organized by benchmark version (cis-1.10, cis-1.11, aks-1.8, ack-1.0, etc.) with YAML files for controlplane/node/etcd/policies; and the test execution and reporting logic is compiled into the main binary.
👥Who it's for
Kubernetes cluster operators, security engineers, and DevOps teams responsible for hardening Kubernetes deployments who need automated compliance verification against CIS standards across managed services (AWS EKS/ACK, Azure AKS) and self-managed clusters.
🌱Maturity & risk
Production-ready and actively maintained. The project has significant adoption (Docker Pulls badge, GitHub releases), comprehensive CI/CD via GitHub Actions (.github/workflows/), strong test coverage with Ginkgo test framework integration, and recent Go module updates (go 1.26.2). Regular releases and multi-variant Dockerfile support (standard, UBI, FIPS) indicate stable operations.
Low to moderate risk. Dependencies are well-managed through go.mod with stable upstream libraries (cobra CLI, viper config, k8s client libraries v0.35.2). The single-maintainer risk is typical of security tools in the Aqua ecosystem. The modular YAML-based test configuration (cfg/ directory with separate benchmark versions) mitigates breaking changes risk, though Kubernetes API deprecations could require frequent test updates.
Active areas of work
Active development indicated by recent Go module version bumps (v1.26.2), GitHub Actions workflows for build/release/publish, and Docker image variants (Dockerfile, Dockerfile.ubi, Dockerfile.fips.ubi). The .github/workflows/ directory shows automated publishing to container registries and release management. dependabot.yml suggests active dependency maintenance.
🚀Get running
Clone, build, and run locally: git clone https://github.com/aquasecurity/kube-bench && cd kube-bench && go build -o kube-bench . && ./kube-bench run --benchmark cis-1.11. For Kubernetes in-cluster execution: kubectl apply -f job.yaml (supplied in repo root) to run as a Kubernetes Job and check logs with kubectl logs <pod-name>.
Daily commands:
Development: go build -o kube-bench . builds the binary. Run against local cluster: ./kube-bench run --benchmark cis-1.11. In-cluster via Kubernetes: kubectl apply -f job.yaml applies the supplied Job manifest. Docker: docker run --pid=host -v /etc:/etc:ro -v /lib:/lib:ro aquasec/kube-bench:latest run --benchmark cis-1.11 (needs host PID and filesystem access).
🗺️Map of the codebase
- cfg/cis-1.11/controlplane.yaml: Defines Kubernetes API server and control plane security checks (kubelet flags, authentication modes, RBAC); primary source of truth for control plane audit rules
- cfg/cis-1.11/node.yaml: Defines worker node security checks (kubelet configuration, container runtime, kernel parameters); audits node-level security posture
- cfg/cis-1.11/config.yaml: Master configuration file that routes audit logic to specific check executors and defines benchmark metadata for CIS 1.11 version
- job.yaml: Supplied Kubernetes Job manifest for running kube-bench in-cluster with proper host access (PID namespace, volume mounts to /etc, /lib)
- .github/workflows/release.yml: Defines release automation: goreleaser configuration, multi-platform binary builds, Docker image publishing to aquasec/kube-bench registry
🛠️How to make changes
To add new security checks: edit/extend YAML files in cfg/<benchmark-name>/ directories (e.g., cfg/cis-1.11/controlplane.yaml for control plane checks). To add new benchmark version: create new directory under cfg/ with full set (config.yaml, controlplane.yaml, node.yaml, etcd.yaml, policies.yaml). CLI/output changes: modify main Go package. Test definitions: Ginkgo tests in unnamed packages. AWS SecurityHub integration: see dependencies on github.com/aws/aws-sdk-go-v2/service/securityhub.
🪤Traps & gotchas
kube-bench requires elevated privileges (PID namespace access, read access to /etc and /lib on host) to inspect running processes and config files—local binary execution needs sudo or equivalent. YAML benchmark configs (cfg/) must match Kubernetes version being tested or checks will fail/miss vulnerabilities. AWS SecurityHub integration requires AWS credentials and proper IAM permissions if result publishing is enabled. Config file paths are hard-coded to Unix standard locations (/etc/kubernetes/, /var/lib/kubelet/); Windows nodes are not supported. The tool assumes systemd or specific init systems when checking process flags.
💡Concepts to learn
- CIS Benchmarks (Center for Internet Security) — kube-bench's entire test suite is derived from CIS Kubernetes Benchmark standards; understanding CIS methodology (control groups, severity levels, remediation) is essential to interpreting and extending checks
- Kubernetes Pod Security Standards (PSS) / Pod Security Policy (PSP) — Many kube-bench checks (e.g., in policies.yaml) validate PSP/PSS rules; knowing how Kubernetes enforces security policies helps understand what kube-bench is auditing
- RBAC (Role-Based Access Control) — Significant portion of kube-bench checks validate RBAC configuration, service account bindings, and API server authorization modes; essential to Kubernetes security hardening
- Kubelet Configuration & Runtime Flags — kube-bench heavily inspects kubelet process flags, config files (/etc/kubernetes/kubelet.conf), and feature gates; understanding kubelet security surface is core to node hardening checks
- etcd Security (TLS, RBAC, encryption) — kube-bench includes dedicated etcd.yaml checks validating TLS certificates, client authentication, and encryption-at-rest; etcd is Kubernetes' critical security boundary
- Cloud Provider-Specific Compliance (AWS EKS, Azure AKS) — kube-bench includes variant configs (cfg/aks-1.8, cfg/ack-1.0) for managed Kubernetes services with different threat models; understanding shared responsibility models affects which checks apply
🔗Related repos
- aquasecurity/trivy — All-in-one cloud-native security scanner that incorporates kube-bench's CIS Kubernetes scanning alongside vulnerability scanning and misconfig detection; official successor/integration point
- aquasecurity/trivy-operator — Kubernetes Operator that runs Trivy (including kube-bench CIS checks) continuously in-cluster; operationalizes kube-bench for ongoing compliance monitoring
- kubernetes/kubernetes — Upstream Kubernetes repository; kube-bench test definitions (cfg/) must track API deprecations, kubelet flag changes, and new security features in each K8s release
- cisecurity/kubernetes-benchmark — Official CIS Kubernetes Benchmark specification document (non-code); kube-bench YAML configs are direct implementations of CIS control requirements
- cloudscape-design/components — Not directly related; search instead for 'polaris' (open-source config auditor) or 'kubesec' as alternative Kubernetes security checkers with different focus
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for all CIS Kubernetes Benchmark versions
The repo contains configuration files for multiple CIS versions (cis-1.10, cis-1.11, cis-1.12, cis-1.20) and cloud providers (aks, ack), but there's no visible test coverage validating that each config file parses correctly and produces expected check results. This would catch regressions when updating benchmark configs and ensure consistency across versions.
- [ ] Create test file pkg/runner/runner_test.go (or extend existing tests) with parameterized tests
- [ ] Add test cases iterating through cfg/ directory for each benchmark version
- [ ] Validate that each config.yaml, controlplane.yaml, etcd.yaml, node.yaml can be loaded without errors
- [ ] Add assertions that expected checks are discovered from each benchmark version
- [ ] Run tests in GitHub Actions workflow (reference .github/workflows/build.yml)
Add schema validation and documentation for benchmark config YAML files
The cfg/ directory contains numerous YAML files defining security checks, but there's no visible schema documentation or validation tool. Contributors adding new benchmarks or updating existing ones have no clear specification of valid fields, required properties, or format constraints. This leads to potential errors and inconsistent configs.
- [ ] Create pkg/config/schema.go with a struct definition documenting the expected YAML schema for benchmark configs
- [ ] Add JSON schema file docs/benchmark-config-schema.json for external tooling/IDE support
- [ ] Create internal validator in pkg/config/validator.go with functions to validate benchmark configs against schema
- [ ] Add validation call to config loading code to reject invalid files early with helpful error messages
- [ ] Add docs/CONFIG.md explaining the benchmark config format with examples from cfg/cis-1.20/
Add workflow tests for cloud-specific benchmark detection logic
The repo supports multiple Kubernetes distributions (CIS, AKS, ACK) with version-specific configs, but there are no visible tests validating the benchmark detection/selection logic. A contributor cannot verify that the correct benchmark version is chosen for their environment without manual testing.
- [ ] Create pkg/benchmarks/detection_test.go with test cases for benchmark auto-detection
- [ ] Add test cases mocking Kubernetes cluster metadata to verify correct CIS/AKS/ACK version selection
- [ ] Add test cases for version comparison logic and fallback behavior when exact version isn't available
- [ ] Add test cases validating cfg/*/config.yaml is correctly parsed to determine benchmark availability
- [ ] Reference GitHub Actions workflow .github/workflows/build.yml to ensure tests run on each commit
🌿Good first issues
- Add YAML schema validation test: create unit tests in a test package that validate all YAML files in cfg/ directories against a JSON schema to catch malformed check definitions early (prevents shipping broken benchmarks)
- Extend missing EKS-specific checks: review cfg/ack-1.0/ and compare against latest AWS EKS security best practices documentation, then add missing check definitions for IAM roles, VPC security group configurations, and KMS encryption—currently incomplete relative to AWS-native security controls
- Add structured JSON output format: implement a new output formatter (--output json flag) that emits machine-parseable JSON with check ID, status (PASS/FAIL/WARN), remediation steps, and affected resources—needed for CI/CD integration and SIEM ingestion
⭐Top contributors
- @dependabot[bot] — 66 commits
- @LaibaBareera — 21 commits
- @eyalke — 4 commits
- @afdesk — 3 commits
- @andypitcher — 2 commits
📝Recent commits
- 87766d6 — ci: free disk space (#2094) (afdesk)
- ccb3d70 — release: prepare v0.15.4 (#2093) (afdesk)
- 9016d5a — ci: set up a new runner (#2092) (afdesk)
- 887760b — Kube Bench selects cis-1.10, 1.1, or 1.12 for K3s depending on its version (v0.10.0, v0.14, v0.15.0) instead of k3s-cis- (eyalke)
- c395e37 — bump kube-bench version to include vuln fixes (#2084) (eyalke)
- d09ba05 — vuln-fixes: (#2083) (eyalke)
- 938c34f — Fix vulns found with Trivy + add kubectl from the distribution package manager (#2078) (eyalke)
- 5c8a5a7 — build(deps): bump github.com/aws/aws-sdk-go-v2/config (#2072) (dependabot[bot])
- 275d683 — build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#2073) (dependabot[bot])
- e4f7b7d — Fix RKE2 CIS checks: 1.1.8, 2.3, 2.6, 4.2.1–4.2.12 (#2060) (LaibaBareera)
🔒Security observations
The kube-bench codebase shows moderate security maturity with several concerns. The most critical issue is the flagged Go version (1.26.2), which should be verified and corrected if invalid.
- High · Go version mismatch (unrecognized Go 1.26.2) — go.mod (line 3), Dockerfile (line 1). go.mod specifies Go 1.26.2, which was not a recognized release at analysis time (the latest stable series noted was 1.23.x). This indicates either a misconfiguration or an unreleased/invalid version, and could mean missing security patches or build failures. Fix: verify the version against the official Go release history and update both go.mod and the Dockerfile to a supported release.
- Medium · Incomplete Dockerfile (truncated build instructions) — Dockerfile (final RUN instruction). The Dockerfile appears truncated in the run stage: the final RUN command for installing openssl is cut off ('RUN apk update && apk upgrade && apk --n...'), which could produce an incomplete or broken image build and leave the container missing security updates. Fix: complete the truncated RUN instruction, ensure all security updates are applied before installing additional packages, and verify the complete Dockerfile syntax.
- Medium · Potential SQL injection risk (GORM database dependency) — go.mod (gorm.io/driver/postgres, gorm.io/gorm). The codebase includes GORM and its Postgres driver without visible input-validation patterns in the analyzed structure. If user input were concatenated directly into database queries rather than parameterized, SQL injection would be possible. Fix: review all query-construction code to ensure parameterized queries are used exclusively, prefer GORM's built-in query methods over raw SQL concatenation, and validate all user-provided data used in database operations.
- Medium · Alpine base image security considerations — Dockerfile (Alpine 3.23.3 base image). The final stage uses Alpine 3.23.3 with multiple separate RUN commands for security updates (apk upgrade), which bloats layers and may not guarantee all patches are applied with proper dependency resolution. Fix: consolidate the apk update/upgrade commands into a single RUN instruction, e.g. 'RUN apk update && apk upgrade && apk add --no-cache procps findutils openssl && rm -rf /var/cache/apk/*'.
- Low · Missing SBOM and supply-chain verification — .github/workflows/build.yml, .github/workflows/publish.yml. No evidence of Software Bill of Materials (SBOM) generation or dependency verification in the build pipeline, which limits supply-chain transparency and makes vulnerable transitive dependencies harder to identify. Fix: generate an SBOM in CI/CD using a tool like Syft or CycloneDX, verify that the configured Dependabot scanning is active, and document supply-chain security practices in CONTRIBUTING.md.
- Low · Insecure HTTP in documentation links — README.md (badge and reference links). Some badge and reference links may use HTTP instead of HTTPS, though the primary links appear HTTPS-secured. A minor but inconsistent practice. Fix: update any HTTP references to their HTTPS equivalents where available.
- Low · Missing container security best practices — Dockerfile (run stage). The Dockerfile does not set a non-root user, so the application runs as root inside the container, amplifying the impact of container escapes or application vulnerabilities. Fix: create a non-root user in a RUN instruction and switch to it with a separate USER instruction before the ENTRYPOINT, e.g. 'RUN addgroup -g 1000 kube-bench && adduser -D -u 1000 -G kube-bench kube-bench' followed by 'USER kube-bench'.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.