RepoPilot

rebuy-de/aws-nuke

Nuke a whole AWS account and delete all its resources.

Healthy

Healthy across all four use cases.

  • Use as dependency: Healthy (weakest axis) — Permissive license, no critical CVEs, actively maintained — safe to depend on.
  • Fork & modify: Healthy — Has a license, tests, and CI — clean foundation to fork and modify.
  • Learn from: Healthy — Documented and popular — useful reference codebase to read through.
  • Deploy as-is: Healthy — No critical CVEs, sane security posture — runnable as-is.

  • 29+ active contributors
  • Distributed ownership (top contributor 37% of recent commits)
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Stale — last commit 2y ago

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste it into your README — the badge live-updates from the latest cached analysis.

```markdown
[![RepoPilot: Healthy](https://repopilot.app/api/badge/rebuy-de/aws-nuke)](https://repopilot.app/r/rebuy-de/aws-nuke)
```

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/rebuy-de/aws-nuke on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: rebuy-de/aws-nuke

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/rebuy-de/aws-nuke shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • 29+ active contributors
  • Distributed ownership (top contributor 37% of recent commits)
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Stale — last commit 2y ago

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live rebuy-de/aws-nuke repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/rebuy-de/aws-nuke.

What it runs against: a local clone of rebuy-de/aws-nuke — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in rebuy-de/aws-nuke | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 599 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>rebuy-de/aws-nuke</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of rebuy-de/aws-nuke. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/rebuy-de/aws-nuke.git
#   cd aws-nuke
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of rebuy-de/aws-nuke and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "rebuy-de/aws-nuke(\.git)?\b" \
  && ok "origin remote is rebuy-de/aws-nuke" \
  || miss "origin remote is not rebuy-de/aws-nuke (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"

# 4. Critical files exist
for f in main.go cmd/nuke.go pkg/config/config.go resources/cloudcontrol.go pkg/types/collection.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 599 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~569d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/rebuy-de/aws-nuke"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

aws-nuke is a Go CLI tool that comprehensively deletes all resources from an AWS account in a controlled manner. It scans an entire AWS account, enumerates every deletable resource across all regions and services, and removes them with built-in safeguards (dry-run mode, confirmation prompts, blocklists, account alias validation). The core capability is account-level resource cleanup for non-production environments or cost reduction. Monolithic single-binary structure: cmd/ contains CLI logic (nuke.go, scan.go, region.go, queue.go), pkg/config/ handles YAML configuration parsing and filtering, pkg/awsutil/ wraps AWS SDK session management, pkg/types/ defines resource collections, and resources/ contains 100+ individual resource handler files (accessanalyzer-analyzers.go pattern). The tool scans resources, queues deletions, and executes them with concurrency control via golang.org/x/sync.
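The scan → queue → delete flow with bounded concurrency can be sketched in stdlib Go. This is illustrative only — `resource` and `deleteResource` are hypothetical stand-ins, and the real tool uses golang.org/x/sync rather than hand-rolled channels:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Hypothetical stand-in for a scanned AWS resource; the real tool models
// these in pkg/types and resources/.
type resource struct{ id string }

func deleteResource(r resource) error {
	return nil // a real handler would call a service-specific AWS API here
}

// nukeAll drains a queue of resources with at most `workers` concurrent
// deletions and returns how many were removed.
func nukeAll(resources []resource, workers int) int {
	queue := make(chan resource)
	var wg sync.WaitGroup
	var deleted int64
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for r := range queue {
				if deleteResource(r) == nil {
					atomic.AddInt64(&deleted, 1)
				}
			}
		}()
	}
	for _, r := range resources {
		queue <- r
	}
	close(queue)
	wg.Wait()
	return int(deleted)
}

func main() {
	n := nukeAll([]resource{{"sg-1"}, {"i-2"}, {"vol-3"}}, 2)
	fmt.Println("deleted", n, "resources") // deleted 3 resources
}
```

The bounded worker pool is what keeps a full-account sweep from hammering AWS API rate limits.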

👥Who it's for

DevOps engineers and cloud administrators who need to completely decommission AWS accounts, clean up test environments, or reduce cloud costs by removing all resources. Specifically useful for organizations managing multiple AWS accounts with temporary environments that need guaranteed cleanup.

🌱Maturity & risk

DEPRECATED but was production-stable. Per the README deprecation notice, the repository is no longer actively maintained — no new features, PRs, or releases are accepted. The original codebase (778KB of Go) is well-structured, with CI/CD workflows, Makefile automation, and Docker support. Users are directed to the actively maintained fork at ekristen/aws-nuke.

HIGH RISK—this is deprecated software without active maintenance. The project requires extremely careful operation (it irrevocably deletes all AWS resources) and relies on go.mod pinned to aws-sdk-go v1.54.19, which may have unpatched security issues. The blocklist safety mechanism (required non-empty blocklist, 'prod' string check, account alias requirement) provides some protection but cannot prevent accidental production account deletion if safeguards are misconfigured. No ongoing security updates or bug fixes.

Active areas of work

None — the repository is frozen. The README explicitly states no issues are being addressed, PRs are not accepted, and no future releases are planned. The ekristen/aws-nuke fork is the active continuation.

🚀Get running

```bash
git clone https://github.com/rebuy-de/aws-nuke.git
cd aws-nuke
make build
./aws-nuke --help
```

The Makefile provides build targets. Ensure Go 1.21+ is installed (per go.mod). Create a config.yml using config/example.yaml as a template, with your blocklist and account filters.

Daily commands:

```bash
make build
./aws-nuke --config config.yml              # dry run (default)
./aws-nuke --config config.yml --no-dry-run # actually delete (requires two account-alias confirmations)
./aws-nuke --config config.yml scan         # scan and list resources
```

See Makefile for targets like make test, make fmt. Docker available: docker build -t aws-nuke .

🗺️Map of the codebase

  • main.go — Entry point for the aws-nuke CLI application; all execution flows start here
  • cmd/nuke.go — Core nuke command implementation that orchestrates resource scanning, filtering, and deletion
  • pkg/config/config.go — Configuration parsing and validation; defines which resources can be deleted and filtering rules
  • resources/cloudcontrol.go — Generic CloudControl API abstraction for nuking resources; handles 200+ AWS service resources
  • pkg/types/collection.go — Core data structure representing AWS resources and their dependency relationships
  • pkg/awsutil/session.go — AWS SDK session management and account authentication; foundation for all AWS API calls
  • cmd/queue.go — Deletion queue and concurrency management; ensures safe parallel deletion with retry logic

🧩Components & responsibilities

  • CLI/cmd layer (Cobra, Logrus) — Parses user inputs, orchestrates workflow (scan → filter → confirm → delete), handles user interaction
    • Failure mode: Invalid flag parsing, user cancellation, or unrecoverable AWS auth failures halt execution
  • Config layer (pkg/config) (YAML, glob pattern matching) — Loads YAML, validates syntax, applies include/exclude patterns, manages per-account and per-region settings
    • Failure mode: Malformed YAML or invalid patterns reject the config; missing account ID halts execution
  • Resource layer (resources/) — Translates AWS API responses into typed resource objects that implement the shared deletion interface; each handler file covers one resource type

🛠️How to make changes

Add support for a new AWS resource type

  1. Create a new resource file in resources/ directory following the naming convention (service-resourcetype.go) and implement the Resource interface with Delete(), Properties(), and Filter() methods (resources/example-resource.go)
  2. Register the resource handler in the resource collection initialization code (typically in cmd/nuke.go or a resources registry) (cmd/nuke.go)
  3. Add deletion and filtering tests for the new resource type matching the pattern in resources/cloudformation-stack_test.go (resources/example-resource_test.go)
  4. Update the example config file to demonstrate the new resource in the config/example.yaml with its service name and supported filters (config/example.yaml)
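The handler shape described in step 1 can be sketched as follows. The interface below is an assumption based on the method names mentioned above (Delete, Properties, Filter) — the actual interface in the resources/ package may differ in signatures and naming:

```go
package main

import (
	"errors"
	"fmt"
)

// Assumed shape of the handler contract; names are illustrative, not the
// tool's actual API.
type Resource interface {
	Delete() error
	Properties() map[string]string
	Filter() error // a non-nil error excludes the resource from deletion
}

// Hypothetical handler for an imaginary "Example" resource, following the
// service-resourcetype.go file naming convention.
type ExampleResource struct {
	Name      string
	Protected bool
}

func (r *ExampleResource) Delete() error {
	// A real handler would call the service's delete API via the AWS SDK.
	return nil
}

func (r *ExampleResource) Properties() map[string]string {
	return map[string]string{"Name": r.Name}
}

func (r *ExampleResource) Filter() error {
	if r.Protected {
		return errors.New("protected resource")
	}
	return nil
}

func main() {
	var r Resource = &ExampleResource{Name: "example-1"}
	fmt.Println(r.Properties()["Name"], r.Filter()) // example-1 <nil>
}
```

Filter() is the hook that lets a handler exclude resources the tool should never touch, independent of user config.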

Add a new filtering capability

  1. Define the new filter type and logic in pkg/config/filter.go, extending the Filter struct and matching rules (pkg/config/filter.go)
  2. Update the property extraction logic in pkg/types/properties.go to handle the new filter field extraction (pkg/types/properties.go)
  3. Add test cases in pkg/config/filter_test.go demonstrating the new filter pattern (pkg/config/filter_test.go)
  4. Document the new filter option in config/example.yaml with example usage (config/example.yaml)
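For glob-style matching, the stdlib `path.Match` illustrates the semantics involved. This is a sketch, not the code in pkg/config/filter.go, which supports additional filter types (exact match, regex, and more):

```go
package main

import (
	"fmt"
	"path"
)

// matchesGlob reports whether a resource name matches a glob pattern.
// path.Match treats '*', '?', and '[' specially in the pattern; the name
// side is matched literally, so a '[' appearing in a resource name must
// be escaped in the pattern to match it.
func matchesGlob(pattern, name string) bool {
	ok, err := path.Match(pattern, name)
	return err == nil && ok
}

func main() {
	fmt.Println(matchesGlob("test-*", "test-bucket"))    // true
	fmt.Println(matchesGlob("test-*", "prod-bucket"))    // false
	fmt.Println(matchesGlob("data-\\[1\\]", "data-[1]")) // true
}
```

The third case is the gotcha mentioned in the traps section: metacharacters in resource names need careful handling on the pattern side.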

Implement a service-specific resource handler

  1. Create a new file in resources/ (e.g., resources/service-resourcetype.go) implementing the Resource interface with custom Delete() logic that calls service-specific API methods (resources/batch-computeenvironments.go)
  2. Import the AWS SDK client for that service and use it directly instead of CloudControl API if needed (resources/apigateway-restapis.go)
  3. Handle service-specific waiters or state transitions in the Delete() method (see resources/appstream-fleetstates.go for state-based example) (resources/appstream-fleetstates.go)
  4. Add unit tests that mock AWS API calls using golang/mock (see resources/cloudformation-stack_test.go) (resources/cloudformation-stack_test.go)
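The registration step in these recipes follows a registry pattern. A minimal sketch, with illustrative names (`Lister`, `register`) that are assumptions rather than the tool's real API:

```go
package main

import (
	"fmt"
	"sort"
)

// Each handler file registers a lister under its resource-type name at
// init time; the scanner then iterates the registry.
type Lister func() []string

var registry = map[string]Lister{}

func register(resourceType string, l Lister) {
	registry[resourceType] = l
}

func init() {
	// In the real codebase, each file in resources/ performs its own
	// registration, so adding a file is enough to wire in a new type.
	register("ExampleService::Thing", func() []string {
		return []string{"thing-1", "thing-2"}
	})
}

func main() {
	types := make([]string, 0, len(registry))
	for t := range registry {
		types = append(types, t)
	}
	sort.Strings(types)
	for _, t := range types {
		fmt.Println(t, registry[t]())
	}
}
```

The pattern keeps the core scanner oblivious to individual services, which is why the resources/ directory can grow to hundreds of files without touching cmd/.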

🔧Why these technologies

  • AWS SDK v1 (github.com/aws/aws-sdk-go) — Provides comprehensive AWS API bindings; chosen for breadth of service coverage needed to nuke all resource types
  • Cobra (github.com/spf13/cobra) — Industry-standard Go CLI framework; enables flags, subcommands, and help text generation
  • YAML configuration (gopkg.in/yaml.v3) — Human-readable format for specifying accounts, regions, and per-resource filtering rules
  • golang/mock — Enables unit testing of AWS API calls without hitting real accounts; critical for safety testing
  • Logrus (github.com/sirupsen/logrus) — Structured logging with field support; critical for audit trails during destructive operations

⚖️Trade-offs already made

  • Synchronous CLI instead of async background service

    • Why: Matches use case of one-time or infrequent account cleanup; simpler deployment model
    • Consequence: Long-running operations block the terminal; not suitable for real-time resource deletion APIs
  • Generic CloudControl API abstraction for 200+ resources

    • Why: Reduces code duplication and maintenance burden for similar delete patterns across services
    • Consequence: Some services require custom handlers for complex deletion workflows (multi-step, state-based); cannot express service-specific nuances in generic layer
  • Configuration-based filtering (YAML) rather than CLI flags

    • Why: Supports complex, reusable deletion policies and audit trails; avoids error-prone flag ordering
    • Consequence: Higher learning curve; requires config file management alongside binaries
  • Topological sort + retry queue instead of eventual consistency polling

    • Why: Deterministic ordering respects explicit dependencies; faster feedback on failures
    • Consequence: Does not account for implicit AWS dependencies (e.g., security group rules can block instance termination); requires careful dependency declarations
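The retry-queue idea behind that last trade-off can be sketched as a loop that re-queues failed deletions until a full pass makes no progress. Purely illustrative; the real queue in cmd/queue.go also handles concurrency and per-resource state:

```go
package main

import "fmt"

// drain retries failed deletions (for example, ones blocked by a dependent
// resource that still exists) until either everything is gone or a full
// pass deletes nothing, which signals an implicit dependency cycle or a
// hard failure.
func drain(items []string, tryDelete func(string) bool) (deleted, stuck []string) {
	pending := items
	for len(pending) > 0 {
		var next []string
		progress := false
		for _, it := range pending {
			if tryDelete(it) {
				deleted = append(deleted, it)
				progress = true
			} else {
				next = append(next, it)
			}
		}
		if !progress {
			return deleted, next
		}
		pending = next
	}
	return deleted, nil
}

func main() {
	gone := map[string]bool{}
	// Hypothetical dependency: the security group cannot go until the
	// instance using it is gone.
	deps := map[string]string{"instance": "", "sg": "instance"}
	try := func(it string) bool {
		if d := deps[it]; d != "" && !gone[d] {
			return false
		}
		gone[it] = true
		return true
	}
	deleted, stuck := drain([]string{"sg", "instance"}, try)
	fmt.Println(deleted, stuck) // [instance sg] []
}
```

Retrying until no progress is what lets the tool survive undeclared dependencies, at the cost of hanging on genuine cycles.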

🚫Non-goals (don't propose these)

  • Does not support partial rollback or undo operations; all deletions are permanent
  • Does not provide a web UI or API; CLI-only interface intended for manual or scripted execution
  • Does not support cross-account deletion; operates on a single AWS account per invocation
  • Does not perform cost analysis or resource cleanup optimization; only filters and deletes based on config rules
  • Not actively maintained (deprecated in favor of ekristen/aws-nuke fork)

🪤Traps & gotchas

CRITICAL:

  1. The account alias is mandatory — aws-nuke refuses to run without one (hardcoded safety check in pkg/awsutil).
  2. The config blocklist must be non-empty (it lists accounts that must never be nuked); the tool aborts if the blocklist is empty or if the target account appears in it.
  3. The account alias must NOT contain the substring 'prod' (case-sensitive check).
  4. --no-dry-run requires two interactive console confirmations — non-interactive runs will hang.
  5. Some AWS resources have undocumented deletion dependencies; if resource A depends on B, deleting A first causes hangs.
  6. Region filtering in config.yml applies across all resources; there are no per-resource region overrides.
  7. Resource filtering uses glob patterns (pkg/config/filter.go) — shell metacharacters in resource names may cause unexpected matches.
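The safety gates above can be sketched in Go. This is a hedged reconstruction from the description — the real checks live in pkg/awsutil and the config validation, and may differ in detail:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// checkSafety mirrors the described gates: an alias must exist, it must
// not contain "prod" (case-sensitive, matching the described check), and
// the blocklist must be non-empty. Illustrative only.
func checkSafety(accountAlias string, blocklist []string) error {
	if accountAlias == "" {
		return errors.New("account has no alias — aws-nuke refuses to run")
	}
	if strings.Contains(accountAlias, "prod") {
		return errors.New("account alias contains 'prod' — refusing to run")
	}
	if len(blocklist) == 0 {
		return errors.New("config blocklist must not be empty")
	}
	return nil
}

func main() {
	fmt.Println(checkSafety("staging-1234", []string{"999999999999"})) // <nil>
	fmt.Println(checkSafety("my-prod-acct", []string{"999999999999"}))
}
```

Note that a case-sensitive check means an alias like "PROD-acct" would slip past this gate — one reason the confirmation prompts exist as a second layer.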

🏗️Architecture

💡Concepts to learn

  • AWS Account Alias & Confirmation Gates — Understanding why aws-nuke requires account aliases, blocklists, and 2x confirmation prompts is essential to avoiding accidental production deletion—this is the primary blast-radius limiter
  • Resource Dependency Ordering & Deletion Queues — aws-nuke uses a queue-based deletion model (cmd/queue.go) where resources with dependencies must be deleted in correct order; understanding this prevents hung deletions and orphaned resources
  • Glob Pattern Filtering for Resource Names — Configuration filters (pkg/config/filter.go) use glob patterns to match resources; misunderstanding glob metacharacters or escape sequences can lead to over-deletion
  • AWS Region Enumeration & Concurrency — aws-nuke scans all enabled regions in parallel and must handle region-specific API endpoints and service availability; cmd/region.go and cmd/queue.go implement this concurrent scanning
  • Handler Registry Pattern for AWS Services — Each AWS service has individual resource handler files (resources/serviceXXX-resourcetype.go) that self-register; understanding this pattern is required to add new resource types
  • Dry-Run Semantics & Non-Destructive Scanning — aws-nuke defaults to dry-run (--no-dry-run flag inverts); understanding the difference between scan vs. delete modes and when confirmation is skipped is critical for safe operation
Related repos

  • ekristen/aws-nuke — Actively maintained fork of this project — the recommended replacement, with ongoing development and security updates
  • gruntwork-io/cloud-nuke — Gruntwork's alternative AWS resource-cleanup CLI, also written in Go; focuses on destroying resources left behind by infrastructure-as-code workflows
  • aws/aws-cli — Official AWS CLI; aws-nuke does not invoke it (it calls aws-sdk-go directly), but it is useful for exploring the same AWS APIs this tool drives
  • aws/aws-sdk-go — The underlying AWS SDK dependency (v1) that aws-nuke wraps; understanding SDK patterns is required for adding new resource handlers
  • localstack/localstack — Local AWS mock service useful for testing aws-nuke configuration and filters without hitting real AWS accounts
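The dry-run semantics described under Concepts can be sketched as a pure planning function. The flag name matches the CLI; the wiring is illustrative, since the real tool threads this through Cobra:

```go
package main

import "fmt"

// plan renders what a run would do. Deletion is opt-in via --no-dry-run,
// so the default invocation is always non-destructive.
func plan(noDryRun bool, resources []string) []string {
	out := make([]string, 0, len(resources))
	for _, r := range resources {
		if noDryRun {
			out = append(out, "removed "+r)
		} else {
			out = append(out, "would remove "+r)
		}
	}
	return out
}

func main() {
	for _, line := range plan(false, []string{"sg-1", "i-2"}) {
		fmt.Println(line)
	}
	// would remove sg-1
	// would remove i-2
}
```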

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for pkg/config/filter.go filtering logic

The pkg/config/filter.go file exists, but only a single filter_test.go accompanies it. Given that aws-nuke is a destructive tool that nukes AWS resources, the filter logic is critical for preventing accidental deletion of protected resources. The filtering mechanism needs robust test coverage for edge cases like regex patterns, property matching, and exclusion rules. This is high-value because misconfigured filters could lead to data loss.

  • [ ] Examine pkg/config/filter_test.go to identify missing test cases
  • [ ] Add tests for complex filter scenarios (nested properties, multiple conditions, regex edge cases)
  • [ ] Add property matching edge case tests from pkg/types/properties.go integration
  • [ ] Test filter behavior with real config examples from pkg/config/test-fixtures/
  • [ ] Add tests for deprecated filter key handling mentioned in test-fixtures/deprecated-keys-config.yaml

Migrate resource definitions to use CloudControl API where available

The dev/list-cloudcontrol/main.go file indicates CloudControl API investigation. Currently, 40+ individual resource files (resources/*-*.go) each implement custom deletion logic via AWS SDK v1. The CloudControl API provides a generic way to manage many AWS resources. Converting even 10-15 high-volume resources (such as IAM, EC2, RDS) to CloudControl would reduce code duplication, improve maintainability, and keep aws-nuke compatible as AWS evolves its resource APIs.

  • [ ] Expand dev/list-cloudcontrol/main.go to identify which resources support CloudControl API
  • [ ] Create a generic CloudControl resource handler in resources/ or pkg/
  • [ ] Migrate 3-5 high-impact resources (e.g., resources/iam-*.go, resources/ec2-*.go) to use CloudControl
  • [ ] Add feature flag to allow fallback to SDK v1 implementations during transition
  • [ ] Document CloudControl migration pattern for future contributors

Add integration tests with mock AWS API responses for critical deletion paths

The repository has cmd/*_test.go and pkg/*_test.go files, but given the destructive nature of aws-nuke, there are likely no integration tests simulating actual AWS API responses for the deletion workflow. The cmd/nuke.go, cmd/scan.go, and cmd/queue.go files implement the core deletion orchestration but lack integration-level testing. Adding structured mock responses would catch regressions in deletion sequencing and error handling.

  • [ ] Create tests/integration/ directory with mock AWS responses for common resource types
  • [ ] Use golang/mock (already in go.mod) to generate mocks for AWS SDK calls in cmd/nuke.go deletion flow
  • [ ] Add integration tests for cmd/queue.go task ordering (especially dependency resolution between resources)
  • [ ] Test error handling and rollback scenarios in cmd/nuke.go when deletion fails mid-operation
  • [ ] Document how to add new resource type mocks for contributors testing new resources

🌿Good first issues

  • Extend test coverage in cmd/ — currently only util.go has tests (cmd/util_test.go); cmd/params.go, cmd/log.go, and cmd/version.go lack test files.
  • Implement missing resource handlers for common AWS services—the resources/ folder likely has gaps; audit which services are NOT covered and add handlers (e.g., check if AppStream, Workspaces, Glue Connections are missing).
  • Enhance pkg/config/filter_test.go to cover glob pattern edge cases—test nested wildcards, escaped characters, and verify filter.go regex behavior matches documented behavior in README.


📝Recent commits

  • 506e8ca — Update .editorconfig (terraform-cluster-config[bot])
  • 979d2e1 — Add prp-team to CODEOWNERS (terraform-cluster-config[bot])
  • 7e7433c — Disable dependabot (#1267) (der-eismann)
  • 10a0b49 — add deprecation notice (#1260) (svenwltr)
  • 517cd1b — Update .editorconfig (terraform-cluster-config[bot])
  • da05ea3 — Update .editorconfig (terraform-cluster-config[bot])
  • bc5b576 — Add .editorconfig (terraform-cluster-config[bot])
  • c24e393 — Bump github.com/aws/aws-sdk-go from 1.54.7 to 1.54.19 in the golang group across 1 directory (#1246) (dependabot[bot])
  • 18db9cc — Bump the golang group across 1 directory with 2 updates (#1238) (dependabot[bot])
  • 4b8ea2e — Bump github.com/aws/aws-sdk-go from 1.53.10 to 1.53.15 in the golang group (#1229) (dependabot[bot])

🔒Security observations

  • High · Deprecated and Unmaintained Repository — README.md. The repository is explicitly marked as no longer actively maintained. The README contains a deprecation notice recommending users to switch to an actively maintained fork at https://github.com/ekristen/aws-nuke. This means security patches, bug fixes, and updates will not be provided. Fix: Migrate to the actively maintained fork at https://github.com/ekristen/aws-nuke. Ensure all users and deployments are using the maintained version instead of this deprecated repository.
  • High · Outdated Go Version — go.mod, Dockerfile. The project uses Go 1.21, which is now outdated. Go 1.21 is no longer receiving security updates. The latest stable versions include security patches that may be critical for AWS credential handling and authentication. Fix: Update to the latest stable Go version (1.22 or later). Test thoroughly with AWS SDK operations to ensure compatibility.
  • Medium · Outdated AWS SDK — go.mod. The project uses aws-sdk-go v1.54.19, which is the older AWS SDK version. AWS SDK v2 is the current standard and receives more frequent security updates. The v1 SDK has a longer lag time for security patches. Fix: Consider migrating to AWS SDK v2 (github.com/aws/aws-sdk-go-v2) which has better support and more frequent security updates.
  • Medium · No Visible Input Validation for Configuration — pkg/config/config.go, config.yml handling. The codebase parses configuration files (config.yml, config/example.yaml) with gopkg.in/yaml.v3, but no input validation is visible; a malicious or malformed YAML configuration could cause resource exhaustion or unexpected deletion behavior. Fix: Implement strict validation of configuration files, including schema validation, type checking, and bounds verification. Consider using a YAML schema validator.
  • Medium · Docker Image Base Uses Latest Tag — Dockerfile. The Dockerfile uses 'alpine:latest' as the final base image, which can introduce unexpected changes and security issues if the base image is updated with vulnerabilities. Fix: Pin the Alpine base image to a specific version (e.g., 'alpine:3.19') instead of 'latest' to ensure reproducible builds and control over base image security patches.
  • Medium · No Resource Limits in Docker — Dockerfile. The Dockerfile does not specify resource limits (memory, CPU) in the multi-stage build or runtime configuration, which could lead to resource exhaustion during the build process or when running the tool. Fix: Implement resource limits in the container configuration or Docker Compose file to prevent resource exhaustion attacks.
  • Low · Non-root User Without Full Privilege Dropping — Dockerfile (RUN adduser -D aws-nuke, USER aws-nuke). While the Dockerfile creates a non-root user 'aws-nuke', the tool requires AWS credential access which may necessitate elevated privileges in certain scenarios. Verify that the non-root user setup is sufficient for all use cases. Fix: Document privilege requirements clearly. Ensure that AWS credential files and volume mounts have proper permissions for the aws-nuke user.
  • Low · No SBOM or Dependency Scanning — .github/workflows/. While the go.mod specifies dependencies, there is no visible evidence of automated dependency scanning, SBOM generation, or vulnerability scanning in the CI/CD pipeline for known vulnerabilities. Fix: Integrate dependency scanning tools like 'go mod audit', Snyk, or Dependabot in the CI/CD pipeline. Generate and maintain an SBOM for releases.
  • Low · Potential Information Disclosure via Logging — cmd/log.go, pkg/awsutil/. The tool uses sirupsen/logrus for logging. AWS resource deletion is a sensitive operation that may expose resource names, ARNs, or account information in logs. Fix: Ensure that sensitive information (ARNs, resource IDs, account identifiers) is sanitized from logs. Implement log level controls and consider redacting sensitive data.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
