RepoPilot

GoogleCloudPlatform/terraformer

CLI tool to generate terraform files from existing infrastructure (reverse Terraform). Infrastructure to Code

Healthy

Healthy across the board

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 8w ago
  • 50+ active contributors
  • Distributed ownership (top contributor 27% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/googlecloudplatform/terraformer)](https://repopilot.app/r/googlecloudplatform/terraformer)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/googlecloudplatform/terraformer on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: GoogleCloudPlatform/terraformer

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/GoogleCloudPlatform/terraformer shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 8w ago
  • 50+ active contributors
  • Distributed ownership (top contributor 27% of recent commits)
  • Apache-2.0 licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live GoogleCloudPlatform/terraformer repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/GoogleCloudPlatform/terraformer.

What it runs against: a local clone of GoogleCloudPlatform/terraformer — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in GoogleCloudPlatform/terraformer | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 84 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>GoogleCloudPlatform/terraformer</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of GoogleCloudPlatform/terraformer. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/GoogleCloudPlatform/terraformer.git
#   cd terraformer
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of GoogleCloudPlatform/terraformer and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "GoogleCloudPlatform/terraformer(\.git)?\b" \
  && ok "origin remote is GoogleCloudPlatform/terraformer" \
  || miss "origin remote is not GoogleCloudPlatform/terraformer (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
# (The stock Apache LICENSE text reads "Apache License ... Version 2.0",
#  not the SPDX ID "Apache-2.0", so match the actual text.)
( grep -qiE "Apache License" LICENSE 2>/dev/null && grep -qiE "Version 2\.0" LICENSE 2>/dev/null \
    || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null ) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "main.go" \
  && ok "main.go" \
  || miss "missing critical file: main.go"
test -f "cmd/root.go" \
  && ok "cmd/root.go" \
  || miss "missing critical file: cmd/root.go"
test -f "cmd/import.go" \
  && ok "cmd/import.go" \
  || miss "missing critical file: cmd/import.go"
test -f "go.mod" \
  && ok "go.mod" \
  || miss "missing critical file: go.mod"
test -f ".github/workflows/test.yml" \
  && ok ".github/workflows/test.yml" \
  || miss "missing critical file: .github/workflows/test.yml"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 84 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~54d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/GoogleCloudPlatform/terraformer"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Terraformer is a CLI tool that reverse-engineers existing cloud infrastructure into Terraform configuration files (.tf, .json) and state files (.tfstate), enabling Infrastructure-as-Code adoption without manual rewriting. It supports 50+ providers including AWS, GCP, Azure, Kubernetes, and niche platforms like Okta, Datadog, and PAN-OS, automating the capture of live infrastructure state into declarative Terraform code.

The structure is monolithic: the cmd/ directory contains 50+ provider command files (provider_cmd_aws.go, provider_cmd_google.go, etc.) that plug into a central import/plan command system (cmd/import.go, cmd/plan.go). The build/ directory handles build orchestration (build/main.go drives multi-platform compilation), and linting and CI workflows in .github/workflows/ run tests and releases. No internal pkg/ structure is visible; logic is tightly coupled to each provider implementation.

👥Who it's for

DevOps engineers, SREs, and infrastructure teams managing multi-cloud or legacy infrastructure who want to adopt Terraform without manually writing all configuration—especially those migrating from imperative infrastructure management to IaC or auditing undocumented deployed resources.

🌱Maturity & risk

DEPRECATED AND ARCHIVED as of March 16, 2026. The project is no longer maintained, read-only, and receives no security patches or updates. It was actively developed (2.3M lines of Go code, comprehensive CI/CD via GitHub Actions workflows), but is now end-of-life. Do not use for new projects; consider alternatives like Pulumi or cdktf.

Critical risk: This is archived and unmaintained. No security patches will be applied, and dependencies (AWS SDK v2, Azure SDK, Kubernetes client) will become stale. High dependency surface area (50+ provider SDKs) increases vulnerability exposure. The codebase is large (2.3M Go LOC) with complex provider-specific logic across cmd/provider_cmd_*.go files, making forks difficult to maintain. Do not deploy to production without significant vetting and your own patch management.

Active areas of work

Nothing. The repository is archived and in read-only mode as of March 16, 2026. No active development, PRs, or maintenance. The last visible activity was likely release workflow runs and dependency updates before archival.

🚀Get running

Clone and build locally:

git clone https://github.com/GoogleCloudPlatform/terraformer.git
cd terraformer
go mod download
go build -o terraformer .
./terraformer help

Note: This is archived; use only for exploration or reference. For active projects, consider alternatives.

Daily commands: Build and invoke by provider:

go build -o terraformer .
./terraformer import google --resources=compute_instances --projects=my-gcp-project
./terraformer import aws --resources=ec2_instances --regions=us-east-1
./terraformer plan  # two-step preview workflow; see cmd/plan.go

Authentication is handled via the cloud provider SDK credential chains (GCP: GOOGLE_APPLICATION_CREDENTIALS, AWS: AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY, etc.). Output goes to the generated/ directory.

🗺️Map of the codebase

  • main.go — Entry point for the Terraformer CLI application; all contributors must understand how the root command is initialized.
  • cmd/root.go — Root command definition for the CLI; establishes the core command structure that all provider commands extend.
  • cmd/import.go — Core import logic that orchestrates the reverse-Terraform workflow; essential for understanding how infrastructure is discovered and imported.
  • go.mod — Go module definition with 50+ cloud provider SDKs; critical for understanding all external dependencies and Go version constraints.
  • .github/workflows/test.yml — CI/CD test pipeline; all contributors must ensure code passes these automated tests before submission.
  • providers — Directory containing all provider implementations; this is where 95% of the codebase logic lives for each cloud provider.

🛠️How to make changes

Add Support for a New Cloud Provider

  1. Create a new provider command file following the naming pattern cmd/provider_cmd_<provider>.go that registers the provider with the root command (cmd/provider_cmd_newprovider.go)
  2. Create a new provider subdirectory under providers/<provider>/ that implements the core provider logic (providers/newprovider)
  3. Implement resource importers that call the cloud provider SDK to discover existing infrastructure (providers/newprovider/resources.go)
  4. Add provider documentation in the docs/ directory following existing provider doc patterns (docs/newprovider.md)
  5. Import the new provider in the appropriate cmd file and register it with the Cobra command framework (cmd/root.go)
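The registration flow above can be sketched schematically. This is a stdlib-only sketch of the pattern, not terraformer's real API: the actual wiring goes through Cobra commands, and `Importer`, `registry`, and `register` below are illustrative names.

```go
package main

import (
	"fmt"
	"sort"
)

// Importer is a hypothetical stand-in for the per-provider import logic
// that each cmd/provider_cmd_<provider>.go file wires up.
type Importer func(resources []string) error

// registry mimics the central place where provider commands end up
// registered against the root command.
var registry = map[string]Importer{}

// register is called once per provider, analogous to adding a
// subcommand on the root command.
func register(name string, imp Importer) {
	registry[name] = imp
}

func main() {
	register("newprovider", func(resources []string) error {
		fmt.Printf("discovering %v via the provider SDK\n", resources)
		return nil
	})

	// List registered providers, as the CLI help output would.
	names := make([]string, 0, len(registry))
	for n := range registry {
		names = append(names, n)
	}
	sort.Strings(names)
	fmt.Println("providers:", names)

	// Dispatch, as `terraformer import newprovider --resources=...` would.
	if imp, ok := registry["newprovider"]; ok {
		_ = imp([]string{"instances"})
	}
}
```

The point of the pattern is that adding a provider touches only its own file plus one registration call, which is exactly why step 5 edits cmd/root.go.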

Add a New Resource Type to an Existing Provider

  1. Identify the provider's resource implementation directory (e.g., providers/aws/) (providers/aws)
  2. Create a new resource file (e.g., providers/aws/lambda.go) that implements the resource importer interface (providers/aws/newresource.go)
  3. Implement discovery logic that calls the cloud provider SDK to fetch all instances of the resource type (providers/aws/newresource.go)
  4. Register the new resource importer in the provider's resource list (typically in an init or factory function) (providers/aws/resources.go)
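The importer in steps 2–3 boils down to "call the SDK's list operation, map IDs to resources". A hedged sketch of that shape, with a mocked SDK client — `cloudLister`, `Resource`, and `discover` are illustrative names, not terraformer's real types:

```go
package main

import "fmt"

// cloudLister abstracts the SDK call a real importer would make
// (e.g. a paginated List* operation); the interface is illustrative.
type cloudLister interface {
	ListIDs(resourceType string) ([]string, error)
}

// Resource is a simplified stand-in for an imported resource.
type Resource struct {
	Type string
	ID   string
}

// discover turns SDK list results into importable resources — the core
// job of a providers/<provider>/<resource>.go file.
func discover(c cloudLister, resourceType string) ([]Resource, error) {
	ids, err := c.ListIDs(resourceType)
	if err != nil {
		return nil, err
	}
	out := make([]Resource, 0, len(ids))
	for _, id := range ids {
		out = append(out, Resource{Type: resourceType, ID: id})
	}
	return out, nil
}

// fakeSDK stands in for a real cloud client in examples and tests.
type fakeSDK struct{}

func (fakeSDK) ListIDs(string) ([]string, error) {
	return []string{"i-123", "i-456"}, nil
}

func main() {
	rs, _ := discover(fakeSDK{}, "aws_instance")
	fmt.Println(rs)
}
```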

Extend the CLI with New Command Flags or Subcommands

  1. Define the new flag or subcommand in cmd/root.go or the appropriate provider command file using Cobra's command builder (cmd/root.go)
  2. Add corresponding variable bindings using cmd.Flags().StringVar() or similar methods (cmd/import.go)
  3. Pass the new flag values to the provider importers through the command context or configuration struct (cmd/import.go)
  4. Update the relevant provider documentation to describe the new flag or subcommand usage (README.md)

Improve Test Coverage for a Provider

  1. Identify the provider's test file or create a new _test.go file in the provider directory (providers/aws/aws_test.go)
  2. Use Go testing conventions and mock cloud provider SDK responses to simulate resource discovery (providers/aws/aws_test.go)
  3. Ensure tests are registered with the CI/CD pipeline by following patterns in .github/workflows/test.yml (.github/workflows/test.yml)
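Go's testing convention for step 2 is a table-driven test over mocked inputs; a schematic of that shape (`parseFilter` is a hypothetical pure helper, not a function in this repo — in a real `_test.go` file the loop body would live inside a `func TestXxx(t *testing.T)`):

```go
package main

import "fmt"

// parseFilter is a hypothetical pure helper of the kind worth
// unit-testing in a providers/<provider>/*_test.go file.
func parseFilter(s string) (key, value string, ok bool) {
	for i := 0; i < len(s); i++ {
		if s[i] == '=' {
			return s[:i], s[i+1:], true
		}
	}
	return "", "", false
}

func main() {
	// Table-driven cases, as they would appear in a _test.go file.
	cases := []struct {
		in       string
		key, val string
		ok       bool
	}{
		{"type=ec2", "type", "ec2", true},
		{"malformed", "", "", false},
	}
	for _, c := range cases {
		k, v, ok := parseFilter(c.in)
		if k != c.key || v != c.val || ok != c.ok {
			panic(fmt.Sprintf("parseFilter(%q) = %q, %q, %v", c.in, k, v, ok))
		}
	}
	fmt.Println("all cases pass")
}
```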

🔧Why these technologies

  • Go 1.24+ — Cross-platform compilation for multi-cloud CLI, strong concurrency for parallel resource discovery, minimal binary size for distribution
  • Cobra framework — Declarative command structure for 50+ provider commands, built-in help/documentation generation, flag binding consistency
  • Cloud Provider SDKs (AWS, GCP, Azure, etc.) — Official APIs for resource discovery and metadata retrieval, automatic credential handling, version compatibility with provider updates
  • Terraform state file format (tfstate JSON) — Direct compatibility with Terraform state management, preserves resource ID mappings for future plan/apply operations
  • GitHub Actions CI/CD — Multi-platform testing (Linux, macOS, Windows), automated release distribution, linting enforcement for code quality
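For orientation, a heavily trimmed sketch of the tfstate v4 JSON shape the tool emits (the resource values are illustrative; a real state file also carries fields like `serial` and `lineage`):

```json
{
  "version": 4,
  "terraform_version": "1.5.0",
  "resources": [
    {
      "mode": "managed",
      "type": "aws_instance",
      "name": "web",
      "provider": "provider[\"registry.terraform.io/hashicorp/aws\"]",
      "instances": [
        { "attributes": { "id": "i-0abc123" } }
      ]
    }
  ]
}
```

Preserving the `attributes.id` mapping is what lets a later `terraform plan` recognize the imported resource instead of proposing to recreate it.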

⚖️Trade-offs already made

  • Provider SDKs embedded in single binary instead of plugins

    • Why: Simplifies distribution and eliminates plugin versioning issues
    • Consequence: Large binary size (~100MB+) and slower initial load; all provider dependencies compiled even if only one is used
  • Imperative resource discovery (listing all resources) vs. graph-based dependency traversal

    • Why: Simpler implementation and works with unstructured cloud resources
    • Consequence: Cannot automatically infer resource relationships; generated Terraform may require manual dependency adjustments
  • Generate .tf and .tfstate files separately instead of pure HCL modules

    • Why: Preserves exact state from infrastructure discovery without requiring Terraform validation
    • Consequence: Generated configs may not be immediately plan-safe; users must manually reconcile Terraform syntax or run terraform fmt
  • Single CLI binary supporting all 50+ providers vs. provider-specific CLI tools

    • Why: Unified UX and consistent command patterns across providers
    • Consequence: Tight coupling of providers makes adding a new provider require changes to root command registration

🚫Non-goals (don't propose these)

  • Real-time monitoring of infrastructure changes (one-time import tool, not a state sync daemon)
  • Handling of unmanaged or manually-created sub-resources (focuses on top-level resources discoverable via provider APIs)
  • Terraform plan validation or apply execution (generates configs only; Terraform CLI handles validation)
  • Multi-workspace or state file merging (each import target creates independent state)
  • Encryption or credential management (relies on cloud provider SDK auth chains)
  • Active maintenance (project deprecated as of March 16, 2026; archived and read-only)

🪤Traps & gotchas

The project is archived and unmaintained, so these traps will never be fixed upstream — but they still matter if you fork the code or run the tool as-is:

  • Provider SDKs require valid cloud credentials (GOOGLE_APPLICATION_CREDENTIALS, AWS_ACCESS_KEY_ID, Azure credentials, etc.) at runtime; missing creds silently fail to discover resources.
  • Each provider command (e.g., terraformer import aws) has different flag syntax and resource filtering logic; no unified interface. Refer to provider-specific docs.
  • Generated Terraform state may contain sensitive values (API keys, passwords) in plaintext; audit generated files before committing.
  • Resource filtering logic is provider-specific and incomplete for some providers; not all resource types may be discoverable even if the provider SDK supports them.
  • Builds are slow for the full provider matrix (50+ provider SDKs to compile); the go mod cache can bloat to 1GB+.
  • Tests likely incomplete; high risk of uncaught bugs in less-maintained providers (Okta, Logz.io, Opal, etc.). Verify generated Terraform with terraform validate and terraform plan against real infra.

🏗️Architecture

💡Concepts to learn

  • Infrastructure as Code (IaC) Reverse Engineering — Terraformer's core innovation: automating the inverse of Terraform apply (reading live cloud state and generating declarative config), enabling rapid IaC adoption without manual rewriting.
  • Provider Plugin Pattern (Multi-Cloud Abstraction) — Terraformer's architecture relies on implementing a common interface for each of 50+ cloud providers; understanding this pattern is critical for adding new providers or debugging provider-specific bugs.
  • State File Management (tfstate Serialization) — Terraformer generates both .tf (configuration) and .tfstate (state) files; understanding Terraform's state format, locking, and remote backends is essential for integrating generated code into real deployments.
  • Cloud Provider SDK Pagination and Resource Discovery — Each provider's cmd/provider_cmd_*.go must handle API pagination, filtering, and eventual consistency of cloud provider list operations; bugs here cause incomplete or incorrect resource imports.
  • HCL (HashiCorp Configuration Language) Code Generation — Terraformer must synthesize valid HCL from cloud API responses; understanding HCL syntax, interpolation, and edge cases (quotes, types, references) is critical for correct generated configs.
  • Cross-Platform Go Build Orchestration (multi-build) — build/multi-build/main.go handles compilation for Linux, macOS, Windows across architectures; understanding Go build flags and linker variables is needed for debugging binary generation or extending release workflows.
  • Resource Dependency Graph Inference — Generated Terraform configs must capture implicit resource dependencies (e.g., security group references in EC2 instances); Terraformer must infer these from cloud API responses and emit correct HCL references.
  • gruntwork-io/terragrunt — Complementary IaC tool; wraps Terraform to manage multi-environment deployments. Terraformer generates .tf files; Terragrunt orchestrates their execution and state management.
  • hashicorp/terraform-cdk — Alternative approach to IaC: programmatically define infrastructure in Python/TypeScript and synthesize Terraform, avoiding manual reverse-engineering or code generation complexity.
  • pulumi/pulumi — Modern IaC alternative supporting Python/Go/TypeScript; can import existing cloud resources via pulumi up without separate reverse-engineering tool.
  • GoogleCloudPlatform/config-connector — GCP-specific alternative; deploys infrastructure as Kubernetes CRDs, eliminating need for Terraformer for GCP-only setups.
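The HCL-generation concern above comes down to mapping API attribute maps to correctly quoted and typed HCL. A toy stdlib sketch of that type-aware emission — not terraformer's actual generator, and `toHCL` is an illustrative name:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// toHCL renders a flat attribute map as an HCL resource block, quoting
// strings and leaving numbers/bools bare — the kind of type-aware
// emission a generator must get right.
func toHCL(resType, name string, attrs map[string]any) string {
	var b strings.Builder
	fmt.Fprintf(&b, "resource %q %q {\n", resType, name)
	keys := make([]string, 0, len(attrs))
	for k := range attrs {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic output for stable diffs
	for _, k := range keys {
		switch v := attrs[k].(type) {
		case string:
			fmt.Fprintf(&b, "  %s = %q\n", k, v)
		default:
			fmt.Fprintf(&b, "  %s = %v\n", k, v)
		}
	}
	b.WriteString("}\n")
	return b.String()
}

func main() {
	fmt.Print(toHCL("aws_instance", "web", map[string]any{
		"ami":           "ami-123",
		"monitoring":    true,
		"instance_type": "t3.micro",
	}))
}
```

The real generator additionally has to handle nested blocks, lists, interpolation escaping, and cross-resource references, which is where most provider-specific bugs hide.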

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for provider command generation across all 40+ providers

The repo contains 40+ provider command files (cmd/provider_cmd_*.go) but the test workflow (test.yml) doesn't show provider-specific integration tests. Each provider has unique API patterns and edge cases. Adding structured integration tests would catch regressions when providers update their SDKs (many are pinned to old major versions, e.g. Azure SDK v63.4.0+incompatible, AWS SDK v1.36.3). This is critical since the repo is deprecated but still used by many teams.

  • [ ] Create tests/provider_integration_test.go with a test matrix covering 5-10 high-value providers (aws, google, azure, kubernetes, github)
  • [ ] Add provider-specific test fixtures in tests/fixtures/{provider}/ with mock API responses
  • [ ] Update .github/workflows/test.yml to run provider tests with timeouts for flaky cloud APIs
  • [ ] Document in CONTRIBUTING.md how to add tests for new providers in the test matrix

Refactor 40+ provider command files into a provider plugin architecture

The cmd/ directory has massive duplication across cmd/provider_cmd_*.go files (repetitive initialization, flag parsing, error handling). This violates DRY principles and makes maintenance harder. A plugin-based architecture would reduce code by 70%+ and make it easier for contributors to add new providers without copy-pasting. Given the repo is archived, this refactoring would demonstrate best practices for forks.

  • [ ] Create pkg/providers/plugin.go with a Provider interface defining common methods (Name(), Import(), Plan())
  • [ ] Move AWS provider logic from cmd/provider_cmd_aws.go into pkg/providers/aws/provider.go as a plugin
  • [ ] Create cmd/provider_registry.go that auto-discovers and registers providers from pkg/providers/
  • [ ] Update cmd/import.go and cmd/plan.go to use the plugin registry instead of switch statements
  • [ ] Document the plugin architecture in CONTRIBUTING.md with an example for adding a new provider
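The interface proposed in the first checklist item might look like this. It is hypothetical: the method set mirrors the checklist (`Name()`, `Import()`, `Plan()`), not any existing code in the repo:

```go
package main

import "fmt"

// Provider is the hypothetical common interface from the PR idea:
// every provider implements it instead of duplicating command wiring.
type Provider interface {
	Name() string
	Import(resources []string) error
	Plan(resources []string) error
}

// awsProvider is a stub showing the shape a migrated
// pkg/providers/aws/provider.go might take.
type awsProvider struct{}

func (awsProvider) Name() string { return "aws" }

func (awsProvider) Import(rs []string) error {
	fmt.Println("importing", rs)
	return nil
}

func (awsProvider) Plan(rs []string) error {
	fmt.Println("planning", rs)
	return nil
}

func main() {
	// A registry like this replaces per-provider switch statements
	// in cmd/import.go and cmd/plan.go.
	providers := map[string]Provider{}
	for _, p := range []Provider{awsProvider{}} {
		providers[p.Name()] = p
	}
	_ = providers["aws"].Import([]string{"ec2_instance"})
}
```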

Add comprehensive documentation for each provider's supported resources with examples

The docs/ directory has placeholder .md files (docs/alicloud.md, docs/aws.md, etc.) but they lack concrete examples of which resources are supported and terraform output samples. Users trying to import infrastructure don't know which resources will actually be converted. Each provider file has different support matrices.

  • [ ] Analyze cmd/provider_cmd_aws.go, cmd/provider_cmd_google.go, cmd/provider_cmd_azure.go to extract which resource types are supported
  • [ ] Update docs/aws.md, docs/google.md, docs/azure.md with a table of supported resources (e.g., EC2 instances, S3 buckets, IAM roles) with support status (✓ full, ~ partial, ✗ unsupported)
  • [ ] Add a 'Before & After' example section showing actual terraform output for each major provider
  • [ ] Create a docs/PROVIDER_STATUS.md master index documenting which providers are actively maintained vs deprecated

🌿Good first issues

  • Add resource filtering support to cmd/provider_cmd_heroku.go and cmd/provider_cmd_launchdarkly.go: both lack --filter flag for selective import (see AWS implementation pattern).
  • Write unit tests for cmd/provider_cmd_kubernetes.go and cmd/provider_cmd_openstack.go in *_test.go files; coverage for these providers is likely absent or minimal given the codebase age.
  • Document provider-specific credential setup in docs/: create docs/credentials-setup.md listing exact env var names, credential file paths, and IAM permission requirements for each of the 50 providers (currently scattered or missing).

📝Recent commits

  • 1807aff — Add archive note to README (slevenick)
  • 069060c — Merge pull request #2060 from slevenick/patch-1 (slevenick)
  • 2525d6d — Update feed test URI reference (slevenick)
  • 5fd6ae9 — build(deps): bump github.com/go-jose/go-jose/v3 from 3.0.3 to 3.0.4 (#1996) (dependabot[bot])
  • c4407e2 — build(deps): bump github.com/newrelic/newrelic-client-go (#2003) (dependabot[bot])
  • 18a2bb7 — build(deps): bump golang.org/x/net from 0.36.0 to 0.38.0 (#2014) (dependabot[bot])
  • 96937ae — build(deps): bump github.com/aws/aws-sdk-go-v2/service/ecs (#2023) (dependabot[bot])
  • be578d5 — build(deps): bump github.com/aws/aws-sdk-go-v2/service/identitystore (#2025) (dependabot[bot])
  • d4eb465 — build(deps): bump github.com/aws/aws-sdk-go-v2/service/emr (#2030) (dependabot[bot])
  • 4e131e7 — Fix ibm instance groups missing mutex (#2024) (gcheang)

🔒Security observations

Failed to generate security analysis.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
