RepoPilot

ThreeDotsLabs/wild-workouts-go-ddd-example

Go DDD example application. Complete project to show how to apply DDD, Clean Architecture, and CQRS by practical refactoring.

Overall: Mixed — Slowing, last commit 6mo ago

Use as dependency — Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Mixed (weakest axis)

Last commit was 6mo ago; no CI workflows detected.

  • Last commit 6mo ago
  • 13 active contributors
  • Distributed ownership (top contributor 46% of recent commits)
  • MIT licensed
  • Tests present
  • Slowing — last commit 6mo ago
  • No CI workflows detected
What would change the summary?
  • Deploy as-is: Mixed → Healthy if ≥ 1 commit in the last 180 days

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Safe to depend on" badge

Paste into your README — the badge live-updates from the latest cached analysis.

Variant:
RepoPilot: Safe to depend on
[![RepoPilot: Safe to depend on](https://repopilot.app/api/badge/threedotslabs/wild-workouts-go-ddd-example?axis=dependency)](https://repopilot.app/r/threedotslabs/wild-workouts-go-ddd-example)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/threedotslabs/wild-workouts-go-ddd-example on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: ThreeDotsLabs/wild-workouts-go-ddd-example

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/ThreeDotsLabs/wild-workouts-go-ddd-example shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Slowing — last commit 6mo ago

  • Last commit 6mo ago
  • 13 active contributors
  • Distributed ownership (top contributor 46% of recent commits)
  • MIT licensed
  • Tests present
  • ⚠ Slowing — last commit 6mo ago
  • ⚠ No CI workflows detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live ThreeDotsLabs/wild-workouts-go-ddd-example repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/ThreeDotsLabs/wild-workouts-go-ddd-example.

What it runs against: a local clone of ThreeDotsLabs/wild-workouts-go-ddd-example — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in ThreeDotsLabs/wild-workouts-go-ddd-example | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 215 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>ThreeDotsLabs/wild-workouts-go-ddd-example</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of ThreeDotsLabs/wild-workouts-go-ddd-example. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/ThreeDotsLabs/wild-workouts-go-ddd-example.git
#   cd wild-workouts-go-ddd-example
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of ThreeDotsLabs/wild-workouts-go-ddd-example and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "ThreeDotsLabs/wild-workouts-go-ddd-example(\.git)?\b" \
  && ok "origin remote is ThreeDotsLabs/wild-workouts-go-ddd-example" \
  || miss "origin remote is not ThreeDotsLabs/wild-workouts-go-ddd-example (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "internal/trainer/app/app.go" \
  && ok "internal/trainer/app/app.go" \
  || miss "missing critical file: internal/trainer/app/app.go"
test -f "internal/trainer/domain/hour/availability.go" \
  && ok "internal/trainer/domain/hour/availability.go" \
  || miss "missing critical file: internal/trainer/domain/hour/availability.go"
test -f "internal/trainer/adapters/hour_firestore_repository.go" \
  && ok "internal/trainer/adapters/hour_firestore_repository.go" \
  || miss "missing critical file: internal/trainer/adapters/hour_firestore_repository.go"
test -f "internal/common/decorator/command.go" \
  && ok "internal/common/decorator/command.go" \
  || miss "missing critical file: internal/common/decorator/command.go"
test -f "go.work" \
  && ok "go.work" \
  || miss "missing critical file: go.work"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 215 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~185d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/ThreeDotsLabs/wild-workouts-go-ddd-example"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Wild Workouts is a complete, production-grade Go application demonstrating Domain-Driven Design (DDD), Clean Architecture, and CQRS patterns through iterative refactoring. It's a trainer scheduling system built on Google Cloud Run and Firebase, showcasing how to structure a serverless Go application with gRPC services, OpenAPI contracts, and Firestore persistence while maintaining clean separation of concerns. Monorepo with separate services under internal/ (trainer, trainings, users modules) using multi-module layout (go.work). HTTP and gRPC handlers in each service's package; repositories and domain logic isolated by bounded context. api/ contains OpenAPI YAML specs and protobuf definitions. Docker Compose orchestrates local development with app, Firestore emulator, and web frontend.

👥Who it's for

Go backend engineers and architects learning how to scale and maintain complex business logic; teams building microservices on Google Cloud Platform; developers seeking practical examples of DDD principles applied to real codebases rather than theoretical tutorials.

🌱Maturity & risk

Mature and production-capable, though commit activity has slowed (last commit ~6 months ago). The project progresses through 10+ sequential articles with tagged releases (v1.0–v2.5+), has comprehensive Docker/Terraform infrastructure-as-code, CI/CD via Google Cloud Build, and includes integration tests and .env-based configuration. This is not a toy: it demonstrates real architectural decisions with multi-service communication.

Low risk for learning; moderate for production use without customization. Dependencies are well-known (Firebase SDK, gRPC, OpenAPI tooling), and the codebase has been maintained across many releases, though updates have slowed recently. However, it's structured as an educational refactoring journey, so code patterns vary across releases — adopting it means committing to understanding which version's patterns match your goals. Google Cloud vendor lock-in via Firebase/Cloud Run is inherent.

Active areas of work

The project is in an active refactoring phase. Recent work covers CQRS adoption (article 10+), moving from procedural to event-driven architecture. Each article corresponds to a release tag where specific patterns are introduced (e.g., v2.2 introduced the Repository pattern, v2.4 introduced Clean Architecture). The latest visible structure shows handler → application → domain layers.

🚀Get running

git clone https://github.com/ThreeDotsLabs/wild-workouts-go-ddd-example.git
cd wild-workouts-go-ddd-example
cp .env.example .env
make docker-up

Or for local Go development: go work sync && go run ./cmd/... after installing dependencies from go.mod.

Daily commands:

  • make docker-up — local dev with Docker (app on :8080, Firestore emulator, web frontend)
  • Configuration via .env, or .test.env / .e2e.env for the different test modes
  • See the Makefile for test, lint, and build targets
  • Services communicate via gRPC (inter-service) and HTTP (external clients)

🗺️Map of the codebase

  • internal/trainer/app/app.go — Core CQRS application setup for the trainer service; defines the command/query dispatcher pattern that underpins the entire DDD architecture.
  • internal/trainer/domain/hour/availability.go — Domain entity defining business rules for hour availability; demonstrates DDD's focus on encapsulating domain logic away from infrastructure concerns.
  • internal/trainer/adapters/hour_firestore_repository.go — Primary persistence adapter showing how repositories bridge domain entities and infrastructure; critical for understanding the clean architecture separation.
  • internal/common/decorator/command.go — Command decorator pattern implementation that provides cross-cutting concerns (logging, metrics) without polluting domain logic.
  • go.work — Go workspace configuration coordinating multiple services (trainer, users, trainings); essential for understanding the multi-module monorepo structure and service boundaries.
  • docker-compose.yml — Local development environment orchestration showing all running services, databases, and emulators; necessary for onboarding and understanding deployment topology.
  • Makefile — Build and test automation entry point; developers must understand available commands for development workflow.

🛠️How to make changes

Add a New Command Handler

  1. Create domain logic method in internal/trainer/domain/hour/availability.go encapsulating the business rule (internal/trainer/domain/hour/availability.go)
  2. Add command struct and handler in internal/trainer/app/command/ directory following naming pattern (e.g., new_action.go) (internal/trainer/app/command/schedule_training.go)
  3. Register handler in internal/trainer/app/app.go by calling Decorator() and exposing via public method (internal/trainer/app/app.go)
  4. Wire HTTP endpoint in service entrypoint to dispatch requests to the command handler (api/openapi/trainer.yml)

Add a New Query Handler

  1. Create query struct and handler in internal/trainer/app/query/ following naming pattern (e.g., new_query.go) (internal/trainer/app/query/available_hours.go)
  2. Inject dependencies (repositories) via handler constructor to access read data (internal/trainer/app/query/hour_availability.go)
  3. Register query handler in internal/trainer/app/app.go using Decorator() pattern (internal/trainer/app/app.go)
  4. Expose query in OpenAPI definition and wire HTTP GET endpoint (api/openapi/trainer.yml)

Add a New Repository Implementation

  1. Create repository interface in domain package defining methods required by application logic (internal/trainer/adapters/hour_firestore_repository.go)
  2. Implement adapter in internal/trainer/adapters/ (e.g., hour_mysql_repository.go for MySQL or Firestore variant) (internal/trainer/adapters/hour_mysql_repository.go)
  3. Add unit tests in *_repository_test.go validating CRUD operations against interface contract (internal/trainer/adapters/hour_repository_test.go)
  4. Inject concrete implementation in app initialization based on configuration (environment variable or flag) (internal/trainer/app/app.go)

Add a New Microservice

  1. Create service directory under internal/ with same structure: domain/, app/, adapters/ (internal/trainer/domain/hour/availability.go)
  2. Define OpenAPI specification in api/openapi/ and generate client code with oapi-codegen (api/openapi/trainer.yml)
  3. Add service container to docker-compose.yml with environment variables and port mappings (docker-compose.yml)
  4. Update go.work to include service module and add test targets to Makefile (go.work)

🔧Why these technologies

  • Go — Compiled language with strong standard library, excellent concurrency primitives, and minimal external dependencies. Ideal for microservices and cloud-native applications.
  • Firestore — Serverless NoSQL database with real-time capabilities and automatic scaling. Removes operational overhead for cloud-run deployments.
  • Cloud Run — Serverless container execution eliminates infrastructure management. Pay-per-invocation pricing suits variable workloads.
  • gRPC + Protocol Buffers — High-performance inter-service communication with schema evolution guarantees. Type-safe code generation from .proto files.
  • OpenAPI 3.0 — Standard HTTP API specification enabling client code generation and API documentation. Multi-format support (JSON/YAML).
  • CQRS Pattern — Separates read and write models enabling independent scaling. Simplifies testing by decoupling command side effects from query optimization.

⚖️Trade-offs already made

  • Multiple repository implementations (MySQL, Firestore, In-Memory)

    • Why: Provide flexibility for different deployment scenarios and testing contexts
    • Consequence: Requires maintaining interface contracts across adapters; added code duplication for schema mapping
  • Monorepo with go.work vs separate repositories per service

    • Why: one clone gives the whole system, and shared code under internal/common stays in sync across services (rationale inferred from the layout; not recorded by the analysis)
    • Consequence: services version and ship together, and module boundaries are enforced by convention rather than by repository boundaries

🪤Traps & gotchas

  • Firestore emulator: the emulator must be running (via Docker Compose) for local tests; bare go test will fail without .env pointing at it.
  • gRPC ports: trainer and trainings services expect specific ports (see docker-compose.yml); port conflicts will silently break inter-service calls.
  • Module initialization: the project uses go.work for a multi-module workspace; run go work sync before go test ./....
  • Auth headers: all HTTP requests to trainer/trainings need a Firebase ID token in the Authorization: Bearer header; mocked in tests via internal/common/auth/http_mock.go.

🏗️Architecture

💡Concepts to learn

  • Bounded Contexts (DDD) — Core organizational principle in this codebase; internal/trainer, internal/trainings, internal/users are separate bounded contexts that communicate via contracts (gRPC/OpenAPI), not shared code.
  • Repository Pattern — Abstracts data access layer in Wild Workouts; repos live in internal/{service}/adapters/repositories/ and allow swapping Firestore for PostgreSQL/MongoDB without touching domain logic.
  • CQRS (Command Query Responsibility Segregation) — Latest pattern being introduced (articles 10+); separates read (Query) and write (Command) operations; enables optimization of read/write paths independently.
  • Clean Architecture / Hexagonal Architecture — Layers in Wild Workouts: domain (core logic) → application (use cases) → ports (interfaces) → adapters (HTTP/gRPC/DB); isolates business logic from frameworks.
  • Contract-First API Design (OpenAPI) — YAML specs in api/openapi/ drive code generation; changes to contracts are version-controlled and visible before implementation, enforcing API stability.
  • gRPC and Protocol Buffers — Inter-service communication in Wild Workouts uses gRPC (defined in api/protobuf/); binary serialization and strict contracts prevent coupling between services.
  • Event Sourcing — Emerging pattern in recent articles (v2.5+); instead of storing just state, stores immutable event log; enables temporal queries and audit trails for training scheduling.
  • golang-standards/project-layout — Establishes Go project structure conventions that Wild Workouts follows in internal/ and cmd/ organization.
  • google-cloud-go/firestore — Underlying Firestore Go client library used for persistence; understanding its API helps modify repository implementations.
  • grpc/grpc-go — gRPC implementation for inter-service communication; Wild Workouts extensively uses protobuf stubs from this ecosystem.
  • deepmap/oapi-codegen — Code generator for OpenAPI specs; produces HTTP handlers and clients in Wild Workouts from YAML definitions.
  • ThreeDotsLabs/watermill — Event streaming library by same authors; referenced in CQRS articles as pattern for implementing event-driven communication (not yet in this repo's core code).

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add integration tests for gRPC services (trainer and users)

The repo has proto definitions (api/protobuf/trainer.proto, api/protobuf/users.proto) and generated gRPC code (internal/common/genproto/), but there are no visible integration tests validating the gRPC contract. Given this is a DDD example project meant to teach best practices, comprehensive gRPC integration tests would demonstrate proper service testing patterns and catch breaking changes in the API surface.

  • [ ] Create internal/services/trainer/grpc_integration_test.go with test cases for trainer service methods
  • [ ] Create internal/services/users/grpc_integration_test.go with test cases for users service methods
  • [ ] Add test helpers in internal/common/testutil/ for gRPC client setup and teardown
  • [ ] Integrate tests into Makefile test target and CI pipeline (.github workflows)

Add comprehensive decorator pattern tests for command and query handlers

The repo implements CQRS with decorators (internal/common/decorator/command.go, internal/common/decorator/query.go, internal/common/decorator/logging.go, internal/common/decorator/metrics.go), but testing coverage for the decorator chain is missing. This is critical for a DDD teaching project—contributors should be able to understand how cross-cutting concerns (logging, metrics) compose with handlers through tested examples.

  • [ ] Create internal/common/decorator/command_test.go with tests for command handler decoration (logging, metrics chains)
  • [ ] Create internal/common/decorator/query_test.go with tests for query handler decoration
  • [ ] Add mock implementations in internal/common/decorator/ for testing decorator composition
  • [ ] Document decorator pattern usage in README.md with code examples

Add API contract tests using OpenAPI specs and generated clients

The repo defines OpenAPI specs (api/openapi/trainer.yml, api/openapi/trainings.yml, api/openapi/users.yml) and generates clients from them (internal/common/client/trainer/, internal/common/client/trainings/, internal/common/client/users/). However, there are no visible contract tests validating that actual service responses match the OpenAPI definitions. Adding contract tests would ensure the implementation stays aligned with documentation and teach API-driven development practices.

  • [ ] Create internal/services/trainer/openapi_contract_test.go testing endpoints against api/openapi/trainer.yml
  • [ ] Create internal/services/trainings/openapi_contract_test.go testing endpoints against api/openapi/trainings.yml
  • [ ] Create internal/services/users/openapi_contract_test.go testing endpoints against api/openapi/users.yml
  • [ ] Add OpenAPI validation step to Makefile and CI pipeline using oapi-codegen tool already in dependencies

🌿Good first issues

  • Add integration tests for the internal/trainings service mirroring the pattern in internal/trainer; currently underspecified compared to trainer tests.
  • Document the CQRS command/query split visible in recent releases with concrete examples in internal/{service}/app/command.go and query.go showing a new feature (e.g., user notification preferences).
  • Extend OpenAPI specs (api/openapi/trainings.yml) with missing operation descriptions and example request/response payloads; currently minimal boilerplate.


📝Recent commits

  • f264c71 — Update dependencies, versions, and local environment tools (#79) (m110)
  • f797e11 — go-cleanarch: override infrastructure layer name (#47) (mrsndmn)
  • 6a7d665 — Fix CORS middleware check for allowed origins (#66) (stavhaygn)
  • 56ef6d5 — chore: pkg imported more than once (#67) (testwill)
  • b519c61 — Fix permissions for web/start.sh (m110)
  • a2d4cfe — fix(issue-34): failing to run docker-compose up for the firestore container (#59) (wcomnisky)
  • 4cc4ad7 — Merge pull request #56 from ThreeDotsLabs/added-decorators (roblaszczak)
  • b47996a — Reduced logs noise (roblaszczak)
  • cef2f59 — Added Command/Query decorators (roblaszczak)
  • 7aae9c3 — Update Go to 1.18 (roblaszczak)

🔒Security observations

  • High · Hardcoded Secrets in Environment Files — .env, .test.env, .test.ci.env, .e2e.env, .e2e.ci.env. Multiple .env files are present in the repository (.env, .test.env, .test.ci.env, .e2e.env, .e2e.ci.env). These files are typically used to store sensitive credentials like database passwords, API keys, and authentication tokens. If committed to version control, they expose secrets to anyone with repository access. Fix: 1) Add all .env files to .gitignore (appears to be done based on .gitignore presence, but verify). 2) Use a secrets management tool (HashiCorp Vault, AWS Secrets Manager, etc.). 3) Document required environment variables in .env.example without sensitive values. 4) Audit git history to ensure no .env files were previously committed.
  • High · Docker Service Account File Exposed in Compose — docker-compose.yml (trainer-http and trainer-grpc services). In docker-compose.yml, the line '# - ./service-account-file.json:$SERVICE_ACCOUNT_FILE' is commented out, but it indicates intent to mount a service account file. If uncommented or if the file exists, it could expose Google Cloud credentials with potentially broad permissions. Fix: 1) Never mount service account files directly into containers. 2) Use Google Cloud Application Default Credentials (ADC) or workload identity instead. 3) If file mounting is necessary, ensure strict file permissions (600) and never commit the file to git. 4) Rotate any exposed service account keys immediately.
  • Medium · Potential Dependency Vulnerabilities - Transitive Dependencies — go.mod. The project includes indirect dependencies (marked with // indirect) that may have known vulnerabilities. Notable packages like gopkg.in/yaml.v2 and gopkg.in/yaml.v3 have had security issues in the past. The go.mod specifies go 1.25.0, but some dependencies may not be optimized for security. Fix: 1) Run 'go list -u -m all' to check for available updates. 2) Execute 'go mod tidy' to clean up dependencies. 3) Use 'go install github.com/google/osv-scanner/cmd/osv-scanner@latest' to scan for known vulnerabilities. 4) Regularly update dependencies, especially security patches. 5) Consider using Dependabot or similar automated tools.
  • Medium · Exposed Service Ports on All Network Interfaces — docker-compose.yml (web, trainer-http, trainer-grpc services). While the docker-compose.yml specifies '127.0.0.1' for port bindings (web, trainer-http, trainer-grpc), this is a best practice. However, if Docker daemon runs on a non-localhost network or if port mappings change, services could be exposed. The config shows ports 8080, 3000, 3010 being bound. Fix: 1) Maintain localhost-only binding (127.0.0.1) for development. 2) For production, use proper network segmentation and firewalls. 3) Implement rate limiting and request validation. 4) Use reverse proxy with TLS/SSL encryption. 5) Never expose services directly to the internet without authentication.
  • Medium · Missing Authentication in gRPC Service — docker-compose.yml (trainer-grpc service), internal/common/server/grpc.go. The trainer-grpc service in docker-compose.yml does not show any explicit authentication mechanism. gRPC services without proper authentication can be accessed by any client that can reach the port. Fix: 1) Implement mutual TLS (mTLS) for gRPC services. 2) Add interceptors for authentication/authorization. 3) Use authentication tokens (JWT, OAuth2). 4) Restrict network access to gRPC ports. 5) Review internal/common/server/grpc.go for security implementations.
  • Medium · Potential Unencrypted Inter-Service Communication — docker-compose.yml, internal/common/server/http.go, internal/common/server/grpc.go. The docker-compose.yml shows HTTP services (trainer-http) and gRPC services without explicit TLS/SSL configuration. Inter-service communication in development may be unencrypted. Fix: Enable TLS for all inter-service communication.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.