micro/go-micro
A Go microservices framework
Healthy across all four use cases
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — a clean foundation to fork and modify.
- Documented and popular — a useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 3w ago
- ✓ 5 active contributors
- ✓ Apache-2.0 licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Single-maintainer risk — top contributor 81% of recent commits
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/micro/go-micro)

Paste at the top of your README.md — renders inline like a shields.io badge.

Social card (1200×630): auto-renders when someone shares https://repopilot.app/r/micro/go-micro on X, Slack, or LinkedIn.
Onboarding: micro/go-micro
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/micro/go-micro shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 3w ago
- 5 active contributors
- Apache-2.0 licensed
- CI configured
- Tests present
- ⚠ Single-maintainer risk — top contributor 81% of recent commits
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live micro/go-micro
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/micro/go-micro.
What it runs against: a local clone of micro/go-micro — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in micro/go-micro | Confirms the artifact applies here, not a fork |
| 2 | License is still Apache-2.0 | Catches relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 49 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of micro/go-micro. If you don't
# have one yet, run these first:
#
# git clone https://github.com/micro/go-micro.git
# cd go-micro
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of micro/go-micro and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "micro/go-micro(\.git)?$" \
  && ok "origin remote is micro/go-micro" \
  || miss "origin remote is not micro/go-micro (artifact may be from a fork)"
# 2. License matches what RepoPilot saw.
# Note: the LICENSE file contains the license text ("Apache License ...
# Version 2.0"), not the bare SPDX identifier, so match on the text.
(grep -qi "Apache License" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"Apache-2\.0\"" package.json 2>/dev/null) \
  && ok "license is Apache-2.0" \
  || miss "license drift — was Apache-2.0 at generation time"
# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"
# 4. Critical files exist
for f in client/client.go client/grpc/grpc.go broker/broker.go auth/auth.go client/rpc_client.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 49 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~19d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/micro/go-micro"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
Go Micro is a framework for building distributed microservices in Go. It provides RPC and event-driven communication with built-in service discovery (mDNS), client-side load balancing, authentication (JWT), dynamic config loading, and a typed data-model layer with CRUD operations across multiple backends (SQLite, Postgres, memory). It abstracts away the complexity of distributed systems by providing sane defaults while remaining fully pluggable.

The repo is a monolithic framework organized by feature domain: /auth (JWT, noop, rules-based access control), /broker (HTTP, NATS), /ai (Anthropic, OpenAI integrations), /config, /store, and /registry for service discovery. The core service abstraction is exposed via service.Client(), service.Server(), and service.Model(). A plugin architecture allows swapping implementations via interfaces (the broker.go, auth.go, config.go pattern).
👥Who it's for
Go backend engineers and DevOps teams building microservice architectures who need a complete framework with authentication, service discovery, and data persistence baked in, rather than assembling disparate libraries. Contributors are typically framework developers and Go systems engineers extending plugins (brokers, auth mechanisms, storage backends).
🌱Maturity & risk
Production-ready. The project is at v5.16.0 with comprehensive CI/CD via GitHub Actions (tests.yaml, release.yml), a detailed CHANGELOG.md, SECURITY.md policy, and active maintenance evidenced by structured issue templates (bug_report.md, feature_request.md, performance.md). It is actively used in real-world deployments and sponsored by Anthropic.
Low to moderate risk. The dependency graph is large (visible from go.mod: 40+ indirect deps including nats-io, hashicorp/consul, protobuf, redis) which increases supply-chain exposure, but core dependencies are well-maintained (github.com/golang/protobuf v1.5.4, miekg/dns). Single-maintainer risk exists but is mitigated by public issue templates and documented governance. No breaking changes are evident in recent activity, though v5 suggests past major version churn.
Active areas of work
Active development with AI integration (ai/anthropic, ai/openai added recently), enhanced authentication (auth/rules.go), and structured CI workflows. The presence of ROADMAP.md and ongoing GitHub Actions workflows indicates active feature planning and release management. Recent focus on making authentication and data models first-class citizens.
🚀Get running
git clone https://github.com/micro/go-micro.git
cd go-micro
go mod download
make test
Daily commands:
make test # Run all tests via Go test suite
make build # Build binaries (see Makefile)
No dev server; framework is imported as a library. See examples/ directory (implied in structure) for runnable service demos.
🗺️Map of the codebase
- client/client.go — Core client interface definition; foundational abstraction for all RPC communication in the framework
- client/grpc/grpc.go — Primary gRPC transport implementation; handles request/response marshaling and connection pooling
- broker/broker.go — Core message broker interface; essential for event-driven communication patterns across services
- auth/auth.go — Authentication framework interface; security-critical for zero-trust networking and service identity
- client/rpc_client.go — RPC client implementation with retry logic, backoff, and caching; primary entry point for service calls
- broker/nats/nats.go — NATS broker implementation; popular high-performance message transport in distributed Go Micro deployments
- cache/cache.go — Caching abstraction layer; enables pluggable cache backends (memory, Redis) for response caching
🛠️How to make changes
Add a New RPC Service
- Define your service interface following the proto/gRPC conventions (client/grpc/grpc.go)
- Create a service struct implementing your interface methods (client/rpc_client.go)
- Register your service with a client options configuration — backoff, retry, cache settings (client/options.go)
- Optionally apply auth rules to your service by adding rules in auth/rules.go
Add Event Publishing with Broker
- Choose or implement a broker backend — NATS, RabbitMQ, etc. (broker/broker.go)
- Initialize the broker in your service startup with appropriate options (broker/options.go)
- Call Publish() to emit events; implement a handler with Subscribe() for consumers (broker/nats/nats.go)
- Add context handling if using NATS to extract correlation IDs or metadata (broker/nats/context.go)
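To see the Publish/Subscribe contract in isolation, here is a self-contained in-process broker sketch. The Broker and Message types are assumptions shaped after broker/broker.go, not the framework's exact signatures; a NATS or HTTP backend would implement the same interface over a network:

```go
package main

import (
	"fmt"
	"sync"
)

// Message and Broker are illustrative stand-ins for the shapes
// defined in broker/broker.go.
type Message struct {
	Header map[string]string
	Body   []byte
}

type Handler func(Message)

type Broker interface {
	Publish(topic string, m Message) error
	Subscribe(topic string, h Handler) error
}

// memoryBroker is a minimal in-process implementation — the same
// role the HTTP and NATS brokers play over a transport.
type memoryBroker struct {
	mu   sync.Mutex
	subs map[string][]Handler
}

func NewMemoryBroker() Broker {
	return &memoryBroker{subs: make(map[string][]Handler)}
}

func (b *memoryBroker) Subscribe(topic string, h Handler) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subs[topic] = append(b.subs[topic], h)
	return nil
}

func (b *memoryBroker) Publish(topic string, m Message) error {
	b.mu.Lock()
	handlers := append([]Handler(nil), b.subs[topic]...)
	b.mu.Unlock()
	for _, h := range handlers { // deliver synchronously for simplicity
		h(m)
	}
	return nil
}

func main() {
	b := NewMemoryBroker()
	b.Subscribe("user.created", func(m Message) {
		fmt.Printf("got event: %s\n", m.Body)
	})
	b.Publish("user.created", Message{Body: []byte(`{"id":"42"}`)})
}
```

Because consumers depend only on the Broker interface, swapping the in-memory backend for NATS is a constructor change, not a call-site change — the pluggability described in the TL;DR.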
Implement Custom Cache Backend
- Implement the Cache interface — Get, Set, Delete, Clear methods (cache/cache.go)
- Create a new backend file (e.g., cache/memcached/memcached.go) with a New() constructor, using cache/memory.go as a reference
- Add a backend-specific options struct and apply the builder pattern in cache/options.go
- Wire the cache into the client via the WithCache() option in client/cache.go
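A minimal backend satisfying a Get/Set/Delete/Clear interface looks like the following. This is a sketch of the role cache/memory.go plays — the real cache/cache.go signatures (contexts, TTLs, error values) may differ:

```go
package main

import (
	"errors"
	"sync"
)

// Cache mirrors the Get/Set/Delete/Clear surface described for
// cache/cache.go; exact framework signatures may differ.
type Cache interface {
	Get(key string) (any, error)
	Set(key string, val any) error
	Delete(key string) error
	Clear() error
}

var ErrKeyNotFound = errors.New("key not found")

// memCache is a minimal memory backend; a memcached or Redis
// backend would implement the same interface.
type memCache struct {
	mu sync.RWMutex
	m  map[string]any
}

func NewMemCache() Cache { return &memCache{m: make(map[string]any)} }

func (c *memCache) Get(key string) (any, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	if !ok {
		return nil, ErrKeyNotFound
	}
	return v, nil
}

func (c *memCache) Set(key string, val any) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = val
	return nil
}

func (c *memCache) Delete(key string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.m, key)
	return nil
}

func (c *memCache) Clear() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m = make(map[string]any)
	return nil
}
```

A new backend only needs a New() constructor returning the interface; callers that hold a Cache never see which backend they got.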
Enable JWT Authentication
- Create or import an RSA key pair for signing — see the test samples in auth/jwt/token/test/sample_key
- Initialize the JWT token provider with your key material (auth/jwt/token/jwt.go)
- Configure auth rules specifying which services/methods require tokens (auth/rules.go)
- Inject the auth validator into your client or broker via options (auth/options.go)
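go-micro's token provider (auth/jwt/token/jwt.go) signs JWTs with an RSA key pair. To show the sign-then-verify shape without key files, the sketch below deliberately swaps in stdlib HMAC-SHA256 — it is not the framework's code, only the same idea in miniature:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// sign appends an HMAC-SHA256 signature to the payload, the same
// structure-then-signature shape a JWT has (header.payload.signature).
func sign(payload string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return payload + "." + sig
}

// verify recomputes the signature and compares in constant time,
// returning the payload only if the token is authentic.
func verify(token string, key []byte) (string, bool) {
	i := strings.LastIndex(token, ".")
	if i < 0 {
		return "", false
	}
	payload, sig := token[:i], token[i+1:]
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(sig), []byte(want)) {
		return "", false
	}
	return payload, true
}

func main() {
	key := []byte("test-only-secret")
	tok := sign(`{"sub":"greeter"}`, key)
	payload, ok := verify(tok, key)
	fmt.Println(ok, payload)
}
```

With RSA (as the framework uses), signing takes the private key and verification the public key, so services can verify identities without being able to mint tokens.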
🔧Why these technologies
- gRPC + Protocol Buffers — Efficient binary serialization and HTTP/2 multiplexing for low-latency RPC communication between services
- NATS / RabbitMQ Brokers — Pluggable pub/sub for decoupled event-driven architecture; NATS for high-throughput, RabbitMQ for reliability guarantees
- JWT Authentication — Stateless, zero-trust service-to-service identity and authorization without centralized token server
- Connection Pooling (gRPC) — Reuses long-lived connections to reduce overhead of establishing TLS handshakes per request
- Pluggable Cache (Memory/Redis) — Reduces load on backend services by caching responses at client layer; Redis enables distributed caching across instances
- Exponential Backoff & Retry — Handles transient failures gracefully without overwhelming failing services; implements resilience patterns
⚖️Trade-offs already made
- Pluggable broker backends instead of a single message queue
  - Why: Allows users to choose a broker matching their operational setup (NATS for speed, RabbitMQ for durability)
  - Consequence: Increased code complexity and a need to test all backends; potential for vendor lock-in if users pick one broker
- Client-side retry/backoff with configurable exponential backoff
  - Why: Enables per-service retry policies without server changes
  - Consequence: Stampede risk if all clients retry simultaneously; requires careful backoff tuning
- Optional client-side caching layer
  - Why: Reduces latency and backend load for frequently accessed data
  - Consequence: Cache invalidation complexity; stale-data risk if TTL is not tuned correctly
- Auth rules evaluated at the client layer before RPC
  - Why: Fails fast without a network round-trip if credentials are invalid
  - Consequence: Requires clients to update rules; server-side auth is still needed as defense-in-depth
🚫Non-goals (don't propose these)
- Does not provide API gateway or service mesh (users integrate with Envoy/Istio separately)
- Does not include built-in service discovery (delegated to external etcd/Consul/Kubernetes via resolver plugins)
- Does not handle distributed tracing (users integrate with Jaeger/Zipkin via context propagation)
- Does not implement workflow orchestration or saga patterns
- Does not provide HTTP/REST transport in core (gRPC + optional HTTP broker only)
🪤Traps & gotchas
- Service discovery defaults to mDNS (multicast DNS); this only works on local networks — production deployments require overriding with Consul/etcd via plugins.
- JWT token test fixtures in auth/jwt/token/test/sample_key* must be present; regenerating them breaks existing tests.
- Protocol Buffers (proto3) code generation is implicit; ensure protoc is installed and .proto files are in the repo root, or generated code won't sync.
- Broker implementations are swappable, but the HTTP broker (broker/http.go) and the NATS broker are the main ones; other brokers live in the external plugins repo.
- Load balancing uses a 'random hashed' strategy by default; no explicit config for weighted or least-connections strategies is visible in core files.
🏗️Architecture
💡Concepts to learn
- Service Discovery & mDNS (Multicast DNS) — Core to Go Micro's zero-configuration startup; understanding mDNS limitations (local-network only) and when to switch to Consul/etcd is critical for production deployments
- Client-Side Load Balancing — Go Micro implements load balancing at the client (not server), using random-hashed distribution; knowing this affects how you design resilience and failover
- JWT (JSON Web Tokens) for Cryptographic Identity — Go Micro's auth/jwt/token layer uses JWTs for service-to-service authentication; understanding token structure, claims, and key rotation is essential for secure microservices
- Zero-Trust Networking — Go Micro's philosophy of 'every service gets an identity and certificates'; this is the design principle behind the auth and rules subsystems
- Protocol Buffers (protobuf3) & Code Generation — Go Micro uses protobuf for message serialization and RPC contracts; understanding .proto file structure and code generation is necessary to define services
- Pluggable Architecture / Interface-Based Design — Go Micro's entire strength lies in swappable implementations (brokers, auth, storage, registries); this Go pattern using interfaces is key to customization
- Event-Driven Communication (Pub/Sub Broker Pattern) — Beyond RPC, Go Micro provides pub/sub via pluggable brokers; understanding the broker interface and message routing is essential for async microservice patterns
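The client-side load balancing concept above can be made concrete with a small sketch. The doc describes go-micro's default as "random hashed"; hashing a request key over the healthy node list is one plausible reading of that strategy — this is illustrative, not the framework's actual selector code:

```go
package main

import (
	"errors"
	"fmt"
	"hash/fnv"
)

// selectNode picks a node on the client side — no central balancer.
// It hashes a request key over the node list, so the same key
// consistently routes to the same node while keys spread across nodes.
func selectNode(nodes []string, key string) (string, error) {
	if len(nodes) == 0 {
		return "", errors.New("no nodes available")
	}
	h := fnv.New32a()
	h.Write([]byte(key))
	return nodes[h.Sum32()%uint32(len(nodes))], nil
}

func main() {
	nodes := []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}
	n, _ := selectNode(nodes, "greeter.Say.Hello")
	fmt.Println("routing to", n)
}
```

The resilience implication: because selection happens in the caller, the client must refresh its node list from the registry and handle failures itself — there is no server-side balancer to fail over for it.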
🔗Related repos
grpc/grpc-go— Go Micro uses Protocol Buffers and gRPC patterns; grpc-go is the underlying RPC transport many Go Micro deployments rely onhashicorp/consul— Go Micro defaults to mDNS but production deployments typically swap in Consul for service discovery; Consul API client is already a dependencynats-io/nats.go— Go Micro's event-driven broker can use NATS as a message transport; NATS client is a direct dependency in go.modgo-micro/examples— Official examples repo (implied as 'examples/' in this codebase) containing runnable microservice demos using Go Micro patternsgo-micro/plugins— Community plugins extending Go Micro with additional brokers (Kafka, RabbitMQ), registries, storage backends, and auth mechanisms
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive integration tests for broker/rabbitmq package
The RabbitMQ broker implementation has multiple components (connection.go, channel.go, auth.go) with unit tests only for connection_test.go and rabbitmq_test.go. Missing are integration tests that verify the full lifecycle of connection pooling, channel management, and authentication flows together. This is critical for a message broker that's likely used in production microservices.
- [ ] Review broker/rabbitmq/ structure and existing tests in *_test.go files
- [ ] Create broker/rabbitmq/integration_test.go with tests for: connection pool exhaustion, channel reuse, auth failure scenarios, and message publish/subscribe flows
- [ ] Add test fixtures/docker-compose setup in broker/rabbitmq/test/ directory similar to auth/jwt/token/test/
- [ ] Update .github/workflows/tests.yaml to run integration tests conditionally (with RabbitMQ service container)
Add missing AI provider implementations and standardize ai/ package interface
The ai/ directory has anthropic and openai implementations (ai/anthropic/anthropic.go, ai/openai/openai.go) but ai/model.go and ai/options.go lack comprehensive documentation and type safety. Additionally, there's no ai/README.md detailing how to implement new providers. This blocks contributors from adding LLM integrations for other providers (Claude variants, Llama, Mistral, etc.).
- [ ] Create ai/README.md documenting the Provider interface, required methods, and step-by-step guide for adding new providers
- [ ] Review ai/model.go and ai/options.go, then add godoc comments explaining the Model struct fields and all Option functions
- [ ] Add ai/provider_test.go with a MockProvider implementation to serve as a reference for new contributors
- [ ] Optionally add ai/bedrock/ (AWS) or ai/gemini/ (Google) as a third example implementation to validate the interface design
Add unit tests for auth/rules.go and expand auth/ANALYSIS.md
The auth/rules.go file exists but has only rules_test.go with likely minimal coverage. The auth/ANALYSIS.md file suggests this was analyzed but the documentation is incomplete. Authorization rules are security-critical, and insufficient test coverage (especially edge cases like rule ordering, wildcard matching, and deny overrides) creates risk for microservices relying on this auth layer.
- [ ] Review auth/rules.go implementation and auth/rules_test.go current coverage
- [ ] Expand auth/rules_test.go with test cases for: wildcard patterns in resource/action rules, rule precedence/conflict resolution, grant vs deny semantics, and empty/nil rule scenarios
- [ ] Update auth/ANALYSIS.md with: current rule evaluation algorithm, examples of complex rule sets, and known limitations or planned improvements
- [ ] Add benchmarks (auth/rules_bench_test.go) since rule evaluation may be called frequently in request paths
🌿Good first issues
- Add missing integration tests for broker/http.go and broker/http_test.go: the HTTP broker is tested but test coverage gaps likely exist for edge cases like connection timeouts and malformed message handling.
- Document the data model CRUD layer (service.Model() API): the README mentions it but no docs exist in ai/, auth/, or broker/ directories; add a guide in docs/ or README.md with code examples.
- Implement missing noop implementations for new subsystems: auth/noop/noop.go exists but similar noop brokers (noop/noop.go under broker/) and config providers may be incomplete; add stub no-op implementations for testing.
⭐Top contributors
- @asim — 81 commits
- @Copilot — 15 commits
- @alex-dna-tech — 2 commits
- @jejefferson — 1 commit
- @BombartSimon — 1 commit
📝Recent commits
- 90f7617 — Claude/fix issue 2893 x3rpd (#2895) (asim)
- 79722e0 — feat: add prometheus monitoring wrapper (#2894) (asim)
- 49070af — server/grpc: improve graceful stop behavior (#2892) (jejefferson)
- 03f4759 — Fix inline style attribute in index.html (asim)
- f6fb541 — Enhance Mu.xyz promotion with styled div (asim)
- 1426c8f — Update README to use emoji for Mu.xyz link (asim)
- 5aedb60 — Add Mu.xyz promotional link to README (asim)
- bd13975 — Update container width and add promotional text (asim)
- 213a092 — Update default.html (asim)
- f429195 — Remove Blog link from index.html (asim)
🔒Security observations
- High · Outdated Go Version in Dockerfile — Dockerfile, line: COPY --from=golang:1.26.0-alpine. The Dockerfile specifies Go 1.26.0-alpine, which this analysis flagged as a future/non-existent version — either a typo or a build-configuration error. The Go version should be validated and pinned to a specific, verified release; incorrect versions can lead to unexpected behavior and security gaps. Fix: pin to a stable, current Go version and verify the tag exists on the official golang Docker registry before deployment.
- High · Incomplete Dependency Lock File — go.mod. The go.mod file appears truncated ('gi' at the end suggests incomplete content), which makes a complete security audit of transitive dependencies impossible. Missing or incomplete lock files can lead to unpredictable builds and inclusion of vulnerable packages. Fix: ensure go.mod is complete and properly formatted; run 'go mod tidy' and 'go mod verify', and commit a complete go.sum file to lock dependency versions.
- Medium · Missing SBOM and Dependency Scanning — .github/workflows/. While go.mod is present, there is no evidence of automated dependency vulnerability scanning in the CI/CD pipeline; the workflows directory shows tests.yaml and release.yml but no explicit security-scanning workflows (e.g., Dependabot, Snyk, or a Go vulnerability checker). Fix: implement automated dependency scanning (Dependabot, trivy, or osv-scanner) in CI, and generate and maintain an SBOM (Software Bill of Materials).
- Medium · Non-Root User Configuration Without Permission Verification — Dockerfile, lines: RUN adduser... and WORKDIR /micro. The Dockerfile creates a non-root user 'micro' with UID 1001 (good practice) but sets WORKDIR /micro without verifying ownership after user creation; if the directory doesn't exist or has incorrect permissions, the container may fail or behave unexpectedly. Fix: add explicit ownership and permissions, e.g. 'RUN chown -R micro:micro /micro && chmod 755 /micro' after the WORKDIR declaration.
- Medium · Alpine Base Image Without Security Updates Pinning — Dockerfile, line: FROM alpine:latest. The 'latest' tag can change, potentially introducing unvetted updates or breaking changes, and it reduces reproducibility and security auditability. Fix: pin to a specific Alpine version (e.g., 'alpine:3.19' or 'alpine:3.20') and update deliberately rather than relying on 'latest'.
- Medium · Potential Unencrypted JWT Signing Keys in Repository — auth/jwt/token/test/sample_key and auth/jwt/token/test/sample_key.pub. While marked as test files, private keys should never be stored in version control, even for testing. Fix: replace committed keys with generated test fixtures or gitignore sensitive files; document key-generation procedures for developers, and use environment variables or secrets management for production keys.
- Low · Incomplete SECURITY.md — SECURITY.md. The file appears truncated in the analyzed content ('Impact of t' is incomplete). It exists and correctly directs security reports away from public issues, but the incomplete documentation may not fully address security-policy details. Fix: complete SECURITY.md with supported versions, patch timeline, the security-advisory process, and any relevant security-architecture documentation.
- Low · No Explicit Code Signing or Verification — .goreleaser.yaml / release.yml. The release tooling is present, but there is no explicit evidence of binary signing, checksum verification, or provenance attestation.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.