RepoPilot

Terry-Mao/goim

goim

Overall: Mixed — Stale, last commit 3y ago

Use as dependency — Mixed (weakest axis)

Last commit was 3y ago; no tests detected.

Fork & modify — Healthy

Has a license and CI — a clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • 22+ active contributors
  • MIT licensed
  • CI configured
  • Stale — last commit 3y ago
  • Concentrated ownership — top contributor handles 68% of recent commits
  • No test directory detected
What would change the summary?
  • Use as dependency: Mixed → Healthy if: ≥1 commit in the last 365 days

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste it into your README — it live-updates from the latest cached analysis.

Variant: RepoPilot: Forkable
[![RepoPilot: Forkable](https://repopilot.app/api/badge/terry-mao/goim?axis=fork)](https://repopilot.app/r/terry-mao/goim)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/terry-mao/goim on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: Terry-Mao/goim

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the Verify before trusting section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/Terry-Mao/goim shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Stale — last commit 3y ago

  • 22+ active contributors
  • MIT licensed
  • CI configured
  • ⚠ Stale — last commit 3y ago
  • ⚠ Concentrated ownership — top contributor handles 68% of recent commits
  • ⚠ No test directory detected

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live Terry-Mao/goim repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/Terry-Mao/goim.

What it runs against: a local clone of Terry-Mao/goim — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in Terry-Mao/goim | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 1129 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>Terry-Mao/goim</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of Terry-Mao/goim. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/Terry-Mao/goim.git
#   cd goim
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of Terry-Mao/goim and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "Terry-Mao/goim(\.git)?\b" \
  && ok "origin remote is Terry-Mao/goim" \
  || miss "origin remote is not Terry-Mao/goim (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^MIT" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in cmd/comet/main.go cmd/logic/main.go cmd/job/main.go \
         internal/comet/server.go internal/logic/logic.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 1129 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~1099d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/Terry-Mao/goim"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

goim is a high-performance instant messaging (IM) server written in pure Go that handles millions of concurrent connections. It supports single push, room broadcast, and multi-subscriber fan-out messaging across WebSocket, TCP, and HTTP protocols, scaling via Kafka-based asynchronous notifications and dynamically scalable job/logic modules. The codebase is a modular three-tier monorepo: cmd/comet/ (connection gateway handling WebSocket/TCP/HTTP), cmd/logic/ (routing and subscription logic), and cmd/job/ (async message distribution via Kafka). Protocol definitions live in api/protocol/ and api/comet/ as .proto files. Benchmarks in benchmarks/ simulate multi-push, room, and broadcast scenarios. Configuration is TOML-based (comet-example.toml, logic-example.toml, job-example.toml).

👥Who it's for

Backend engineers and platform teams building real-time chat, notification, or live-streaming infrastructure at scale who need a battle-tested, lightweight IM server capable of 1M+ concurrent connections without buying proprietary solutions.

🌱Maturity & risk

Mature but dormant. The project demonstrates maturity through published benchmarks (1M concurrent users, 35.9M messages/sec throughput), CI/CD via GitHub Actions (.github/workflows/go.yml), and multi-platform SDK examples (Android, iOS, JavaScript). However, the last commit landed roughly three years ago, so treat it as stable-but-unmaintained rather than actively developed.

Low-to-moderate risk. The dependency footprint is reasonable (Kafka, Redis, gRPC, bilibili/discovery for service discovery), but the project relies on external infrastructure (a Kafka broker and the Discovery service) without clearly documented fallback modes. A single primary maintainer (Terry-Mao) is a potential single point of failure, and the roughly three-year gap since the last commit compounds that risk. Configuration is complex (separate logic/comet/job binaries) with little validation tooling visible in the repo.

Active areas of work

No recent changes are visible from the file listing alone, but the repo structure suggests a focus on protocol stability (api/protocol/protocol.proto) and connection management. Documentation exists for the protocol handshake and push flows (docs/proto.md, docs/handshake.png). Example client implementations for WebSocket, Android, and iOS indicate broad platform support.

🚀Get running

git clone https://github.com/Terry-Mao/goim.git
cd goim
make build
make run

Services will start in background; check target/ for binaries and logs. Dependencies: Kafka, Redis, and bilibili/discovery service must be running; configure via TOML files in cmd/{comet,logic,job}/.

Daily commands:

# Build all binaries
make build

# Run all services (spawns logic, comet, job in background)
make run

# Or manually:
nohup ./target/logic -conf=./cmd/logic/logic-example.toml -region=sh -zone=sh001 -deploy.env=dev -weight=10 2>&1 &
nohup ./target/comet -conf=./cmd/comet/comet-example.toml -region=sh -zone=sh001 -deploy.env=dev -addrs=127.0.0.1 2>&1 &
nohup ./target/job -conf=./cmd/job/job-example.toml -region=sh -zone=sh001 -deploy.env=dev 2>&1 &

# Stop all
make stop

Set environment: REGION=sh ZONE=sh001 DEPLOY_ENV=dev or pass via flags.

🗺️Map of the codebase

  • cmd/comet/main.go — Comet server entry point; handles WebSocket/TCP client connections and message routing—understand this to grasp the core push mechanism
  • cmd/logic/main.go — Logic server entry point; routes API requests, manages online state, and coordinates with comet nodes via gRPC
  • cmd/job/main.go — Job server entry point; consumes Kafka messages and broadcasts push notifications to comet servers
  • internal/comet/server.go — Core comet server implementation; manages channels, buckets, and client connection lifecycle
  • internal/logic/logic.go — Central logic orchestrator; handles gRPC/HTTP APIs, coordinates with comet balancer and DAO layer for persistence
  • api/protocol/protocol.proto — Client-facing protocol definition; defines handshake, heartbeat, and message structures for all client types
  • internal/logic/dao/dao.go — Data access abstraction; routes persistence operations to Redis and Kafka, critical for state management

🛠️How to make changes

Add a new HTTP API endpoint

  1. Define the handler function in internal/logic/http/ (e.g., internal/logic/http/custom.go)
  2. Register the route in internal/logic/http/server.go under the router setup
  3. Call methods on l *logic.Logic (internal/logic/logic.go) to access the balancer, DAO, or gRPC clients
  4. Return a JSON response using the internal/logic/http/result.go helpers

Add a new wire protocol message type

  1. Add an operation constant to api/protocol/operation.go
  2. Define the message structure in api/protocol/protocol.proto and regenerate with api/generate.go
  3. Handle the operation in comet's dispatch logic in internal/comet/server.go
  4. Test with client code in examples/javascript/client.js or the benchmarks
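A minimal sketch of steps 1 and 3, using hypothetical operation codes (the real constants and values live in api/protocol/operation.go) and a dispatch switch in the shape of comet's handler:

```go
package main

import "fmt"

// Hypothetical operation codes in the style of api/protocol/operation.go;
// the real file defines the authoritative names and values.
const (
	OpHeartbeat    int32 = 2
	OpSendMsg      int32 = 4
	OpTypingNotify int32 = 100 // the new operation being added (step 1)
)

// dispatch mirrors the shape of the switch in internal/comet/server.go:
// each operation maps to a handler path; unknown ops are rejected.
func dispatch(op int32) string {
	switch op {
	case OpHeartbeat:
		return "heartbeat-reply"
	case OpSendMsg:
		return "route-to-logic"
	case OpTypingNotify: // step 3: handle the new op
		return "fanout-typing"
	default:
		return "close-connection"
	}
}

func main() {
	fmt.Println(dispatch(OpTypingNotify)) // fanout-typing
	fmt.Println(dispatch(999))            // close-connection
}
```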

Add support for a new push source (beyond Kafka)

  1. Create a new consumer adapter in internal/job/ (e.g., internal/job/mqtt.go)
  2. Implement the same interface internal/job/job.go uses to call Push() or Room()
  3. Update the job config in internal/job/conf/conf.go to support the new source type
  4. Initialize the consumer in the job entry point, cmd/job/main.go
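The adapter seam can be sketched as a pair of small interfaces. `Pusher` and `Consumer` are hypothetical names standing in for the surface internal/job/job.go actually exposes (which takes protobuf messages, not byte slices), and the in-memory source stands in for an MQTT client:

```go
package main

import "fmt"

// Pusher captures the minimal delivery surface the job loop needs.
type Pusher interface {
	Push(keys []string, msg []byte) error
}

// Consumer is the adapter contract a new source (MQTT, NATS, ...)
// would implement alongside the existing Kafka consumer.
type Consumer interface {
	Consume(p Pusher) error
}

// memConsumer is a toy in-memory source standing in for an MQTT client.
type memConsumer struct{ pending [][]byte }

func (c *memConsumer) Consume(p Pusher) error {
	for _, m := range c.pending {
		if err := p.Push([]string{"user-1"}, m); err != nil {
			return err
		}
	}
	return nil
}

// logPusher counts deliveries instead of talking to comet over gRPC.
type logPusher struct{ delivered int }

func (l *logPusher) Push(keys []string, msg []byte) error {
	l.delivered++
	return nil
}

func main() {
	src := &memConsumer{pending: [][]byte{[]byte("a"), []byte("b")}}
	sink := &logPusher{}
	src.Consume(sink)
	fmt.Println("delivered:", sink.delivered) // delivered: 2
}
```

Because the job loop only sees the interface, swapping Kafka for another source becomes a config switch plus one new adapter file.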

Add a new data store backend (beyond Redis)

  1. Create a new DAO file in internal/logic/dao/ (e.g., internal/logic/dao/memcached.go)
  2. Implement the DAO interface methods from internal/logic/dao/dao.go for online state and user data lookups
  3. Update the logic config in internal/logic/conf/conf.go to allow selecting the backend
  4. Modify the initialization in internal/logic/logic.go to instantiate the chosen DAO
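A sketch of the backend-selection seam, assuming a trimmed-down DAO interface (the real surface in internal/logic/dao/dao.go also covers sessions, server mappings, and Kafka publishing):

```go
package main

import (
	"errors"
	"fmt"
)

// OnlineDao is a hypothetical slice of the DAO surface: where each
// connection key currently lives.
type OnlineDao interface {
	SetOnline(key, server string) error
	GetServer(key string) (string, error)
}

// memDao stands in for a non-Redis backend (e.g. memcached).
type memDao struct{ m map[string]string }

func newMemDao() *memDao { return &memDao{m: map[string]string{}} }

func (d *memDao) SetOnline(key, server string) error {
	d.m[key] = server
	return nil
}

func (d *memDao) GetServer(key string) (string, error) {
	s, ok := d.m[key]
	if !ok {
		return "", errors.New("offline")
	}
	return s, nil
}

// newDao mimics config-driven backend selection (step 3); the real
// switch would read the value from internal/logic/conf/conf.go.
func newDao(backend string) (OnlineDao, error) {
	switch backend {
	case "memory":
		return newMemDao(), nil
	default:
		return nil, fmt.Errorf("unknown dao backend %q", backend)
	}
}

func main() {
	dao, _ := newDao("memory")
	dao.SetOnline("user-1", "comet-01")
	s, _ := dao.GetServer("user-1")
	fmt.Println(s) // comet-01
}
```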

🔧Why these technologies

  • gRPC + Protobuf — Efficient serialization and inter-service communication (logic ↔ comet ↔ job); low latency required for real-time messaging
  • Redis — Fast in-memory state store for online user locations, session data; required for quick comet node lookups during push
  • Kafka — Decouples API producers from delivery workers; handles burst loads and provides durability for offline message queueing
  • WebSocket + TCP — Bidirectional communication with clients; TCP for native/mobile clients, WebSocket for browsers

🪤Traps & gotchas

  • Critical: Kafka, Redis, and the bilibili/discovery service must be running before starting goim; there is no graceful degradation if they are missing.
  • Config complexity: REGION/ZONE/DEPLOY_ENV must match across all three binaries (logic/comet/job) or discovery registration fails silently.
  • Address binding: the -addrs=127.0.0.1 flag on comet must match the advertised address registered in discovery, or clients cannot connect.
  • No visible health check endpoint in the file list — monitoring/liveness probes may require custom wrappers.
  • Protobuf regeneration: make generate is required after .proto changes; it is not auto-triggered.


💡Concepts to learn

  • Fan-out messaging — Core pattern in goim: one message to a topic/room triggers delivery to N subscriber connections in parallel; critical for understanding scalability at 1M+ users
  • Long-polling vs WebSocket — goim supports both HTTP long-polling and persistent WebSocket connections; knowing tradeoffs (latency, resource usage, browser support) affects client choice
  • Kafka partitioning and consumer groups — goim uses Kafka for async message distribution across job instances; partition assignment and group rebalancing determine throughput and failover behavior
  • Connection multiplexing — comet manages 1M concurrent TCP/WebSocket connections; efficient multiplexing (epoll/kqueue) and goroutine pooling are key to not exhausting OS resources
  • Heartbeat and keep-alive — goim supports TCP keep-alive, HTTP long-polling heartbeats, and application-level heartbeats; choosing the right strategy avoids ghost connections and false disconnects
  • Service discovery and registration — goim integrates with bilibili/discovery for dynamic node registry; without it, clients cannot locate comet nodes or route messages to logic instances
  • Backpressure handling — At 40 pushes/sec broadcast rate over 1M connections, goim must handle slow subscribers; queue depth, channel buffering, and timeout strategies prevent cascading failures
  • grpc/grpc-go — goim uses gRPC for internal service communication; understanding gRPC interceptors and streaming will help optimize comet-job message delivery
  • Shopify/sarama — Kafka client library used in goim (via gopkg.in/Shopify/sarama.v1); knowledge of producer/consumer groups and partition handling is essential for the job component
  • bilibili/discovery — Service discovery and health check system that goim depends on for comet/logic/job node registration and client lookup
  • go-kratos/kratos — Kratos is a Go microservices framework from the same ecosystem; goim's patterns (async messaging, RPC, scaling) are similar to Kratos's design philosophy
  • Terry-Mao/gosnow — Companion ID generation library from the same author; likely used for unique message/user ID generation in production goim deployments
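The fan-out and backpressure concepts above can be condensed into a few lines: broadcast to N bounded subscriber buffers without blocking on slow ones, dropping when a buffer is full (the same tradeoff as the "return push msg error when signal full" fix in the commit log). The types here are illustrative, not goim's:

```go
package main

import (
	"fmt"
	"sync"
)

// subscriber models a comet channel: a bounded buffer feeding one
// connection's writer goroutine.
type subscriber struct {
	ch chan string
}

// broadcast fans one message out to every subscriber without blocking
// on slow ones; it reports how many deliveries were dropped.
func broadcast(subs []*subscriber, msg string) (dropped int) {
	for _, s := range subs {
		select {
		case s.ch <- msg:
		default:
			dropped++ // buffer full: drop rather than stall the room
		}
	}
	return
}

func main() {
	// 3 subscribers with a buffer of 1; pre-fill one so it is "slow".
	subs := make([]*subscriber, 3)
	for i := range subs {
		subs[i] = &subscriber{ch: make(chan string, 1)}
	}
	subs[2].ch <- "backlog"

	dropped := broadcast(subs, "hello room")
	fmt.Println("dropped:", dropped) // dropped: 1

	// Drain concurrently, as per-connection writers would.
	var wg sync.WaitGroup
	for _, s := range subs {
		wg.Add(1)
		go func(s *subscriber) { defer wg.Done(); <-s.ch }(s)
	}
	wg.Wait()
}
```

The production version layers bucket sharding and goroutine pools on top, but the non-blocking send is the core of why one slow client cannot stall a 1M-connection broadcast.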

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for internal/comet/channel.go and internal/comet/bucket.go

The core channel and bucket management logic in comet (connection handling, message routing) lacks visible unit tests. Given that goim is a high-performance IM server handling concurrent connections, these components are critical. Adding tests would improve reliability and make future refactoring safer. The repo already has testify as a dependency.

  • [ ] Create internal/comet/channel_test.go with tests for channel creation, message push, and cleanup
  • [ ] Create internal/comet/bucket_test.go with tests for bucket operations, concurrent access, and channel management
  • [ ] Test edge cases like concurrent pushes, channel closure, and memory cleanup
  • [ ] Run tests with go test ./internal/comet/... and ensure coverage is tracked
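A sketch of the kind of concurrent-access test this PR would add, using a toy mutex-guarded bucket in place of the real internal/comet/bucket.go (which also tracks rooms and per-IP counts); in a real test file, run with go test -race to catch data races:

```go
package main

import (
	"fmt"
	"sync"
)

// bucket is a minimal stand-in for comet's bucket: a guarded
// key → signal-channel map.
type bucket struct {
	mu  sync.RWMutex
	chs map[string]chan []byte
}

func newBucket() *bucket { return &bucket{chs: map[string]chan []byte{}} }

func (b *bucket) Put(key string) {
	b.mu.Lock()
	b.chs[key] = make(chan []byte, 1)
	b.mu.Unlock()
}

func (b *bucket) Del(key string) {
	b.mu.Lock()
	delete(b.chs, key)
	b.mu.Unlock()
}

func (b *bucket) Len() int {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return len(b.chs)
}

func main() {
	b := newBucket()
	var wg sync.WaitGroup
	// 100 connections arrive concurrently; half disconnect immediately.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			key := fmt.Sprintf("conn-%d", i)
			b.Put(key)
			if i%2 == 0 {
				b.Del(key)
			}
		}(i)
	}
	wg.Wait()
	fmt.Println("online:", b.Len()) // online: 50
}
```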

Add gRPC integration tests for internal/comet/grpc/server.go

The gRPC server is a critical component for inter-service communication (logic → comet → clients), but there are no visible integration tests. The current test setup uses testify and the project already has Google protobuf dependencies. Adding tests would verify the comet gRPC API works correctly before deployment.

  • [ ] Create internal/comet/grpc/server_test.go with gRPC server startup and shutdown tests
  • [ ] Add test cases for comet.pb.go RPC methods (Push, PushRoom, PushRooms, Broadcast)
  • [ ] Test error handling for invalid requests and connection failures
  • [ ] Use grpc.Dial and real gRPC clients in tests to verify end-to-end behavior

Add missing API documentation for cmd/logic/main.go and cmd/job/main.go startup parameters

The docs/en and docs/ folders have proto.md and push.md but lack documentation for the HTTP/gRPC API endpoints exposed by logic and job modules. New contributors cannot easily understand what endpoints exist or how to integrate with goim. Adding API documentation matching the example configs (logic-example.toml, job-example.toml) would be valuable.

  • [ ] Create docs/api.md documenting all HTTP endpoints exposed by cmd/logic/main.go (e.g., /push, /pushRoom, /broadcast endpoints)
  • [ ] Create docs/job-api.md documenting job module gRPC and HTTP endpoints for message queue processing
  • [ ] Reference the example TOML configs to document all configurable parameters for each module
  • [ ] Add endpoint examples with curl commands showing request/response payloads

🌿Good first issues

  • Add integration tests for the gRPC comet.proto service (Push, PushRoom, Broadcast) in cmd/comet/ — currently only benchmarks exist, no unit/integration tests visible for message routing logic
  • Document the Discovery service registration contract in docs/proto.md — protocol is defined but registration format, required fields, and failover behavior are not explained for operators
  • Implement a simple HTTP health check endpoint (/health, /ready) in logic/comet services — no liveness probe visible, making Kubernetes/container orchestration integration harder

Top contributors


📝Recent commits

  • 1800484 — Bump github.com/gin-gonic/gin from 1.3.0 to 1.7.0 (#386) (dependabot[bot])
  • 976f4b9 — fix(comet): grpc server bucket if nil (#385) (caiwp)
  • 8fe50e8 — fix: Online count is not decremented when a room has only one channel (#384) (caiwp)
  • 7b8ec8f — Update README_cn.md (#382) (GiHccTpD)
  • 9292188 — fix:handle room data confusion caused by reconnection of the same key (#377) (hirampeng)
  • c708c1e — Fix typo (#373) (zhoutonglx-del)
  • bcc9c1c — return push msg error when signal full (#381) (cg33)
  • e742c99 — Fix the formatting error in the web WebSocket client push command demo (#369) (Miss-you)
  • 2f216ba — fix filed name error (#372) (xiaowuyongzhi)
  • 557e33b — feature: some comment error (#367) (ruizhehou)

🔒Security observations

  • High · Outdated gRPC Dependency with Known Vulnerabilities — go.mod: google.golang.org/grpc v1.22.3. The codebase uses google.golang.org/grpc v1.22.3, which was released in 2019 and contains multiple known security vulnerabilities including HTTP/2 stream reset attacks and denial of service vulnerabilities. Fix: Update to the latest stable version of gRPC (v1.56.0 or later) to patch known vulnerabilities.
  • High · Outdated Go Version Target — go.mod: go 1.13. The module targets Go 1.13 (released September 2019), which is no longer supported and lacks critical security patches. This version is vulnerable to multiple Go runtime exploits. Fix: Update the Go version requirement to 1.20 or later. Ensure all developers and CI/CD pipelines use a supported Go version.
  • High · Outdated Protobuf Dependency — go.mod: github.com/golang/protobuf v1.4.3. The codebase uses github.com/golang/protobuf v1.4.3, which is deprecated and no longer maintained. It contains known security issues and has been superseded by google.golang.org/protobuf. Fix: Migrate to google.golang.org/protobuf v1.30.0 or later. Update all protobuf imports and regenerate .pb.go files.
  • High · Outdated Sarama Kafka Client — go.mod: gopkg.in/Shopify/sarama.v1 v1.19.0, github.com/Shopify/sarama v1.19.0. The codebase uses gopkg.in/Shopify/sarama.v1 v1.19.0 and github.com/Shopify/sarama v1.19.0, which are from 2017-2018 and contain multiple security vulnerabilities in Kafka authentication and encryption handling. Fix: Upgrade to Shopify/sarama v1.38.0 or later. Review Kafka connection security settings in internal/job/conf/conf.go.
  • Medium · Potential Certificate Exposure in Examples — examples/cert.pem, examples/private.pem. The examples directory contains cert.pem and private.pem files which may be demo certificates, but the presence of private keys in version control (even examples) is a security anti-pattern that could be accidentally replicated in production configurations. Fix: Ensure production certificates are never committed to version control. Document certificate generation procedures separately. Consider using .gitignore for *.pem files.
  • Medium · WebSocket Implementation Security — internal/comet/server_websocket.go. WebSocket server implementation in internal/comet/server_websocket.go may not validate origin headers properly, potentially allowing Cross-Site WebSocket Hijacking (CSWSH) attacks if origin validation is not implemented. Fix: Implement proper Origin header validation in the WebSocket upgrade handler. Verify all WebSocket connections validate the Origin header against a whitelist.
  • Medium · Lack of Input Validation Framework — api/protocol/, internal/comet/server.go, cmd/logic/main.go. The codebase uses gin-gonic/gin v1.7.0 for HTTP handling. Review protocol handlers in api/protocol/ for proper input validation, as malformed protocol messages could lead to crashes or memory exhaustion. Fix: Implement comprehensive input validation for all protocol messages. Add bounds checking on packet sizes and message lengths. Consider implementing a message size limit.
  • Medium · Outdated Gin Framework — go.mod: github.com/gin-gonic/gin v1.7.0. The project uses github.com/gin-gonic/gin v1.7.0 from 2021, which contains known security vulnerabilities in middleware handling and path traversal protection. Fix: Upgrade to gin-gonic/gin v1.9.0 or later to patch known vulnerabilities.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
