RepoPilotOpen in app →

centrifugal/centrifugo

Scalable real-time messaging server in a language-agnostic way. Self-hosted alternative to Pubnub, Pusher, Ably, socket.io, Phoenix.PubSub, SignalR. Set up once and forever.

Healthy

Healthy across all four use cases

weakest axis
Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 5d ago
  • 5 active contributors
  • Apache-2.0 licensed
Show all 6 evidence items →
  • CI configured
  • Tests present
  • Single-maintainer risk — top contributor 84% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

Variant:
RepoPilot: Healthy
[![RepoPilot: Healthy](https://repopilot.app/api/badge/centrifugal/centrifugo)](https://repopilot.app/r/centrifugal/centrifugo)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/centrifugal/centrifugo on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: centrifugal/centrifugo

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/centrifugal/centrifugo shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 5d ago
  • 5 active contributors
  • Apache-2.0 licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 84% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live centrifugal/centrifugo repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/centrifugal/centrifugo.

What it runs against: a local clone of centrifugal/centrifugo — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters | |---|---|---| | 1 | You're in centrifugal/centrifugo | Confirms the artifact applies here, not a fork | | 2 | License is still Apache-2.0 | Catches relicense before you depend on it | | 3 | Default branch master exists | Catches branch renames | | 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code | | 5 | Last commit ≤ 35 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>centrifugal/centrifugo</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of centrifugal/centrifugo. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/centrifugal/centrifugo.git
#   cd centrifugo
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of centrifugal/centrifugo and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "centrifugal/centrifugo(\\.git)?\\b" \\
  && ok "origin remote is centrifugal/centrifugo" \\
  || miss "origin remote is not centrifugal/centrifugo (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(Apache-2\\.0)" LICENSE 2>/dev/null \\
   || grep -qiE "\"license\"\\s*:\\s*\"Apache-2\\.0\"" package.json 2>/dev/null) \\
  && ok "license is Apache-2.0" \\
  || miss "license drift — was Apache-2.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \\
  && ok "default branch master exists" \\
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "internal/app/node.go" \\
  && ok "internal/app/node.go" \\
  || miss "missing critical file: internal/app/node.go"
test -f "internal/app/engine.go" \\
  && ok "internal/app/engine.go" \\
  || miss "missing critical file: internal/app/engine.go"
test -f "internal/api/api.go" \\
  && ok "internal/api/api.go" \\
  || miss "missing critical file: internal/api/api.go"
test -f "internal/client/client.go" \\
  && ok "internal/client/client.go" \\
  || miss "missing critical file: internal/client/client.go"
test -f "internal/config/config.go" \\
  && ok "internal/config/config.go" \\
  || miss "missing critical file: internal/config/config.go"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 35 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~5d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/centrifugal/centrifugo"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Centrifugo is a self-hosted, scalable real-time messaging server that delivers messages instantly to connected clients over WebSocket, HTTP-streaming, SSE, GRPC, and WebTransport. It provides a production-ready PUB/SUB system with channel subscriptions, message history, and automatic reconnect recovery—positioning itself as an open-source alternative to Pusher, Ably, and Socket.io that decouples real-time transport from backend business logic. Monolithic Go binary organized by subsystem: internal/admin/ handles admin API and web UI, internal/api/ contains the main bidirectional protocol handlers (with generated code in *_gen.go files), internal/apiproto/ wraps protobuf definitions (.proto + generated .pb.go), and supporting packages handle broker integrations (Redis, NATS), authentication (JWT in cristalhq/jwt), and observability (OpenTelemetry). The Dockerfile and docker-compose.yml show production deployment patterns.

👥Who it's for

Backend engineers and platform teams building chat apps, live notifications, collaborative tools, or real-time dashboards who need a self-hosted messaging backbone that integrates with any language via HTTP/GRPC APIs. Also DevOps engineers scaling horizontally across Redis or NATS clusters.

🌱Maturity & risk

Production-ready and actively maintained: the codebase is substantial (~1.35M lines of Go), has comprehensive test coverage across internal/admin/handlers_test.go, internal/api/api_test.go, and related test files, runs CI via .github/workflows/test.yml and .github/workflows/release.yml, and uses .goreleaser.yml for automated releases. The v6 version signal and detailed CHANGELOG.md indicate a mature, versioned project with backwards-compatibility discipline.

Moderate risk from Go 1.26+ version constraint and heavy dependency on centrifugal/centrifuge (a foundational internal library that could see breaking changes). The project has many external integrations (AWS SQS, Azure Service Bus, Google Pub/Sub, PostgreSQL, Kafka, NATS, Redis) which expand the attack surface and operational complexity; however, the active CI/CD pipeline and release automation mitigate deployment risk. No obvious single-maintainer signal from the file structure, suggesting a maintainable project.

Active areas of work

Active development on v6 with focus on protocol efficiency and transport diversity (WebTransport support via quic-go/webtransport-go is newly integrated). The internal/apiproto/ package shows ongoing code generation work (see generate.sh and generate_tags.sh), suggesting protocol evolution. OTEL instrumentation additions (go.opentelemetry.io/contrib/instrumentation) indicate observability improvements in flight.

🚀Get running

git clone https://github.com/centrifugal/centrifugo.git
cd centrifugo
go mod download
make build
./centrifugo --help

(or use docker-compose up for a full stack with Redis per docker-compose.yml)

Daily commands: Development: make build (see Makefile); start with ./centrifugo config.json pointing to a Redis instance. Docker: docker-compose up (defined in docker-compose.yml). Production: Use official Docker image or build from Dockerfile; pass config via flags or env vars (parsed by spf13/viper and hashicorp/go-envparse).

🗺️Map of the codebase

  • internal/app/node.go — Core Node struct managing the entire Centrifugo server lifecycle, message routing, and subscription handling—fundamental to understanding the server architecture.
  • internal/app/engine.go — Engine abstraction for pub/sub operations and message broadcasting across nodes—essential for multi-node cluster and messaging guarantees.
  • internal/api/api.go — Primary API handler orchestrating server-side commands (publish, subscribe, kick, etc.)—the main entry point for all client interactions.
  • internal/client/client.go — Client connection handler managing WebSocket/transport lifecycle, subscription state, and bidirectional protocol—critical for understanding client semantics.
  • internal/config/config.go — Configuration schema and parsing for all server settings—required reading for deployment and customization.
  • internal/apiproto/api.proto — Protocol Buffer definition for the bidirectional client-server protocol—defines the contract all transports must implement.
  • internal/app/run.go — Server initialization and startup orchestration—shows how all components wire together at boot time.

🛠️How to make changes

Add a New Server API Command

  1. Define the command in the protocol buffer at the request/response level (internal/apiproto/api.proto)
  2. Regenerate Go stubs by running the generate script in apiproto (internal/apiproto/generate.sh)
  3. Implement the handler method in the API handler (e.g., HandleMyCommand) (internal/api/handler.go)
  4. Add the dispatch logic in api.go to route the command to your handler (internal/api/api.go)
  5. Add unit tests following the _test.go naming convention (internal/api/api_test.go)

Add a New Pub/Sub Engine Backend

  1. Study the engine interface in internal/app/engine.go to understand Broker, Presence, History contracts (internal/app/engine.go)
  2. Create a new file (e.g., internal/engine/myengine.go) implementing the Engine interface (internal/app/engine.go)
  3. Register your engine factory in the config parsing logic (internal/config/config.go) (internal/config/config.go)
  4. Add configuration struct fields for your engine in config.go (e.g., MyEngineConfig) (internal/config/config.go)
  5. Wire the engine instantiation in internal/app/run.go during Node creation (internal/app/run.go)

Add Admin API Endpoint

  1. Define the request/response in the admin handler or extend internal/admin/handlers.go (internal/admin/handlers.go)
  2. Register the HTTP route in the mux setup (internal/app/mux.go) (internal/app/mux.go)
  3. Implement the handler function accessing Node state via the server context (internal/admin/handlers.go)
  4. Write tests following the test file conventions (internal/admin/handlers_test.go)

Add Client-Side Request Handler

  1. Add the new command type to the protocol buffer (internal/apiproto/api.proto) (internal/apiproto/api.proto)
  2. Implement the command handler in the client handler (internal/client/handler.go) (internal/client/handler.go)
  3. Update the client dispatch logic to route the command to your handler (internal/client/handler.go)
  4. Test the handler with integration tests in handler_test.go (internal/client/handler_test.go)

🔧Why these technologies

  • Protocol Buffers (protobuf) — Language-agnostic, compact, versioned protocol definition for bidirectional client-server communication and gRPC support.
  • Go (1.26+) — Goroutines enable handling millions of concurrent connections efficiently; compiled binaries for easy deployment.
  • Redis/Nats/RabbitMQ (pluggable engines) — Abstracts pub/sub backend, enabling both standalone and clustered deployments with optional distributed messaging.
  • gRPC + HTTP/WebSocket — Multiple transport options (WebSocket, SSE, HTTP-streaming, gRPC) allow clients to choose best fit for their platform.
  • JWT (via cristalhq/jwt) — Stateless client authentication without session backend; supports expiry and signature verification.

⚖️Trade-offs already made

  • In-memory client storage on each node vs. distributed registry

    • Why: Lower latency and simplicity for single-node deployments; horizontal scaling via load-balanced nodes.
    • Consequence: Requires external pub/sub engine (Redis/Nats) for multi-node message broadcast; clients may reconnect to different nodes on failover.
  • Protocol Buffer for both REST and gRPC

    • Why: Single schema definition; HTTP/REST uses JSON encoding of proto messages.
    • Consequence: Adds code generation step; slightly verbose JSON representation vs. hand-optimized endpoints.
  • No built-in user authentication/session management

    • Why: Keeps Centrifugo stateless and focused on real-time delivery; authentication delegated to backend via proxy.
    • Consequence: Operators must implement backend token validation; requires initial handshake to exchange credentials.
  • Pluggable history/presence backends

    • Why: Allows users to choose Redis, Nats, or in-memory based on scale and cost.
    • Consequence: Adds abstraction layer complexity; more setup required for features like presence and message history.

🚫Non-goals (don't propose these)

  • Does not provide user authentication or account management—authentication is delegated to backend services.
  • Does not persist messages to a real-time database—history is optional and backend-specific.
  • Does not handle business logic or application state—Centrifugo is transport-only.
  • Does not auto-scale horizontally without an external pub/sub engine (Redis/Nats/RabbitMQ).
  • Does not guarantee message ordering across multi-node clusters when using certain engines.

🪤Traps & gotchas

Required services: Centrifugo needs an external broker (Redis/NATS/etc.) to be useful; running without one limits to single-node mode, losing scalability. Config discovery: uses spf13/viper with environment variable parsing via hashicorp/go-envparse, so prefix conventions matter (CENTRIFUGO_ prefix expected). Code generation: changes to .proto files require running internal/apiproto/generate.sh before testing; the generated *_gen.go files are committed and must stay in sync. gRPC reflection: enabled by default in production, which can leak API schema; consider disabling in secure deployments. Protobuf version: uses buf.yaml for lint/generation, requires protoc toolchain to be installed locally for development.

🏗️Architecture

  • centrifugal/centrifuge — Core protocol library that Centrifugo wraps; defines the bidirectional messaging semantics and connection state machine
  • centrifugal/protocol — Protocol buffer definitions shared across Centrifugo and official SDKs; necessary for maintaining client-server wire format compatibility
  • socketio/socket.io — Direct competitor solving the same real-time messaging problem; useful for feature/design parity comparison
  • phoenixframework/phoenix_pubsub — Elixir-native alternative for the same use case; shows how PUB/SUB abstractions work in functional languages vs. Go
  • hashrocket/ws-public-registry — Example WebSocket-based registry; useful reference for understanding Centrifugo's connection lifecycle and subscription multiplexing patterns

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for internal/apiproto/decode.go and encode.go

The apiproto package handles protocol buffer encoding/decoding which is critical for all client-server communication. Currently there are generated test files (decode_request_gen.go) and encode files, but the core decode.go and encode.go files lack comprehensive unit test coverage. Given that Centrifugo supports multiple transport protocols (WebSocket, HTTP-streaming, gRPC, WebTransport), robust encoding/decoding tests are essential to prevent protocol-level regressions.

  • [ ] Create internal/apiproto/decode_test.go with test cases covering edge cases (empty payloads, malformed data, large messages)
  • [ ] Create internal/apiproto/encode_test.go with test cases for all message types and error conditions
  • [ ] Add benchmarks for encode/decode operations to catch performance regressions
  • [ ] Reference the existing raw_test.go patterns and generated test files for consistency
  • [ ] Test integration with apiproto/errors.go error handling

Add integration tests for WebTransport protocol support in internal/api/

The codebase has WebTransport support (quic-go/webtransport-go dependency and quic-go/quic-go) but the internal/api directory shows handler.go, handler_gen.go, and handler_legacy.go for other protocols. There's no visible WebTransport handler or corresponding tests. WebTransport is a modern, emerging protocol that needs dedicated test coverage to ensure proper stream handling, connection lifecycle, and error scenarios specific to this protocol.

  • [ ] Create internal/api/handler_webtransport.go following the pattern of existing protocol handlers
  • [ ] Create internal/api/webtransport_test.go with tests for connection establishment, message streaming, and disconnection
  • [ ] Test bidirectional communication and backpressure handling specific to WebTransport's stream-based model
  • [ ] Verify integration with the API consuming layer (internal/api/consuming.go)
  • [ ] Document WebTransport-specific configuration in a readme or handler comments

Add functional tests for message broker integrations in internal/broker/

The go.mod shows Centrifugo supports multiple message brokers (NATS, Kafka via franz-go, AWS SQS, Azure Service Bus, Google Cloud Pub/Sub). However, the file listing doesn't show a dedicated internal/broker/ directory with corresponding test files. Given the complexity of distributed pub/sub and the variety of brokers, there should be integration tests that verify each broker backend correctly handles subscribe, publish, and unsubscribe operations without regressions.

  • [ ] Audit internal directory structure to locate broker implementations (likely in internal/app/ or a dedicated broker package)
  • [ ] Create comprehensive integration tests using Docker containers or testcontainers for each broker (NATS, Kafka, SQS, Service Bus, Pub/Sub)
  • [ ] Test scenarios: message ordering, delivery guarantees, error handling, connection recovery
  • [ ] Add CI workflow step or make target to run broker tests (potentially marked as integration tests to skip in fast CI)
  • [ ] Document broker test setup requirements in a CONTRIBUTING.md or broker-specific readme

🌿Good first issues

  • Add integration tests for all broker backends (Redis, NATS, Kafka) in internal/; currently only unit tests exist and production failure modes aren't covered by CI.
  • Document the code generation workflow: add a script internal/apiproto/README.md explaining how to modify .proto files, when to regenerate, and how to validate generated code matches the schema.
  • Implement OpenTelemetry span coverage for the full request lifecycle in internal/api/handler.go; currently only high-level instrumentation exists via go.opentelemetry.io/contrib/instrumentation, missing detailed traces for subscription, publish, and reconnect flows.

Top contributors

Click to expand

📝Recent commits

Click to expand
  • 4f1f3ce — up deps (FZambia)
  • aa252e1 — prepare v6.7.1 (#1130) (FZambia)
  • 6949bdb — feat(consuming): Kafka MSK IAM support for STS AssumeRole (#1129) (samir-is-here)
  • 08ec63b — Set LocalStack image version to 4.14 (#1119) (FZambia)
  • c7241be — Bump google.golang.org/grpc from 1.79.2 to 1.79.3 (#1116) (dependabot[bot])
  • 56034fe — latest deps (FZambia)
  • dab80fe — fix possible SSRF via unverified JWT claims (FZambia)
  • da3bdf0 — prepare v6.6.3 release (#1114) (FZambia)
  • 2259d53 — Fix wrong dep update, make vuln, alpine 3.22 (#1113) (FZambia)
  • 1e582fd — Up deps, Go 1.26.1, better lint DX (#1112) (FZambia)

🔒Security observations

  • High · Outdated Go Version — go.mod. The project specifies 'go 1.26' in go.mod, which appears to be a future/invalid Go version. This suggests either a configuration error or the use of an unreleased Go version. Standard Go versions follow semantic versioning (1.20, 1.21, etc.). This may indicate build process issues or version management problems. Fix: Update to a stable, current Go version (e.g., 1.22 or later). Verify the Go version used in CI/CD pipelines matches the go.mod specification.
  • High · Pre-release Centrifuge Dependency — go.mod - centrifugal/centrifuge dependency. The project depends on a pre-release/development version of centrifuge: 'v0.38.1-0.20260329064207-3fabb233edf4'. Using pre-release dependencies in production can introduce unstable code, security vulnerabilities, and behavioral inconsistencies not yet vetted by the community. Fix: Use stable released versions of centrifuge (e.g., v0.38.1 or later stable release). Only use pre-release versions for development/testing with appropriate version pinning and testing protocols.
  • Medium · Missing SBOM and Dependency Audit — Repository root - missing SBOM generation. While the project has dependency scanning mechanisms mentioned in SECURITY.md, there is no visible Software Bill of Materials (SBOM) in the repository structure. This limits transparency about the complete dependency tree and makes it harder for users to assess supply chain risks. Fix: Implement SBOM generation as part of the release pipeline using tools like syft or cyclonedx. Include SBOM in release artifacts and maintain a dependency audit log.
  • Medium · Docker Image Base Layer Not Pinned — Dockerfile. The Dockerfile uses 'alpine:3.22' without a specific digest hash. While Alpine is a secure base image, using version tags without digest pinning allows for potential supply chain attacks if the tag is retagged with different content. Fix: Pin the Alpine base image to a specific digest: 'FROM alpine:3.22@sha256:...' to ensure image immutability and prevent tag retagging attacks.
  • Medium · Non-root User Privilege Not Verified in Runtime — Dockerfile - USER directive. While the Dockerfile creates a non-root user 'centrifugo' and switches to it, there's no explicit verification that the application enforces this at runtime. The application could potentially escalate privileges if not properly designed. Fix: Add runtime security checks in the application code to verify it runs as non-root. Include assertions or logging to confirm UID/GID. Consider using read-only root filesystem (--read-only flag) in container runtime.
  • Medium · Development Services Exposed in docker-compose.yml — docker-compose.yml. The docker-compose.yml exposes multiple development/testing services (postgres:5432, nats:4222, pubsub-emulator:8085, localstack:4566) without network segmentation. If this file is used in production or on accessible systems, it creates direct exposure of these services. Fix: Restrict docker-compose services to internal networks only. Document that this is development-only. Use separate compose files for production (e.g., docker-compose.prod.yml) with proper network isolation and no exposed ports for internal services.
  • Medium · Missing Security Headers Configuration Documentation — internal/admin/handlers.go, internal/api/handler.go - unclear if headers are set. The file structure shows admin handlers and HTTP handlers but no visible security header configuration or documentation. The README and SECURITY.md don't mention HSTS, CSP, CORS, or other critical security headers. Fix: Implement and document security headers in HTTP responses: X-Content-Type-Options, X-Frame-Options, Strict-Transport-Security, Content-Security-Policy. Provide examples in configuration documentation.
  • Medium · No Visible Rate Limiting or DDoS Protection — internal/api/handler.go, internal. As a real-time messaging server with exposed APIs and multiple transport handlers, there's no visible rate limiting configuration or DDoS protection mechanisms documented. This could allow abuse of the API endpoints. Fix: undefined

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.

Healthy signals · centrifugal/centrifugo — RepoPilot