RepoPilot

nsqio/nsq

A realtime distributed messaging platform

Healthy

Healthy across all four use cases

Use as dependencyHealthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modifyHealthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn fromHealthy

Documented and popular — useful reference codebase to read through.

Deploy as-isHealthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 10mo ago
  • 20 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • Slowing — last commit 10mo ago
  • Concentrated ownership — top contributor handles 51% of recent commits

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/nsqio/nsq)](https://repopilot.app/r/nsqio/nsq)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/nsqio/nsq on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: nsqio/nsq

Generated by RepoPilot · 2026-05-09 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the Verify before trusting section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/nsqio/nsq shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across all four use cases

  • Last commit 10mo ago
  • 20 active contributors
  • MIT licensed
  • CI configured
  • Tests present
  • ⚠ Slowing — last commit 10mo ago
  • ⚠ Concentrated ownership — top contributor handles 51% of recent commits

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live nsqio/nsq repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/nsqio/nsq.

What it runs against: a local clone of nsqio/nsq — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in nsqio/nsq | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 330 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>nsqio/nsq</code></summary>
```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of nsqio/nsq. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/nsqio/nsq.git
#   cd nsq
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of nsqio/nsq and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "nsqio/nsq(\.git)?\b" \
  && ok "origin remote is nsqio/nsq" \
  || miss "origin remote is not nsqio/nsq (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "internal/protocol/protocol.go" \
  && ok "internal/protocol/protocol.go" \
  || miss "missing critical file: internal/protocol/protocol.go"
test -f "apps/nsqd/main.go" \
  && ok "apps/nsqd/main.go" \
  || miss "missing critical file: apps/nsqd/main.go"
test -f "apps/nsqlookupd/main.go" \
  && ok "apps/nsqlookupd/main.go" \
  || miss "missing critical file: apps/nsqlookupd/main.go"
test -f "internal/http_api/http_server.go" \
  && ok "internal/http_api/http_server.go" \
  || miss "missing critical file: internal/http_api/http_server.go"
test -f "internal/pqueue/pqueue.go" \
  && ok "internal/pqueue/pqueue.go" \
  || miss "missing critical file: internal/pqueue/pqueue.go"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 330 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~300d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/nsqio/nsq"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

NSQ is a realtime distributed messaging platform written in Go that handles billions of messages per day at scale. It provides a decentralized, fault-tolerant message queue with guaranteed delivery, operating as a core infrastructure component that abstracts away data format concerns (JSON, Protocol Buffers, MsgPack, etc.) and ships binaries with zero runtime dependencies. The layout is monolithic: core broker logic in apps/nsqd/, lookup service in apps/nsqlookupd/, admin UI in apps/nsqadmin/, with utility apps (nsq_tail, nsq_to_file, nsq_to_http, nsq_to_nsq) as standalone Go binaries in apps/. Benchmarking tools live in bench/ with a Python harness, contrib/ contains example configs and packaging specs, and the internal messaging protocol is language-agnostic.

👥Who it's for

DevOps engineers and backend teams operating high-throughput distributed systems who need a reliable, easy-to-deploy message broker without complex configuration. Also developers building systems at companies like Bitly, Life360, Segment, and Moz that process massive message volumes with strict delivery guarantees.

🌱Maturity & risk

Highly mature and production-ready: the project has proven deployments across major tech companies, comprehensive CI via .github/workflows/test.yml, is written entirely in Go with broad binary distribution (Linux, Darwin, FreeBSD, Windows), and has stable official releases. Documentation at nsq.io is clear; note, however, that the most recent commit at analysis time was roughly 10 months old.

Low risk for production use given its maturity, but contributors should note: the master branch is explicitly development code that may be unstable, there's a single-language codebase (Go-heavy) which concentrates expertise, and minimal external dependencies visible in the codebase reduces supply-chain risk but increases maintenance burden. Admin UI and tooling are tightly coupled to the broker.

Active areas of work

Hard to determine from file structure alone; check the GitHub Actions workflows in .github/workflows/test.yml and recent commits to identify active work. The presence of multiple utility applications (nsq_to_http, nsq_to_nsq) suggests ongoing ecosystem expansion, and the main_test.go files alongside the app entry points indicate continuous testing infrastructure.

🚀Get running

```bash
git clone https://github.com/nsqio/nsq.git
cd nsq
make
```

The Makefile orchestrates the build. For Python dependencies (benchmarking), install requirements from bench/requirements.txt: pip install -r bench/requirements.txt. Binaries will be created in the bin/ directory.

Daily commands:

```bash
# Build all binaries
make
# Start nsqd (broker) on default port 4150
./bin/nsqd
# Start nsqlookupd (discovery) on default port 4160
./bin/nsqlookupd
# Start nsqadmin (web UI) on default port 4171
./bin/nsqadmin --lookupd-http-address=127.0.0.1:4161
```

Access admin UI at http://localhost:4171. Use apps/to_nsq or apps/nsq_tail for message publishing/consumption.
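nsqd also accepts publishes over its HTTP API (the /pub endpoint on the HTTP port, 4151 by default). A minimal Go publisher sketch; the pubURL helper name is ours, not part of NSQ, and it assumes nsqd is running locally:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"net/url"
)

// pubURL builds nsqd's HTTP publish endpoint for a topic.
// nsqd's HTTP API listens on port 4151 by default.
func pubURL(host, topic string) string {
	return fmt.Sprintf("http://%s/pub?topic=%s", host, url.QueryEscape(topic))
}

func main() {
	// POST the message body to /pub; nsqd creates the topic on first publish.
	resp, err := http.Post(pubURL("127.0.0.1:4151", "test"),
		"application/octet-stream", bytes.NewBufferString("hello world"))
	if err != nil {
		fmt.Println("publish failed (is nsqd running?):", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

HTTP is for publishing and control-plane operations; consuming requires the binary TCP protocol or an official client library such as nsqio/go-nsq.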

🗺️Map of the codebase

  • internal/protocol/protocol.go — Core NSQ wire protocol implementation—defines message framing, commands, and responses that all clients and servers must speak
  • apps/nsqd/main.go — Primary message broker daemon entry point; orchestrates topic/channel management, message persistence, and client connections
  • apps/nsqlookupd/main.go — Discovery service entry point; maintains cluster topology and enables producers/consumers to locate nsqd instances
  • internal/http_api/http_server.go — HTTP API infrastructure used by all daemons for control plane operations, monitoring, and administrative tasks
  • internal/pqueue/pqueue.go — Priority queue implementation critical for deferred/delayed message scheduling across the broker
  • internal/auth/authorizations.go — Authentication and authorization framework for access control across NSQ services
  • go.mod — Dependency manifest; shows all external libraries NSQ relies on for protocol, HTTP, and infrastructure

🛠️How to make changes

Add a New Control Plane HTTP Endpoint

  1. Define your HTTP handler function in the appropriate app daemon (e.g., apps/nsqd/main.go) with signature func(w http.ResponseWriter, req *http.Request) (apps/nsqd/main.go)
  2. Register the route using the HTTP server's router pattern in the same main.go file or a dedicated routing section (apps/nsqd/main.go)
  3. Use internal/http_api/api_request.go utilities (NewAPIRequest) to parse and validate query parameters and body (internal/http_api/api_request.go)
  4. Return responses using internal/http_api/api_response.go helpers (SendResponse, SendError) to ensure consistent JSON/plaintext formatting (internal/http_api/api_response.go)
  5. Add authorization checks using internal/auth/authorizations.go if the endpoint requires permission control (internal/auth/authorizations.go)

Add a New Command to the Wire Protocol

  1. Define the command name constant in internal/protocol/names.go (e.g., const NEW_COMMAND = "NEW_CMD") (internal/protocol/names.go)
  2. Implement marshalling and unmarshalling logic in internal/protocol/protocol.go for your command's binary format (internal/protocol/protocol.go)
  3. Add the command handler in the nsqd daemon (apps/nsqd/main.go) to process incoming client requests (apps/nsqd/main.go)
  4. Update client libraries (if applicable) or utilities to emit the new command using the marshalling functions from protocol.go (internal/protocol/protocol.go)
  5. Add tests to verify serialization round-trips and command handling behavior (internal/protocol/protocol.go)
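Per the NSQ protocol spec, server-to-client data is framed as a 4-byte big-endian size, a 4-byte big-endian frame type, then the payload (size covers type plus payload). A self-contained round-trip sketch of the marshalling in step 2; these helpers are illustrative, not the internal/protocol implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Frame type constants mirror the protocol spec:
// 0 = response, 1 = error, 2 = message.
const (
	FrameTypeResponse int32 = 0
	FrameTypeError    int32 = 1
	FrameTypeMessage  int32 = 2
)

// encodeFrame builds [size][frameType][data]; size = 4 + len(data).
func encodeFrame(frameType int32, data []byte) []byte {
	buf := make([]byte, 8+len(data))
	binary.BigEndian.PutUint32(buf[0:4], uint32(4+len(data)))
	binary.BigEndian.PutUint32(buf[4:8], uint32(frameType))
	copy(buf[8:], data)
	return buf
}

// decodeFrame is the inverse: it reads the size prefix, then splits
// the frame type from the payload.
func decodeFrame(buf []byte) (int32, []byte) {
	size := binary.BigEndian.Uint32(buf[0:4])
	frameType := int32(binary.BigEndian.Uint32(buf[4:8]))
	return frameType, buf[8 : 4+size]
}

func main() {
	frame := encodeFrame(FrameTypeResponse, []byte("OK"))
	ft, data := decodeFrame(frame)
	fmt.Printf("type=%d data=%q len=%d\n", ft, data, len(frame)) // → type=0 data="OK" len=10
}
```

Step 5's round-trip tests fall out naturally: encode, decode, and assert the frame type and payload survive.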

Add a New Consumer Utility Application

  1. Create a new directory under apps/ with your utility name (e.g., apps/nsq_my_consumer/) (apps)
  2. Create main.go in your new app directory with a main() function that parses nsqlookupd addresses to discover nsqd instances (apps/nsq_to_file/nsq_to_file.go)
  3. Use the protocol.go client handshake and message frame parsing to receive messages from nsqd brokers (internal/protocol/protocol.go)
  4. Create options.go to define command-line flags (topic, channel, lookupd-http-address) using the flag package (apps/nsq_to_file/options.go)
  5. Log activity using internal/lg/lg.go and optionally emit metrics via internal/statsd/client.go for monitoring (internal/lg/lg.go)
  6. Add your app binary target to the Makefile build section (Makefile)

Implement Deferred Message Scheduling

  1. Use internal/pqueue/pqueue.go to maintain a min-heap of deferred messages sorted by target delivery time (internal/pqueue/pqueue.go)
  2. In the nsqd message pump loop (apps/nsqd/main.go), periodically drain ready messages from the priority queue for delivery (apps/nsqd/main.go)
  3. Add a DEFER_MSEC wire protocol command in internal/protocol/names.go and handle it in your nsqd command dispatcher (internal/protocol/names.go)
  4. Track deferred message counts and timing in internal/statsd/client.go metrics for visibility (internal/statsd/client.go)

🔧Why these technologies

  • Go — Goroutines enable efficient concurrent client handling; standard library HTTP/net packages provide robust protocol implementation; compiles to single distributable binary for operational simplicity
  • TCP-based binary protocol — Low-latency, ordered delivery guarantees; efficient serialization for high-throughput message systems; enables long-lived client connections with minimal overhead
  • Disk-backed queue (file-based) — Provides message durability and replay capability; enables recovery after broker restarts; allows backpressure handling when in-memory queue is full
  • HTTP REST API — Decouples control plane (admin, monitoring) from data plane; enables integration with standard tools and dashboards; allows language-agnostic cluster management
  • StatsD metrics — Enables lightweight observability without heavy instrumentation; integrates with standard monitoring stacks (Graphite, InfluxDB)

🪤Traps & gotchas

NSQ requires at least two services running simultaneously (nsqd and nsqlookupd) for normal operation—starting only nsqd will not be discoverable. The admin UI (nsqadmin) requires explicit --lookupd-http-address flag pointing to nsqlookupd's HTTP port (default 4161, not the TCP port 4160). Messages are not persistent by default and rely on in-memory queues, so broker restarts lose undelivered messages unless explicitly configured. The protocol is custom binary (not HTTP), so standard HTTP clients cannot directly publish/consume—use official client libraries or utility apps.

🏗️Architecture

💡Concepts to learn

  • Decentralized Topology with Discovery Service — NSQ eliminates single points of failure by having multiple nsqd brokers register with nsqlookupd; understanding how producers and consumers discover brokers dynamically is essential to operating NSQ clusters
  • At-Least-Once Delivery Guarantee — NSQ promises messages are delivered at least once via explicit ACKs and requeue on consumer failure—this means your consumer code must be idempotent to handle duplicates, a critical design constraint for NSQ users
  • Consumer Groups and Channels — NSQ uses channels (internal queues) within topics to shard messages among independent consumer groups; this is how multiple services consume the same topic without interfering, central to NSQ's multi-tenant design
  • Binary Protocol over TCP — NSQ uses a custom frame-based binary protocol (not HTTP/gRPC) for efficiency; contributors modifying broker communication or building new clients must understand frame structure, command types (PUB, SUB, RDY), and framing
  • RDY Flow Control — NSQ's RDY (ready) mechanism allows consumers to signal backpressure—setting RDY to 0 pauses delivery, essential for graceful shutdown and preventing consumer overload; critical to implement correctly in custom clients
  • In-Memory Ephemeral Queues — Unlike Kafka, NSQ does not persist messages to disk by default; this trades durability for simplicity and speed, making NSQ suitable for transient event streams but requiring application-level persistence for critical data
  • Lookupd HTTP API for Dynamic Discovery — nsqlookupd exposes HTTP endpoints that clients query to bootstrap broker lists; the protocol-agnostic design means any language can implement discovery by making simple HTTP calls, enabling NSQ's polyglot ecosystem
Related projects

  • rabbitmq/rabbitmq-server — Direct competitor: also a distributed message broker, but Erlang-based with stronger persistence guarantees and wider plugin ecosystem; NSQ prioritizes simplicity and operational ease over feature richness
  • apache/kafka — Alternative distributed messaging platform focused on event streaming and log aggregation; Kafka has stronger ordering guarantees and durability per partition, while NSQ favors simpler topology and lower operational overhead
  • nsqio/go-nsq — Official Go client library for NSQ—essential for developers building producers and consumers in Go; implements the NSQ protocol handshake and reconnection logic
  • nsqio/pynsq — Official Python client library for NSQ—enables Python applications to publish and consume messages, listed in contributed ecosystem documentation
  • prometheus/prometheus — Monitoring and alerting companion: NSQ clusters should be monitored via Prometheus to track queue depth, message rates, and broker health—nsqadmin provides HTTP endpoints suitable for scraping
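The binary-protocol and RDY concepts above are concrete to implement: client-to-nsqd commands such as SUB and RDY are plain ASCII lines terminated by \n (per the NSQ protocol spec). Illustrative helpers, not the nsqio/go-nsq implementation:

```go
package main

import "fmt"

// subCmd builds the SUB command that binds a connection to a
// topic/channel pair.
func subCmd(topic, channel string) []byte {
	return []byte(fmt.Sprintf("SUB %s %s\n", topic, channel))
}

// rdyCmd builds the RDY command that sets the max in-flight count.
// RDY 0 pauses delivery: the backpressure and graceful-shutdown signal.
func rdyCmd(count int) []byte {
	return []byte(fmt.Sprintf("RDY %d\n", count))
}

func main() {
	// → "SUB events archive\n" "RDY 0\n"
	fmt.Printf("%q %q\n", subCmd("events", "archive"), rdyCmd(0))
}
```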

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for internal/auth/authorizations.go

The internal/auth directory only contains authorizations.go with no corresponding _test.go file visible in the repo structure. Given NSQ's critical role in message distribution, authentication and authorization logic requires robust test coverage. This would improve security confidence and prevent authorization bypass regressions.

  • [ ] Create internal/auth/authorizations_test.go with tests for authorization validation logic
  • [ ] Add test cases for edge cases (empty credentials, invalid tokens, permission boundaries)
  • [ ] Ensure test coverage aligns with internal/clusterinfo/*_test.go patterns already in the codebase
  • [ ] Run coverage.sh to verify coverage improvement

Add GitHub Actions workflow for Python dependency security scanning

The repo contains Python dependencies (tornado==4.3, paramiko==1.16.0, boto==2.39.0 in bench/requirements.txt) but .github/workflows/test.yml likely doesn't scan for vulnerable versions. These are outdated packages with known CVEs. A dedicated workflow would prevent accidental use of vulnerable dependencies in benchmarking and tooling.

  • [ ] Create .github/workflows/python-security.yml with actions/setup-python@v4
  • [ ] Add steps to run 'pip install safety' and 'safety check' against bench/requirements.txt
  • [ ] Update bench/requirements.txt to pin secure versions or add security scanning to CI
  • [ ] Document Python dependency maintenance in CONTRIBUTING.md

Add integration tests for nsq_to_http app (apps/nsq_to_http/)

The nsq_to_http application has nsq_to_http_test.go but lacks integration tests validating HTTP endpoint behavior, error handling, and message delivery guarantees. Compared to nsqadmin/main_test.go patterns, the HTTP bridge component would benefit from end-to-end tests covering connection pooling, retries, and various HTTP response codes.

  • [ ] Extend apps/nsq_to_http/nsq_to_http_test.go with integration tests using httptest.Server
  • [ ] Add test cases for HTTP error scenarios (5xx, timeouts, connection failures)
  • [ ] Add test cases validating message ordering and delivery semantics
  • [ ] Verify tests run in CI pipeline via .github/workflows/test.yml

🌿Good first issues

  • Add unit tests for apps/nsq_to_file/strftime.go: the strftime formatting logic has no visible test coverage (no *_test.go file)—write tests for various time format strings and edge cases like month boundaries
  • Document the NSQ protocol handshake in apps/nsqd/README.md: while apps/nsqlookupd/README.md exists, nsqd's README is sparse; document the IDENTIFY → SUB → RDY message sequence that clients must follow for proper initialization
  • Create integration test for apps/nsq_to_nsq/: this forwarding tool lacks a dedicated *_test.go file unlike nsq_to_http—add tests verifying message ordering, duplicate handling, and topic remapping across NSQ clusters


📝Recent commits

  • 9ea5071 — Merge pull request #1504 from c3llus/feat/issue-1499/new-max-defer-delay (mreiferson)
  • d182ee2 — feat(nsqd): added new max defer timeout options (c3llus)
  • 3103474 — Merge pull request #1301 from jehiah/topology_aware_msg_delivery_1301 (jehiah)
  • 1cd6297 — nsqd: add experiment topology-aware-consumption for preferred msg consumption (jehiah)
  • 8e7f8d1 — Merge pull request #1502 from mreiferson/jsbuild (jehiah)
  • a1811e4 — *: compile js (mreiferson)
  • dcec1ee — Merge pull request #1503 from mreiferson/build (mreiferson)
  • 933e89b — *: update build versions (mreiferson)
  • b28375d — Merge pull request #1500 from likid1412/error_check (mreiferson)
  • 73d42c4 — Merge pull request #1498 from johnou/bugfix/nsqadmin_graphite_target_param (mreiferson)

🔒Security observations

  • High · Outdated Python Dependencies with Known Vulnerabilities — bench/requirements.txt. The requirements file pins severely outdated packages: tornado==4.3, paramiko==1.16.0, and boto==2.39.0, all 2015–2016-era releases that predate years of security fixes. For example, paramiko 1.16.0 is affected by CVE-2018-7750 (an authentication bypass), and boto 2.x is end-of-life with deprecated AWS authentication methods. Fix: Update to current stable versions: tornado>=6.3, paramiko>=3.0, and boto3 (the successor to boto). Review and test compatibility with the benchmarking harness, and consider removing unused dependencies.
  • High · Exposed Service Ports in Docker Container — Dockerfile, line: EXPOSE 4150 4151 4160 4161 4170 4171. The Dockerfile exposes multiple ports (4150, 4151, 4160, 4161, 4170, 4171) without any ingress controls or security documentation. These ports serve critical NSQ services (nsqd, nsqlookupd, nsqadmin) that could be exposed to the public internet if deployment documentation is not carefully followed. No network policies or firewall rules are defined in the provided configuration. Fix: Document port exposure requirements clearly. Implement network segmentation in deployment (NetworkPolicies in Kubernetes, security groups in AWS, etc.). Consider requiring explicit port mapping in docker run commands. Implement authentication on exposed APIs.
  • Medium · Missing Security Headers Configuration — internal/http_api/http_server.go. Based on the file structure showing http_api components (internal/http_api/http_server.go), there is no evidence of security headers (HSTS, CSP, X-Frame-Options, etc.) being configured. The API server likely lacks protection against common web vulnerabilities. Fix: Implement comprehensive security headers in HTTP responses. Add HSTS (Strict-Transport-Security), X-Content-Type-Options: nosniff, X-Frame-Options: DENY, and Content-Security-Policy headers. Enable HTTPS enforcement.
  • Medium · Potential Information Disclosure via Verbose Build Flags — Dockerfile, RUN CGO_ENABLED=0 make line. The Dockerfile uses '-ldflags='-s -w'' which strips symbols and disables DWARF debugging information. However, the build process itself may leak information, and there's no evidence of build artifact scanning or verification mechanisms. Fix: Maintain symbol stripping as done. Add build artifact verification (checksums/signatures). Implement secure build pipeline scanning with tools like Trivy. Document the official build process.
  • Medium · Alpine Linux Base Image Without Regular Updates — Dockerfile, line: FROM alpine:latest. The Dockerfile uses 'FROM alpine:latest' which uses the latest tag. While alpine is lightweight, the 'latest' tag can cause non-deterministic builds and may not receive critical security patches quickly. No version pinning is specified. Fix: Pin Alpine version to a specific release (e.g., alpine:3.18.4). Implement regular image rebuilds and scans using tools like Trivy or Grype. Use a container image scanning service in CI/CD pipeline.
  • Low · Missing Code Review and Security Testing in CI/CD — .github/workflows/test.yml. The .github/workflows/test.yml file exists but its contents are not provided. Standard test workflows may not include security-focused checks like SAST, dependency scanning, or license compliance verification. Fix: Add security scanning to CI/CD: implement SAST tools (gosec, staticcheck), dependency scanning (dependabot, snyk), container scanning, and license compliance checks. Require passing security checks before merge.
  • Low · Potential Insecure Deserialization in Protocol Handler — internal/protocol/protocol.go. The internal/protocol/protocol.go file suggests custom protocol handling. Without review of the actual implementation, custom binary protocol parsing can be vulnerable to deserialization attacks. Fix: Conduct a thorough security review of the protocol parsing code. Implement bounds checking and strict length validation on all parsed frames, and reject malformed input early.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
