
hibiken/asynq

Simple, reliable, and efficient distributed task queue in Go

Healthy — Healthy across the board

Use as dependency: Healthy

Permissive license, no critical CVEs, actively maintained — safe to depend on.

Fork & modify: Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from: Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is: Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 2d ago
  • 32+ active contributors
  • Distributed ownership (top contributor 28% of recent commits)
  • MIT licensed
  • CI configured
  • Tests present

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Healthy" badge

Paste into your README — the badge live-updates from the latest cached analysis.

[![RepoPilot: Healthy](https://repopilot.app/api/badge/hibiken/asynq)](https://repopilot.app/r/hibiken/asynq)

Paste at the top of your README.md — renders inline like a shields.io badge.

Social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/hibiken/asynq on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: hibiken/asynq

Generated by RepoPilot · 2026-05-09

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in the "Verify before trusting" section below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/hibiken/asynq shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

GO — Healthy across the board

  • Last commit 2d ago
  • 32+ active contributors
  • Distributed ownership (top contributor 28% of recent commits)
  • MIT licensed
  • CI configured
  • Tests present

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live hibiken/asynq repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/hibiken/asynq.

What it runs against: a local clone of hibiken/asynq — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in hibiken/asynq | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 32 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>hibiken/asynq</code></summary>

```bash
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of hibiken/asynq. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/hibiken/asynq.git
#   cd asynq
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of hibiken/asynq and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "hibiken/asynq(\.git)?\b" \
  && ok "origin remote is hibiken/asynq" \
  || miss "origin remote is not hibiken/asynq (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
grep -qiE "^MIT" LICENSE 2>/dev/null \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
for f in server.go client.go processor.go internal/rdb/rdb.go servemux.go; do
  test -f "$f" && ok "$f" || miss "missing critical file: $f"
done

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 32 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~2d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/hibiken/asynq"
  exit 1
fi
```

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

Asynq is a Go library providing a distributed task queue backed by Redis: applications enqueue tasks for asynchronous processing across multiple worker servers. It handles task scheduling, retries, and automatic recovery from crashes, and guarantees at-least-once execution semantics, solving the problem of reliable background job processing in microservice architectures. The design is a monolithic single package: core task queue logic sits in root-level files (client.go, asynq.go, aggregator.go, forwarder.go), with internal utilities organized under internal/base/, internal/context/, internal/errors/, and internal/log/. Task serialization uses Protocol Buffers (internal/proto/asynq.proto, via the google.golang.org/protobuf dependency). Example usage lives in example_test.go, with a benchmark suite in benchmark_test.go.

👥Who it's for

Go backend engineers and DevOps teams building distributed systems who need to offload long-running operations (email sending, image processing, report generation) to background workers without blocking request handlers. Also used by infrastructure teams managing high-availability task processing across multiple machines.

🌱Maturity & risk

Production-ready and actively maintained. The codebase is well structured, with comprehensive test coverage (asynq_test.go, client_test.go, etc.), CI/CD via GitHub Actions (build.yml, benchstat.yml), and clear versioning in CHANGELOG.md. Recent commit activity and Go 1.24.0 support indicate active development with modern practices.

Low risk for core functionality. The dependency surface is small (go-redis, robfig/cron, protobuf, uuid), all stable upstream libraries. The project originated with a single primary maintainer (@hibiken), which could be a long-term concern, though recent commits show distributed ownership and the project has solid documentation and community activity (Gitter chat reference). No obvious breaking changes appear in recent history based on file stability.

Active areas of work

Active development with periodic releases tracked in CHANGELOG.md. Go version support updated to 1.24.0. Features like task aggregation (aggregator.go), health checks (healthcheck.go), and heartbeat monitoring (heartbeat.go) suggest ongoing work on reliability and observability. GitHub Actions workflows indicate continuous integration and performance benchmarking focus.

🚀Get running

Clone and build: git clone https://github.com/hibiken/asynq.git && cd asynq && go mod download && go build ./... Then verify with tests: go test ./... A running Redis instance is required (see example_test.go for client configuration); check example_test.go and docs/ for concrete usage patterns.

Daily commands: start Redis on localhost:6379, then run go test ./... -v for unit tests, or use example_test.go as a reference: write a client that enqueues tasks (client.go), then create a server with registered handlers (the Server type in server.go). See the Makefile for available targets. A minimal end-to-end sketch follows.
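The sketch below strings that flow together in code: one side enqueues a task, the other registers a handler and consumes it — the same pattern example_test.go demonstrates. It assumes Redis on localhost:6379; the email:send type name and JSON payload are illustrative, and in practice producer and consumer usually run in separate processes.

```go
package main

import (
	"context"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	redis := asynq.RedisClientOpt{Addr: "localhost:6379"}

	// Producer side: enqueue one task (client.go).
	client := asynq.NewClient(redis)
	defer client.Close()
	task := asynq.NewTask("email:send", []byte(`{"user_id":42}`))
	if _, err := client.Enqueue(task); err != nil {
		log.Fatalf("enqueue: %v", err)
	}

	// Consumer side: register a handler and run the server (server.go, servemux.go).
	mux := asynq.NewServeMux()
	mux.HandleFunc("email:send", func(ctx context.Context, t *asynq.Task) error {
		log.Printf("processing %s: %s", t.Type(), t.Payload())
		return nil // nil marks the task done; an error triggers a retry
	})
	srv := asynq.NewServer(redis, asynq.Config{Concurrency: 10})
	if err := srv.Run(mux); err != nil { // blocks until shutdown
		log.Fatalf("server: %v", err)
	}
}
```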

🗺️Map of the codebase

  • server.go — Core server implementation that coordinates task processing, manages worker pools, and orchestrates the main event loop—essential to understanding how tasks flow through the system.
  • client.go — Client API for enqueuing tasks into Redis queues; the entry point for task submission that every contributor must understand to add task-related features.
  • processor.go — Task processor that dequeues and executes tasks with worker management; critical for understanding how work is distributed and concurrency is controlled.
  • internal/rdb/rdb.go — Redis database abstraction layer handling all queue persistence operations; fundamental to understanding data storage and retrieval patterns.
  • servemux.go — Handler multiplexer that routes tasks to registered handlers; required to understand how task types map to processing logic.
  • asynq.go — Main package entry point with core type definitions such as Task and the Redis connection options; the foundational API surface that all components depend on.
  • internal/proto/asynq.proto — Protocol buffer definitions for task serialization; defines the wire format for all inter-process communication.

🛠️How to make changes

Add a New Task Type Handler

  1. Define the task in client code: instantiate a Task with a type name (e.g., 'email_notification') and a payload such as JSON bytes (client.go)
  2. Register a handler in your server initialization using mux.HandleFunc(typeName, func(ctx context.Context, task *asynq.Task) error { ... }) (servemux.go)
  3. Inside the handler, retry count, deadline, and other metadata are available via context helpers such as asynq.GetRetryCount (context.go)
  4. Return nil on success or an error to trigger a retry; wrap asynq.SkipRetry to skip further retries, and note the processor recovers from handler panics (processor.go); see the sketch below
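A sketch of steps 3 and 4, using the package's context helpers and the SkipRetry sentinel; the image:resize type name, the resize function, and the errBadInput sentinel are illustrative stand-ins, not part of asynq.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

var errBadInput = errors.New("bad input") // illustrative sentinel error

// resize is an illustrative worker function standing in for real work.
func resize(ctx context.Context, payload []byte) error { return nil }

func newMux() *asynq.ServeMux {
	mux := asynq.NewServeMux()
	mux.HandleFunc("image:resize", func(ctx context.Context, t *asynq.Task) error {
		// Step 3: retry metadata comes from context helpers (context.go).
		if n, ok := asynq.GetRetryCount(ctx); ok {
			log.Printf("attempt %d for task %s", n+1, t.Type())
		}
		// Step 4: return nil, a plain error, or an error wrapping SkipRetry.
		if err := resize(ctx, t.Payload()); err != nil {
			if errors.Is(err, errBadInput) {
				// Wrap SkipRetry so the processor does not retry this task.
				return fmt.Errorf("unrecoverable: %v: %w", err, asynq.SkipRetry)
			}
			return err // transient failure: the processor schedules a retry
		}
		return nil
	})
	return mux
}

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{Concurrency: 5})
	if err := srv.Run(newMux()); err != nil {
		log.Fatal(err)
	}
}
```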

Add Scheduled/Delayed Task Execution

  1. Create a Task and call client.Enqueue() with ProcessAt(time) or ProcessIn(duration) option (client.go)
  2. Enqueue stores the task in Redis' scheduled queue (a sorted set keyed by execution time) (internal/rdb/rdb.go)
  3. Server runs the Forwarder goroutine which periodically moves due tasks from scheduled queues to active processing queues (forwarder.go)
  4. Once in an active queue, the processor dequeues and executes normally (processor.go)
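A short sketch of step 1's two options, assuming Redis on localhost:6379; the report:weekly type name and the chosen times are illustrative.

```go
package main

import (
	"log"
	"time"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	task := asynq.NewTask("report:weekly", nil)

	// Option A: ProcessIn delays relative to now.
	if _, err := client.Enqueue(task, asynq.ProcessIn(24*time.Hour)); err != nil {
		log.Fatal(err)
	}

	// Option B: ProcessAt targets an absolute time. Steps 2-4 (sorted-set
	// storage, forwarder hand-off, normal processing) happen server-side.
	at := time.Date(2026, time.June, 1, 9, 0, 0, 0, time.UTC)
	if _, err := client.Enqueue(task, asynq.ProcessAt(at)); err != nil {
		log.Fatal(err)
	}
}
```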

Add a Periodic/Recurring Task (Cron)

  1. For a static schedule, create a Scheduler and call scheduler.Register() with a cron spec (e.g., '0 0 * * *' for daily at midnight) and the task to enqueue (scheduler.go); see the sketch below
  2. For schedules managed at runtime, the PeriodicTaskManager syncs registrations from a config provider and computes the next execution times (periodic_task_manager.go)
  3. At each tick, a fresh Task instance is enqueued; the scheduler coordinates the actual submission (scheduler.go)
  4. The normal processor flow handles execution; handler registration and error handling are the same as for regular tasks (processor.go)
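A sketch of the static-schedule path using the Scheduler API; the cleanup:logs type name and the cron spec are illustrative, and a worker server must separately have a handler registered for that type.

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	scheduler := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "localhost:6379"},
		nil, // default SchedulerOpts
	)

	// "0 0 * * *" = daily at midnight.
	task := asynq.NewTask("cleanup:logs", nil)
	entryID, err := scheduler.Register("0 0 * * *", task)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("registered periodic task, entry id: %s", entryID)

	// Run blocks; the scheduler enqueues a fresh task instance on each tick.
	if err := scheduler.Run(); err != nil {
		log.Fatal(err)
	}
}
```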

Add Custom Task Inspection/Monitoring

  1. Use the public Inspector API to query queue stats, task lists, and server state without modifying core logic (inspector.go)
  2. Inspector delegates to internal RDB inspection methods for data retrieval from Redis (internal/rdb/inspect.go)
  3. For custom dashboards, call Inspector methods and format results; the example CLI tool shows this pattern (tools/asynq/cmd/stats.go)
  4. Task context at runtime (inside handlers) provides retry count, remaining timeout, and task ID for logging/tracing (context.go)
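A sketch of step 1 using the public Inspector API, assuming Redis on localhost:6379; the printed fields are a small subset of what QueueInfo exposes.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	inspector := asynq.NewInspector(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer inspector.Close()

	// List every known queue, then pull per-queue stats (internal/rdb/inspect.go
	// does the actual Redis reads under the hood).
	queues, err := inspector.Queues()
	if err != nil {
		log.Fatal(err)
	}
	for _, qname := range queues {
		info, err := inspector.GetQueueInfo(qname)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: size=%d active=%d pending=%d\n",
			info.Queue, info.Size, info.Active, info.Pending)
	}
}
```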

🔧Why these technologies

  • Redis (sorted sets, lists, hashes) — Provides atomic queue operations, scheduled task storage (sorted sets by time), and fast in-memory persistence; supports distributed worker coordination without complex consensus protocols.
  • Protocol Buffers (asynq.proto) — Compact binary serialization for task payloads reduces network overhead and storage in Redis; supports schema evolution without breaking compatibility.
  • Go goroutines & channels — Lightweight concurrency enables thousands of concurrent workers and background services (scheduler, forwarder, janitor) with minimal resource overhead.
  • go-redis client (github.com/redis/go-redis/v9) — High-performance Redis client with connection pooling, pipelining, and RESP3 support; community-standard choice for Go Redis integration.
  • robfig/cron (v3) — Battle-tested cron expression parser for periodic task scheduling; handles complex recurrence patterns without reinventing scheduling logic.

⚖️Trade-offs already made

  • Redis-only backend (no pluggable broker abstraction)

    • Why: Simplifies the API and codebase; Redis' atomic operations and server-side Lua scripts are what make reliable task hand-off possible.
    • Consequence: Tight coupling to Redis; you cannot easily swap in another backend (RabbitMQ, Kafka). The trade-off favors simplicity and correctness over flexibility.
  • Goroutine-per-task worker pool (up to MaxConcurrency limit)

    • Why: Maps naturally to Go's concurrency model and enables fine-grained timeout/cancellation per task.
    • Consequence: High memory overhead at very large worker counts; past roughly 10k tasks/sec on a single server, scaling horizontally across multiple servers is the recommended approach.
  • Client enqueue is synchronous and blocking

    • Why: Provides immediate acknowledgment that the task reached Redis; simpler error semantics for callers.
    • Consequence: Network latency blocks the enqueuing goroutine; high-throughput producers should batch or use async patterns such as a channel drained by a dedicated enqueue goroutine (a sketch follows this list).
  • Periodic tasks require explicit server registration (no central cron registry)

    • Why: Avoids distributed-lock complexity and allows per-server periodic task customization.
    • Consequence: Periodic tasks must be registered consistently wherever a scheduler runs; no central registry detects drift or duplicate registrations across servers.
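A sketch of the async-enqueue pattern referenced above; the buffer size, error handling, and email:send type name are illustrative choices, and real code would need a policy for a full buffer (block, drop, or spill to disk).

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	tasks := make(chan *asynq.Task, 1024)
	done := make(chan struct{})
	go func() { // dedicated enqueue goroutine: the only one that blocks on Redis
		defer close(done)
		for task := range tasks {
			if _, err := client.Enqueue(task); err != nil {
				log.Printf("enqueue failed: %v", err) // real code: retry or surface to caller
			}
		}
	}()

	// Producers just send on the channel and move on without waiting on Redis.
	tasks <- asynq.NewTask("email:send", []byte(`{"user_id":1}`))

	close(tasks) // on shutdown: stop accepting, then wait for the drain
	<-done
}
```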

🪤Traps & gotchas

A Redis connection is required and must be running before the server starts (no in-memory fallback). Task payloads are serialized as bytes; ensure consistent encoding between enqueue and handler deserialization (see the sketch below). Context deadlines (via context.go) are task-specific and require explicit timeout options at enqueue time. The aggregator.go feature requires careful tuning of grace periods to avoid excessive latency. Protocol Buffer use means regenerating Go code whenever the definitions in internal/proto/asynq.proto change. Heartbeat monitoring assumes worker processes are identifiable; container-based deployments need careful configuration.
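A sketch of the standard antidote to the encoding trap: define one payload struct and use it on both sides of the queue. The EmailPayload type and email:send name are illustrative.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"github.com/hibiken/asynq"
)

// EmailPayload is an illustrative payload type. Sharing one struct between
// producer and handler keeps encoding and decoding in sync.
type EmailPayload struct {
	UserID int    `json:"user_id"`
	TmplID string `json:"tmpl_id"`
}

func newEmailTask(p EmailPayload) (*asynq.Task, error) {
	b, err := json.Marshal(p)
	if err != nil {
		return nil, err
	}
	return asynq.NewTask("email:send", b), nil
}

func handleEmail(ctx context.Context, t *asynq.Task) error {
	var p EmailPayload
	if err := json.Unmarshal(t.Payload(), &p); err != nil {
		// Malformed payload: wrap SkipRetry so it won't be retried forever.
		return fmt.Errorf("decode payload: %v: %w", err, asynq.SkipRetry)
	}
	// ... send the email for p.UserID using p.TmplID ...
	return nil
}

func main() {
	task, err := newEmailTask(EmailPayload{UserID: 42, TmplID: "welcome"})
	if err != nil {
		log.Fatal(err)
	}
	// Exercise the decode path directly; in production the server invokes it.
	if err := handleEmail(context.Background(), task); err != nil {
		log.Fatal(err)
	}
}
```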

💡Concepts to learn

  • At-Least-Once Delivery Semantics — Core guarantee in Asynq—tasks won't be lost but may execute multiple times, requiring idempotent handlers; critical to understand for correctness in distributed systems
  • Redis as a Message Broker — Asynq uses Redis lists and sorted sets for queuing and persistence; understanding Redis data structures is essential for debugging queue behavior
  • Worker Pool Pattern — Asynq spawns fixed-size goroutine pools to process tasks concurrently; understanding pool saturation and backpressure is critical for production tuning
  • Task Aggregation / Batching — Asynq's aggregator.go feature groups similar tasks for batch processing (e.g., bulk DB inserts); essential for optimizing high-volume scenarios
  • Exponential Backoff Retry Strategy — Failed tasks are retried with increasing delays via forwarder.go; understanding retry configurations prevents cascade failures
  • Heartbeat and Crash Detection — heartbeat.go monitors worker liveness; tasks are automatically moved back to queue if worker crashes—critical for fault tolerance
  • Weighted and Strict Priority Queues — Asynq supports multiple queue priorities via weighted or strict ordering; understanding queue priority prevents starvation of low-priority tasks (see the config sketch after this list)
  • google/uuid — Direct dependency for task ID generation in asynq; understanding how asynq uses UUID helps with distributed tracing
  • redis/go-redis — The underlying Redis client library used throughout client.go and internal/base/base.go
  • hibiken/asynqmon — Official web UI companion for Asynq cluster inspection and task management
  • robfig/cron — Dependency for periodic task scheduling feature mentioned in README
  • temporalio/temporal — Alternative distributed task orchestration system with stronger guarantees; useful comparison for understanding Asynq's design trade-offs
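The queue-priority config sketch referenced above; the weights and queue names are illustrative choices, not defaults.

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	srv := asynq.NewServer(asynq.RedisClientOpt{Addr: "localhost:6379"}, asynq.Config{
		Concurrency: 20,
		// Weighted mode: out of every 10 dequeue attempts, roughly 6 target
		// critical, 3 default, and 1 low, so low-priority work never starves.
		Queues: map[string]int{
			"critical": 6,
			"default":  3,
			"low":      1,
		},
		// Setting StrictPriority: true would instead always drain higher
		// priority queues before touching lower ones.
	})
	if err := srv.Run(asynq.NewServeMux()); err != nil {
		log.Fatal(err)
	}
}
```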

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive integration tests for aggregator.go in aggregator_test.go

The aggregator is a critical component for task batching and optimization, but aggregator_test.go likely lacks coverage for edge cases like concurrent aggregation, timeout handling, and failure scenarios. This would improve reliability for users relying on task aggregation features.

  • [ ] Review aggregator.go to identify untested code paths (concurrent writes, edge cases in grouping logic)
  • [ ] Add test cases in aggregator_test.go for: concurrent task aggregation, timeout expiration during aggregation, partial failures, and graceful shutdown scenarios
  • [ ] Add benchmarks in benchmark_test.go for aggregator performance under high concurrency
  • [ ] Verify coverage using go test -cover ./... and aim for >90% coverage on aggregator.go

Add CI workflow for Redis version compatibility testing in .github/workflows/

Asynq depends on Redis as its backend, but there's no visible workflow testing against multiple Redis versions (6.x, 7.x, 8.x). This could catch breaking changes or feature gaps early. The build.yml likely only tests against a single Redis version.

  • [ ] Create .github/workflows/redis-compat.yml with a matrix strategy for Redis versions 6.2, 7.x, and 8.x using official redis Docker images
  • [ ] Configure the workflow to run full test suite (go test ./...) against each Redis version
  • [ ] Add a job to test internal/rdb/rdb_test.go specifically, as this is the core Redis interaction layer
  • [ ] Document tested Redis versions in README.md under a 'Requirements' or 'Compatibility' section

Add missing unit tests for inspector.go and internal/rdb/inspect.go

The inspector package provides critical introspection APIs for monitoring queues and tasks, but inspector_test.go and internal/rdb/inspect_test.go may lack coverage for complex querying scenarios, filtering, and error handling paths that users depend on for production observability.

  • [ ] Review inspector.go to identify untested methods (GetQueueStats, ListTasks, GetTaskInfo with various filters)
  • [ ] Add tests in inspector_test.go for filtering by task state, pagination edge cases, and non-existent queue handling
  • [ ] Add tests in internal/rdb/inspect_test.go for edge cases in Lua script execution and Redis response parsing
  • [ ] Add tests for concurrent inspector operations to catch race conditions
  • [ ] Ensure all public inspector API methods have corresponding test cases with both success and failure paths

🌿Good first issues

  • Add comprehensive integration tests for aggregator.go with Redis cluster mode support—currently only single-instance tests visible in aggregator_test.go
  • Expand healthcheck.go with more detailed error context and status codes—currently minimal, could add liveness/readiness probe endpoints
  • Document task serialization best practices in docs/ with examples showing how to safely evolve proto schemas without breaking existing queued tasks


📝Recent commits

  • 23905a2 — Merge pull request #1104 from Bahtya/fix/pubsub-connection-leak (kamikazechaser)
  • 5586efe — test: add test for CancelationPubSub error path (Bahtya)
  • dd3c923 — fix: close pubsub connection on Subscribe error in CancelationPubSub (Bahtya)
  • f81c78e — Merge pull request #1092 from NilPuig/fix/memory-usage-nil-guard (kamikazechaser)
  • 2fd155e — Fix nil guard for MEMORY USAGE in memoryUsageCmd Lua script (NilPuig)
  • d704b68 — Prepare release (docs): v0.26.0 (#1084) (kamikazechaser)
  • a8db5b5 — docs: update CHANGELOG (kamikazechaser)
  • e4248e2 — Add --tls option to dash command (#1073) (thomas4019)
  • c4876e7 — Feature: Add Headers Support to Tasks (#1070) (joejoe-am)
  • dd2c3de — feat: done add-username-cli (#1083) (vlle)

🔒Security observations

The asynq codebase demonstrates a generally solid security posture as a Go-based distributed task queue library. The primary concerns are around operational security (Redis TLS enforcement), input validation on task data, and protobuf deserialization safety. The dependencies are well-maintained and from reputable sources. No obvious hardcoded secrets or SQL injection vulnerabilities were identified. The main recommendations focus on ensuring secure deployment practices, comprehensive input validation, and proper configuration management. The library itself follows Go security best practices, but users must implement proper operational security measures when deploying it.

  • Medium · Dependency on Redis without TLS Enforcement — go.mod, internal/rdb/rdb.go, client.go, server.go. The codebase uses github.com/redis/go-redis/v9 as the core dependency for distributed task queue operations. The library supports TLS, but nothing enforces it, so default configurations can connect to Redis over unencrypted channels and expose task data in transit. Fix: Enforce TLS for all Redis connections in production. Document and enforce TLS configuration requirements in client and server initialization code. Add configuration validation to reject non-TLS connections in secure environments. A TLS configuration sketch follows this list.
  • Medium · Protobuf Code Generation Security — internal/proto/asynq.pb.go, internal/proto/asynq.proto. The project uses google.golang.org/protobuf v1.36.10 with internal/proto/asynq.proto for serialization. Generated protobuf code (asynq.pb.go) may contain auto-generated code that could be vulnerable to deserialization attacks if untrusted data is processed without proper validation. Fix: Validate all deserialized protobuf messages before processing. Implement message size limits to prevent denial of service attacks. Keep google.golang.org/protobuf updated to the latest patch version. Use code review processes for protobuf definitions.
  • Medium · Insufficient Input Validation on Task Data — processor.go, client.go, servemux.go, internal/rdb/rdb.go. As a task queue system, asynq processes tasks from Redis. The codebase may accept and process arbitrary task payloads without comprehensive validation. Malformed or malicious task data could potentially cause crashes or unexpected behavior in worker processors. Fix: Implement comprehensive input validation for all task data received from Redis. Validate task type, payload size, and structure before processing. Implement size limits and sanitization for task arguments and results.
  • Low · Indirect Dependencies Without Explicit Pins — go.mod. The go.mod file lists indirect dependencies (github.com/cespare/xxhash/v2, github.com/dgryski/go-rendezvous) that are only pulled in transitively, so their versions can drift when direct dependencies are upgraded. Fix: Periodically audit indirect dependencies. Use 'go get -u' cautiously in controlled environments. Run 'go mod tidy' regularly and review go.sum changes. Add dependency scanning to the CI/CD pipeline.
  • Low · Hardcoded Configuration Defaults — asynq.go, client.go, server.go. While no obvious hardcoded secrets were detected, distributed systems often have default configurations that may not be secure in production. The codebase should ensure sensible security defaults for Redis authentication, timeouts, and connection settings. Fix: Document all configuration options and their security implications. Provide secure defaults (e.g., connection timeouts, maximum retries). Require explicit authentication setup for Redis. Implement validation to prevent insecure configurations in production.
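The TLS sketch referenced in the first finding, using the TLSConfig field on asynq's RedisClientOpt; the address, server name, and password handling are illustrative, and a production setup would also pin a CA pool rather than rely on system roots alone.

```go
package main

import (
	"crypto/tls"
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	opt := asynq.RedisClientOpt{
		Addr:     "redis.internal:6380",       // illustrative address
		Password: "read-from-secret-manager",  // illustrative; never hardcode real credentials
		TLSConfig: &tls.Config{
			ServerName: "redis.internal",
			MinVersion: tls.VersionTLS12,
		},
	}
	client := asynq.NewClient(opt)
	defer client.Close()
	log.Println("client configured with a TLS-enabled Redis connection")
}
```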

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
