sourcegraph/conc
Better structured concurrency for go
Healthy across the board
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — clean foundation to fork and modify.
- Documented and popular — useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓Last commit 1w ago
- ✓14 active contributors
- ✓Distributed ownership (top contributor 39% of recent commits)
- ✓MIT licensed
- ✓CI configured
- ⚠No test directory detected
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
[](https://repopilot.app/r/sourcegraph/conc)

Paste at the top of your README.md — renders inline like a shields.io badge.
Social card preview (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/sourcegraph/conc on X, Slack, or LinkedIn.
Onboarding: sourcegraph/conc
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sourcegraph/conc shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across the board
- Last commit 1w ago
- 14 active contributors
- Distributed ownership (top contributor 39% of recent commits)
- MIT licensed
- CI configured
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live sourcegraph/conc
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/sourcegraph/conc.
What it runs against: a local clone of sourcegraph/conc — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sourcegraph/conc | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 38 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sourcegraph/conc. If you don't
# have one yet, run these first:
#
# git clone https://github.com/sourcegraph/conc.git
# cd conc
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of sourcegraph/conc and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sourcegraph/conc(\.git)?\b" \
  && ok "origin remote is sourcegraph/conc" \
  || miss "origin remote is not sourcegraph/conc (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
test -f "waitgroup.go" \
  && ok "waitgroup.go" \
  || miss "missing critical file: waitgroup.go"
test -f "pool/pool.go" \
  && ok "pool/pool.go" \
  || miss "missing critical file: pool/pool.go"
test -f "panics/panics.go" \
  && ok "panics/panics.go" \
  || miss "missing critical file: panics/panics.go"
test -f "stream/stream.go" \
  && ok "stream/stream.go" \
  || miss "missing critical file: stream/stream.go"
test -f "iter/iter.go" \
  && ok "iter/iter.go" \
  || miss "missing critical file: iter/iter.go"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 38 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~8d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sourcegraph/conc"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
⚡TL;DR
conc is a Go library that provides safer, more ergonomic abstractions for concurrent programming, wrapping and extending the standard library's sync.WaitGroup. It offers structured concurrency primitives — WaitGroup, Pool (a concurrency-limited task runner), ResultPool (a collector of concurrent results), ErrorPool, and ContextPool — eliminating boilerplate and making goroutines harder to misuse. The structure is modular: root-level waitgroup.go and the stream/ package provide the core WaitGroup enhancement and streaming iteration, while the pool/ subdirectory contains six variants (Pool, ResultPool, ErrorPool, ResultErrorPool, ContextPool, ResultContextPool) covering different use cases. Each variant is independently tested in a parallel _test.go file. The panics/ and iter/ subpackages handle panic recovery and result-iteration utilities.
👥Who it's for
Go backend engineers at companies like Sourcegraph who need to spawn and coordinate many concurrent tasks (API handlers, background workers, batch processing) without manually managing sync.WaitGroup plumbing or reinventing error/result collection patterns.
🌱Maturity & risk
Actively maintained and production-ready. The repo has comprehensive test coverage (test files parallel every module), GitHub Actions CI running on every commit (go.yml, bench.yml, main.yml), Go 1.20+ minimum version pinned in go.mod, and minimal dependencies (only testify for assertions). Recent activity and clean test output signal an actively developed project.
Very low risk. A single test-only direct dependency (testify v1.8.1) and zero runtime dependencies, a clean dependency tree with no transitive bloat, and the library is a pure abstraction layer over Go's standard library concurrency primitives — no external services or complex state. The main risk is the Go version constraint: it requires Go 1.20+, so users on older versions cannot adopt.
Active areas of work
No specific PR/milestone data is visible in the provided snapshot, but the test coverage and CI setup (bench.yml, main.yml, go.yml) indicate ongoing maintenance. The .golangci.yml file shows active linting discipline. The library appears stable rather than rapidly evolving — the focus is on refinement rather than new major features.
🚀Get running
git clone https://github.com/sourcegraph/conc.git
cd conc
go mod download
make test # or 'go test ./...'
Daily commands:
make test # Run all tests
make bench # Run benchmarks (see .github/workflows/bench.yml)
make lint # Lint with golangci-lint
go test -v ./... # Verbose test output
🗺️Map of the codebase
- waitgroup.go — Core WaitGroup abstraction that provides the foundation for all concurrency patterns in the library; every contributor must understand its API and contract.
- pool/pool.go — Base worker pool implementation that all other pool types build upon; defines the pattern for job submission and worker lifecycle.
- panics/panics.go — Panic handling mechanism used throughout the codebase to safely capture and propagate panics across goroutines; critical for error safety.
- stream/stream.go — Iterator-based stream processing abstraction that enables composable concurrent operations; key to the library's higher-level API.
- iter/iter.go — Core iterator interface and utilities that underpin the stream layer; foundation for lazy evaluation in concurrent pipelines.
- pool/error_pool.go — Error-aware pool pattern used frequently; demonstrates how the library handles and aggregates errors from concurrent operations.
🧩Components & responsibilities
- WaitGroup (waitgroup.go; uses sync.WaitGroup, panics.Guard) — Tracks the lifecycle of spawned goroutines and aggregates panics/errors from them
  - Failure mode: If panics are not captured, the application crashes; if Wait() is never called, errors are silently dropped
- Worker Pool (pool/pool.go and variants; uses sync.WaitGroup, sync.Mutex in pool internals, panics.Guard) — Manages a bounded number of worker goroutines and queues incoming jobs for execution
  - Failure mode: Job queue overflow if not consumed; a worker panic crashes the pool; context cancellation may leave jobs unprocessed
- Panic Guard (panics/panics.go; uses the recover() built-in, error interfaces) — Captures panics from goroutine execution and stores them for later retrieval
  - Failure mode: If the Guard is not used, panics propagate and crash the goroutine; panics.Try without proper error checking loses panic info
- Stream & Iterator (stream/stream.go, iter/iter.go; uses the Iterator interface, worker pools, panics handling) — Provides functional composition of concurrent data transformations via lazy evaluation
  - Failure mode: Iterator exhaustion without proper cleanup may leak goroutines; map/filter errors are aggregated in the final Wait()
🔀Data flow
- Application code → Pool.Exec(job) — Job function submitted to the worker pool for concurrent execution
- Pool.Exec() → worker goroutine via panics.Guard — Job is wrapped in a panic guard and executed in a spawned goroutine
- Worker goroutine → panics.Guard (panic capture) — If the job panics, the panic is caught and stored; if it returns, the result/…
🛠️How to make changes
Add a new pool variant with custom return type and error handling
- Create a new file in the pool/ directory (e.g., pool/custom_pool.go) that embeds or wraps the appropriate base pool type (pool/custom_pool.go)
- Define a public Exec method that accepts a job function and integrates panic handling via panics.Guard (pool/custom_pool.go)
- Implement a Wait or similar method that returns your custom result/error type, following the pattern in pool/result_error_pool.go (pool/custom_pool.go)
- Add comprehensive unit tests in pool/custom_pool_test.go covering success, error, and panic cases (pool/custom_pool_test.go)
Create a new concurrent data transformation using Stream
- Implement a new iterator type in iter/custom_iter.go that yields data items, following the iter.Iter interface from iter/iter.go (iter/custom_iter.go)
- Add transformation logic in a new file (e.g., iter/custom_transform.go) that takes an iterator and returns a transformed iterator (iter/custom_transform.go)
- Chain your iterator with stream.New() and .Map() or other operations from stream/stream.go in your application code (stream/stream.go)
- Test with iter/custom_transform_test.go, validating concurrent execution and panic propagation (iter/custom_transform_test.go)
Extend WaitGroup with custom synchronization logic
- Embed conc.WaitGroup in a new struct (e.g., in your application) and wrap its Go() method (waitgroup.go)
- Use panics.Try to safely execute goroutines and capture panics or errors via waitgroup.Wait() (panics/try.go)
- Implement custom Wait logic that aggregates panics and errors according to your application's needs (waitgroup.go)
- Test panic propagation and error handling in your application's test file (waitgroup_test.go)
🔧Why these technologies
- Go goroutines and channels — Lightweight concurrency primitives; the library provides ergonomic abstractions over raw goroutines to reduce synchronization boilerplate
- sync.WaitGroup — Standard library primitive for goroutine lifecycle management; conc wraps it with panic safety and better error handling
- Context for cancellation — Enables timeout and cancellation propagation across concurrent operations; supported via context_pool variants
- Iterator pattern (iter.Iter interface) — Enables lazy, composable data transformations; pairs with worker pools for efficient concurrent map/filter operations
⚖️Trade-offs already made
- Panic capture and propagation via panics.Guard
  - Why: Goroutines panic independently; without capture, panics crash the entire process. The library provides safe panic handling.
  - Consequence: Slight runtime overhead per goroutine; requires an explicit Wait() call to surface panics to the caller
- Separate pool types (Pool, ErrorPool, ContextPool, ResultPool, etc.) instead of a unified generic pool
  - Why: Clarity and a focused, zero-cost API for each use case; a single fully generic pool would push type-parameter complexity onto every caller
  - Consequence: Code duplication across pool variants; adds conceptual surface area, but each type is simple and focused
- Iterator-based streams instead of channels
  - Why: Channels require explicit goroutines for generation; iterators enable lazy evaluation and easier resource cleanup
  - Consequence: Iterators are pull-based; less natural for event-driven or continuous streams compared to channels
- No built-in cancellation token; uses context.Context
  - Why: Aligns with Go standard library conventions; context is familiar to all Go developers
  - Consequence: Requires a context parameter in context-aware variants; adds boilerplate for simple cases
🚫Non-goals (don't propose these)
- Not a replacement for Go's standard sync package (only supplements it with better ergonomics)
- Not intended for real-time or hard real-time systems (goroutine scheduling is non-deterministic)
- Not a distributed system coordination library (purely in-process concurrency)
- Does not provide lock-free data structures (focuses on task-level concurrency, not fine-grained data parallelism)
🪤Traps & gotchas
Two worth knowing up front. The Go module is pinned to 1.20+ (for errors.Join-style multierrors; generics need 1.18+), so Go 1.17 or earlier users will get immediate compile errors. And the panics/ package changes the semantics of panic handling (it recovers instead of propagating), which is powerful but non-obvious — read panics/try.go to understand when panics are swallowed vs. re-raised.
🏗️Architecture
💡Concepts to learn
- Structured Concurrency — This repo implements the structured concurrency paradigm—ensuring child goroutines are scoped to parent, preventing orphaned goroutines and making cancellation explicit
- Goroutine Pools / Worker Pools — The core pattern in pool/ modules—bounded queues of reusable goroutines prevent unbounded goroutine explosion in high-concurrency scenarios
- Panic Recovery in Concurrent Systems — panics/try.go demonstrates controlled panic recovery across goroutine boundaries—essential for isolating failure in one task from crashing the whole application
- Generics (Go 1.18+) — ResultPool and iter patterns rely heavily on Go generics for type-safe result collectors without runtime reflection or interface{} casting
- Context Propagation — ContextPool and stream modules show how to thread context.Context through concurrent goroutines for cancellation, timeouts, and deadline enforcement
- Semaphore Pattern — Underlying mechanism in pool.Pool to limit concurrency—uses a buffered channel as a semaphore to enforce max goroutine count
- Iterator / Functional Composition — iter/iter.go and iter/map.go provide functional-style chaining over async results, allowing declarative result pipelines instead of manual loops
🔗Related repos
- golang/sync — Official Go team's sync extensions (errgroup and other concurrency helpers); conc competes with and improves on patterns here
- tsenart/vegeta — Go HTTP load-testing tool that coordinates thousands of concurrent requests — a user of the pool patterns conc solves
- grpc-ecosystem/go-grpc-middleware — Uses structured concurrency for RPC interceptor chains; could leverage conc patterns for concurrent interceptor composition
- uber-go/zap — High-performance logging library often paired with conc for safe async logging in concurrent workloads
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add benchmark comparison tests in bench.yml workflow
The repo has a bench.yml workflow file but no visible benchmarking code in the file structure. The concurrency primitives (WaitGroup, error pools, result pools, streams) are performance-critical. A dedicated benchmark suite comparing conc primitives against stdlib sync.WaitGroup and goroutine patterns would demonstrate value to users and catch performance regressions.
- [ ] Create benchmarks/ directory with benchmark files for each package (waitgroup, pool variants, stream, iter)
- [ ] Add bench_test.go files in pool/, stream/, iter/ directories using Go's testing.B
- [ ] Update .github/workflows/bench.yml to run benchmarks on PR and store results for comparison
- [ ] Add benchmark results to README.md showing performance improvements over stdlib
Add integration tests combining multiple concurrency primitives
Current test files are isolated by package (pool_test.go, stream_test.go, iter_test.go, waitgroup_test.go). Real-world usage combines these primitives together. Adding integration tests would catch edge cases when WaitGroup, error pools, and streams interact, and serve as usage examples for new contributors.
- [ ] Create integration_test.go in the root directory
- [ ] Add test cases combining: WaitGroup + error pool, stream + panics recovery, iter + result pools
- [ ] Test concurrent error handling across multiple primitives
- [ ] Add examples showing real-world patterns (fan-in/fan-out, error aggregation, etc.)
Add missing test coverage for panics/try.go error path handling
The panics package includes try.go (panic recovery) which is critical for structured concurrency safety, but try_test.go may not fully cover edge cases around panic propagation, stack unwinding, and interaction with error handling in pools. Enhanced tests would ensure robustness of the error recovery mechanism.
- [ ] Review current panics/try_test.go for coverage gaps using 'go test -cover ./panics'
- [ ] Add tests for: nested panic recovery, panic with nil values, panic during concurrent execution in pools
- [ ] Add tests for panic recovery in context of error pools (error_pool.go) and result pools (result_error_pool.go)
- [ ] Add test case verifying stack traces are preserved through panic recovery
🌿Good first issues
- Add benchmarks for pool.ResultPool vs. sync.WaitGroup + manual result slice pattern to demonstrate performance win—the bench.yml workflow exists but concrete comparative benchmarks are missing from .github/workflows/
- Extend iter/map.go with FilterMap or other functional combinators (FlatMap, Reduce) that are common in result processing chains—currently only map and basic iteration exist
- Add integration examples to README showing real patterns like 'fetch 100 URLs concurrently, collect non-nil errors, return first result'—README is brief on actual usage beyond function signatures
⭐Top contributors
- @camdencheek — 39 commits
- @bobheadxi — 39 commits
- @sashamelentyev — 10 commits
- Craig Pastro — 2 commits
- @miparnisari — 1 commit
📝Recent commits
- 5f936ab — bump minimum go version for main (camdencheek)
- a3ac5f2 — bump minimum go version (camdencheek)
- 1124809 — add Makefile and run benchmarks as part of PRs (#130) (miparnisari)
- e454401 — Exclusively use go1.20 multierrors (#127) (camdencheek)
- 4c5c70a — Reusable pools (#129) (camdencheek)
- 8427ccd — Make result order deterministic (#126) (camdencheek)
- 4afefce — Multierror: join errors at the end (#132) (camdencheek)
- 30a99cd — simplify collection of errors (camdencheek)
- a68c69f — remove obsolete build directives (camdencheek)
- b7b9417 — add test for nil callback in stream (camdencheek)
🔒Security observations
The conc repository demonstrates a strong security posture overall. As a structured concurrency library for Go, it has minimal attack surface. No hardcoded secrets, injection vulnerabilities, or dangerous patterns were identified in the file structure. Dependencies are minimal and well-maintained (testify is a reputable testing library with no known vulnerabilities). The primary concerns are environmental: the target Go version (1.20) is outdated and no longer receives security updates, and dependencies could be updated to more recent versions. The project includes proper testing infrastructure via GitHub Actions workflows. No Docker configuration issues are present as this is a library package rather than an application. Recommendation: Update Go version and refresh dependencies to maintain long-term security.
- Low · Outdated Go Version — go.mod. The project targets Go 1.20, which reached end-of-life on August 8, 2024 and no longer receives security updates. Outdated Go versions may expose the project to known vulnerabilities in the Go runtime and standard library. Fix: Update go.mod to target a currently supported Go version (1.22 or later), and test compatibility with the latest stable release.
- Low · Testify Dependency Not Current — go.mod. The testify dependency is pinned at v1.8.1 (October 2022), which is relatively old. While not a direct security risk for a test-only library, it's good practice to keep dependencies current. No security advisories are known for this version, but newer releases contain fixes. Fix: Consider updating github.com/stretchr/testify to the latest version (v1.9.0 or later).
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.