sindresorhus/p-limit
Run multiple promise-returning & async functions with limited concurrency
Slowing — last commit 3mo ago
Permissive license, no critical CVEs, actively maintained — safe to depend on.
Has a license, tests, and CI — clean foundation to fork and modify.
Documented and popular — useful reference codebase to read through.
Scorecard "Branch-Protection" is 0/10; Scorecard "Token-Permissions" is 0/10
- ✓ Last commit 3mo ago
- ✓ 5 active contributors
- ✓ MIT licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Slowing — last commit 3mo ago
- ⚠ Small team — 5 top contributors
- ⚠ Single-maintainer risk — top contributor 90% of commits
- ⚠ Scorecard: default branch unprotected (0/10)
What would change the verdict?
- → Deploy as-is: WAIT → GO if "Branch-Protection" is raised to ≥3/10 (see the Scorecard report)
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard
Embed this verdict
[](https://repopilot.app/r/sindresorhus/p-limit) — paste this into your README; the badge live-updates from the latest cached analysis.
Onboarding doc
Onboarding: sindresorhus/p-limit
Generated by RepoPilot · 2026-05-05 · Source
Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/sindresorhus/p-limit shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
Verdict
WAIT — Slowing — last commit 3mo ago
- ✓ Last commit 3mo ago
- ✓ 5 active contributors
- ✓ MIT licensed
- ✓ CI configured
- ✓ Tests present
- ⚠ Slowing — last commit 3mo ago
- ⚠ Small team — 5 top contributors
- ⚠ Single-maintainer risk — top contributor 90% of commits
- ⚠ Scorecard: default branch unprotected (0/10)
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests + OpenSSF Scorecard</sub>
Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live sindresorhus/p-limit
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/sindresorhus/p-limit.
What it runs against: a local clone of sindresorhus/p-limit — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in sindresorhus/p-limit | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | Last commit ≤ 121 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of sindresorhus/p-limit. If you don't
# have one yet, run these first:
#
# git clone https://github.com/sindresorhus/p-limit.git
# cd p-limit
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of sindresorhus/p-limit and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "sindresorhus/p-limit(\.git)?\b" \
  && ok "origin remote is sindresorhus/p-limit" \
  || miss "origin remote is not sindresorhus/p-limit (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 121 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~91d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/sindresorhus/p-limit"
exit 1
fi
Each check prints ok: or FAIL:. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(./verify.sh || regenerate-and-retry).
TL;DR
p-limit is a lightweight ES module for running promise-returning and async functions with controlled concurrency. It acts as a task queue/throttle that prevents more than a specified number of promises from executing simultaneously, avoiding resource exhaustion when running thousands of async operations in parallel. The codebase is small and flat: index.js (implementation) and index.d.ts (TypeScript definitions), plus test.js (test suite) and benchmark.js (performance testing). No monorepo or complex internal structure: the entire module is a single default-export factory that returns a limit instance with .map() and .clearQueue() methods, an .activeCount property, and a configurable .concurrency property.
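The whole public surface funnels through that single factory. The sketch below mirrors typical usage; the `pLimit` function here is a minimal stand-in written for this document (not the library's actual source) so the snippet runs without installing the package. Real code would simply `import pLimit from 'p-limit'`.

```javascript
// Minimal stand-in for p-limit's factory: a sketch of the pattern,
// NOT the library's actual implementation.
function pLimit(concurrency) {
  const queue = []; // FIFO of deferred task starters
  let activeCount = 0;

  const next = () => {
    activeCount--;
    if (queue.length > 0) {
      queue.shift()(); // start the oldest waiting task
    }
  };

  const run = async (fn, resolve, reject, args) => {
    activeCount++;
    try {
      resolve(await fn(...args));
    } catch (error) {
      reject(error);
    } finally {
      next();
    }
  };

  return (fn, ...args) =>
    new Promise((resolve, reject) => {
      const start = () => run(fn, resolve, reject, args);
      if (activeCount < concurrency) {
        start();
      } else {
        queue.push(start);
      }
    });
}

// Usage mirrors the documented API: wrap each task in limit(...),
// and at most `concurrency` tasks run at any moment.
async function demo() {
  const limit = pLimit(2);
  let running = 0;
  let maxRunning = 0;

  const tasks = [1, 2, 3, 4, 5].map((n) =>
    limit(async () => {
      running++;
      maxRunning = Math.max(maxRunning, running);
      await new Promise((r) => setTimeout(r, 10)); // simulated I/O
      running--;
      return n * 2;
    })
  );

  const results = await Promise.all(tasks); // order is preserved
  return { results, maxRunning };
}

demo().then(({ results, maxRunning }) => {
  console.log(results, '| max concurrent:', maxRunning);
});
```

Note that Promise.all preserves input order regardless of which tasks finish first, which is why limited tasks compose cleanly with it.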
Who it's for
JavaScript/Node.js developers (Node 20+) who need to batch and rate-limit concurrent async operations—common in data processing pipelines, web scrapers, API clients, and batch job systems where unbounded concurrency would exhaust file descriptors or network connections.
Maturity & risk
Highly mature and production-ready. At v7.3.0, with TypeScript definitions included, full test coverage via ava in test.js, CI via .github/workflows/main.yml, and, as a Sindre Sorhus project, significant ecosystem adoption (its package keywords even position it against bluebird). Single maintainer, but extremely stable, with no breaking changes visible in recent versions.
Minimal risk: only one direct dependency (yocto-queue@^1.2.1 for queue management), well-scoped API surface, and no external services. Single-maintainer status is typical for Sindre Sorhus packages and has not prevented widespread adoption. No obvious breaking changes between versions based on the API design.
Active areas of work
No specific recent activity visible in the repo data provided, but the project is mature and stable; focus is likely on maintenance and minor feature requests rather than major development. The recipes.md file suggests documentation of advanced usage patterns is being curated.
Get running
git clone https://github.com/sindresorhus/p-limit.git
cd p-limit
npm install
npm test
Daily commands:
npm test # Run xo linter, ava tests, and tsd type tests
npm run benchmark # Run performance benchmarks (node benchmark.js)
Map of the codebase
- index.js: Core implementation: the pLimit factory function and limit instance methods (queue handling, activeCount tracking, clearQueue, concurrency getter/setter)
- index.d.ts: TypeScript definitions for the public API, defines type signatures for pLimit(), limit(), limit.map(), and configuration options including rejectOnClear
- test.js: Comprehensive test suite using ava; tests concurrency enforcement, queue clearing, activeCount/pendingCount tracking, error handling, and the .map() convenience method
- package.json: Declares the only production dependency (yocto-queue) and specifies Node 20+ as minimum. Exports both .js and .d.ts files.
- index.test-d.ts: TypeScript definition tests using tsd; ensures the type definitions are accurate and that TypeScript users get correct inference
How to make changes
Start with index.js (the main implementation ~100 lines) to understand the core queue and concurrency logic. For type changes, edit index.d.ts. For new tests, add cases to test.js following ava syntax. For type tests, modify index.test-d.ts. Do NOT edit package.json dependencies lightly—yocto-queue is the only prod dependency.
Traps & gotchas
Avoid nesting limited functions: calling the same limit() function inside a task already limited by it creates a deadlock (documented in README as 'Warning'). Use separate limiters for inner tasks. The rejectOnClear option defaults to false, meaning pending tasks remain unresolved—explicitly enable it if using Promise.all() and calling clearQueue().
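The nesting deadlock is easy to reproduce with any fixed-slot limiter. The sketch below uses a compact stand-in limiter and a hypothetical `raceTimeout` helper (both written for this document, not taken from p-limit) to observe the hang without blocking, then shows that a separate inner limiter avoids it.

```javascript
// Compact fixed-slot limiter (stand-in for illustration, not p-limit's source).
function pLimit(concurrency) {
  const queue = [];
  let active = 0;
  const next = () => {
    active--;
    const start = queue.shift();
    if (start) start();
  };
  return (fn) =>
    new Promise((resolve, reject) => {
      const start = () => {
        active++;
        Promise.resolve().then(fn).then(resolve, reject).finally(next);
      };
      if (active < concurrency) {
        start();
      } else {
        queue.push(start);
      }
    });
}

// Hypothetical helper: resolve to 'timed out' if `p` doesn't settle in time,
// so we can observe a deadlock without hanging the process.
const raceTimeout = (p, ms) =>
  Promise.race([p, new Promise((r) => setTimeout(() => r('timed out'), ms))]);

async function demo() {
  const limit = pLimit(1);

  // BAD: the outer task holds the only slot and awaits an inner task
  // queued on the SAME limiter, so the inner task can never start.
  const nested = limit(() => limit(async () => 'inner done'));
  const bad = await raceTimeout(nested, 50); // never settles on its own

  // GOOD: a separate limiter for inner tasks breaks the cycle.
  const outer = pLimit(1);
  const inner = pLimit(1);
  const ok = await outer(() => inner(async () => 'inner done'));

  return { bad, ok };
}

demo().then(({ bad, ok }) => {
  console.log('same limiter nested:', bad, '| separate inner limiter:', ok);
});
```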
Concepts to learn
- Semaphore (concurrency control) — p-limit implements a semaphore pattern—a counter-based mechanism that grants N slots for concurrent execution; understanding this is key to grasping why activeCount and pendingCount matter
- Promise queuing and microtask scheduling — The implementation must defer promise execution via queuing and manage when tasks dequeue as earlier ones complete; this requires understanding how Promise microtasks interact with the event loop
- Closure and function wrapping — The limit(fn, ...args) pattern returns a wrapped promise that delays fn's execution; mastering closures over the concurrency state is essential to understanding the implementation
- FIFO queue (yocto-queue dependency) — p-limit depends on yocto-queue for task sequencing; understanding FIFO ordering ensures predictable task execution fairness in your code
- AbortError and task cancellation — The rejectOnClear option rejects pending tasks with AbortError; knowing how to handle this exception is critical for proper cleanup in Promise.all() chains
- Deadlock prevention in concurrent systems — The README warns against nesting limit() calls within limited tasks; understanding why this causes deadlock (all slots blocked waiting for inner tasks) is a core concurrency lesson
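To make the semaphore concept above concrete, here is a generic counting semaphore with explicit acquire/release. Under this framing, p-limit's limit(fn) is essentially acquire, run fn, release bundled into one call. This is an illustrative sketch written for this document, not code from the repository.

```javascript
// A counting semaphore: N slots, acquire() awaits a free slot,
// release() hands the slot to the oldest waiter (FIFO ordering).
class Semaphore {
  constructor(slots) {
    this.free = slots;
    this.waiters = []; // FIFO queue of resolve callbacks
  }

  acquire() {
    if (this.free > 0) {
      this.free--;
      return Promise.resolve();
    }
    return new Promise((resolve) => this.waiters.push(resolve));
  }

  release() {
    const wake = this.waiters.shift();
    if (wake) {
      wake(); // hand the slot directly to the oldest waiter
    } else {
      this.free++;
    }
  }
}

async function demo() {
  const sem = new Semaphore(2);
  let active = 0;
  let peak = 0;

  // Equivalent of limit(fn): acquire, run, release (in finally).
  const work = async (n) => {
    await sem.acquire();
    try {
      active++;
      peak = Math.max(peak, active);
      await new Promise((r) => setTimeout(r, 5)); // simulated work
      return n;
    } finally {
      active--;
      sem.release();
    }
  };

  const out = await Promise.all([1, 2, 3, 4].map(work));
  return { out, peak };
}

demo().then(({ out, peak }) => {
  console.log(out, '| peak concurrency:', peak);
});
```

Releasing in a finally block is the key discipline: a thrown task must still give its slot back, or the limiter leaks capacity.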
Related repos
- sindresorhus/p-queue — More feature-rich alternative with priority queues, pause/resume, timeout, and retry capabilities; p-limit is simpler and more minimal
- sindresorhus/p-map — Companion for processing iterables with concurrency; p-limit's .map() is simpler but p-map has more control and accepts additional options
- getify/rar.js — Rate-limiting approach using tokens; solves the same concurrency problem but with token-bucket semantics instead of fixed-slot queueing
- sindresorhus/awesome — Sindre Sorhus's curated list of Node.js packages; p-limit appears in the async section as a canonical concurrency-control reference
PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add comprehensive TypeScript test coverage for edge cases in index.d.ts
The repo uses tsd for TypeScript definition testing (index.test-d.ts exists), but reviewing the test patterns suggests opportunities to add stricter type tests for the rejectOnClear option and various concurrency configurations. This ensures type safety for the options object pattern mentioned in the README but not fully type-tested.
- [ ] Review index.test-d.ts for existing TypeScript tests
- [ ] Add tsd tests for pLimit() with object options (concurrency + rejectOnClear combination)
- [ ] Add tsd tests for invalid concurrency values (0, negative, non-integer) to verify type errors
- [ ] Add tsd tests for the limit function return type with various async function signatures
- [ ] Run 'npm test' to ensure tsd validation passes
Add benchmarking test suite comparing p-limit concurrency levels against real-world scenarios
The repo has benchmark.js and scripts/benchmarker.js, but no automated benchmark regression tests in the CI pipeline (main.yml). Adding parameterized benchmark tests with thresholds would catch performance regressions and provide concrete data for optimization decisions.
- [ ] Review benchmark.js and scripts/benchmarker.js to understand current benchmark structure
- [ ] Create a new benchmark test file (e.g., benchmark.test.js) using ava with multiple concurrency levels (1, 5, 10, 50)
- [ ] Add assertions to track execution time and throughput metrics with reasonable thresholds
- [ ] Update .github/workflows/main.yml to run benchmark tests on each pull request
- [ ] Document benchmark results in a new BENCHMARKS.md section
Add clearQueue() method tests and abort behavior validation in test.js
The README documents clearQueue() with rejectOnClear option, but the current test.js likely has incomplete coverage for this critical feature. The AbortError rejection behavior and queue clearing edge cases need explicit test scenarios to prevent regressions.
- [ ] Review test.js to identify existing clearQueue() tests
- [ ] Add ava test: 'clearQueue() rejects pending promises with AbortError when rejectOnClear is true'
- [ ] Add ava test: 'clearQueue() silently clears queue when rejectOnClear is false'
- [ ] Add ava test: 'clearQueue() does not affect already-running promises'
- [ ] Add ava test: 'clearQueue() prevents queued promises from starting after clear'
- [ ] Run 'npm test' to verify all tests pass
Good first issues
- Add performance benchmarks comparing p-limit to alternatives (throat, p-queue) in benchmark.js and document results in recipes.md—currently only structure exists without comparative analysis
- Expand test.js coverage for edge cases: test rejectOnClear with AbortError propagation in Promise.all contexts, and test limit.concurrency setter behavior when reducing concurrency mid-flight
- Add example recipes in recipes.md for common patterns: timeout-with-retry, exponential backoff, and queue priority handling using multiple limit instances
Top contributors
- @sindresorhus — 57 commits
- @jdmarshall — 2 commits
- @LinusU — 2 commits
- @davbrito — 1 commit
- @liuhanqu — 1 commit
Recent commits
- 9f52583 — Fix CI (sindresorhus)
- 886bda5 — 7.3.0 (sindresorhus)
- 8907801 — Add rejectOnClear option (sindresorhus)
- ce9d71c — Add test for shared context provider pattern (sindresorhus)
- 24503f9 — Document recursive limiter deadlocks (sindresorhus)
- 9bce97a — Add recipes documentation (sindresorhus)
- 870db0f — Support options object in pLimit() (sindresorhus)
- 1fb1407 — Fix benchmarks (sindresorhus)
- 7bdd25c — 7.2.0 (sindresorhus)
- d76231b — Make .map() method accept an iterable, not just array (#98) (davbrito)
Security observations
The p-limit package has a strong security posture. It's a well-maintained, minimal utility library with a single production dependency (yocto-queue). No hardcoded secrets, injection vulnerabilities, or critical misconfigurations were detected. The main recommendations focus on dependency management best practices: ensure lock files are used and regularly audit dependencies for vulnerabilities. The package follows secure defaults and has clear, well-documented APIs with no apparent security anti-patterns in the structure.
- Low · Dependency on yocto-queue without lock-file visibility — package.json, dependencies.yocto-queue. The package depends on yocto-queue (^1.2.1) with a caret version constraint, allowing minor and patch updates automatically. Without a committed lock file (package-lock.json or yarn.lock), there is a potential risk of installing a compromised version of this dependency. Fix: commit a lock file and use it during CI/CD, prefer npm ci over npm install in production environments, and monitor yocto-queue for security advisories.
- Low · Development dependencies may contain vulnerabilities — package.json, devDependencies. Multiple development dependencies (xo, ava, tsd, etc.) are included with caret version constraints. While dev dependencies are less critical than production dependencies, they execute during build and test phases and should still be monitored for vulnerabilities. Fix: regularly run npm audit to check dev dependencies, and consider pinning specific versions rather than caret ranges for better control over updates.
LLM-derived; treat as a starting point, not a security audit.
Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.