jsr-io/jsr
The open-source package registry for modern JavaScript and TypeScript
Healthy across all four use cases
- Permissive license, no critical CVEs, actively maintained — safe to depend on.
- Has a license, tests, and CI — a clean foundation to fork and modify.
- Documented and popular — a useful reference codebase to read through.
- No critical CVEs, sane security posture — runnable as-is.
- ✓ Last commit 1w ago
- ✓ 23+ active contributors
- ✓ MIT licensed
- ✓ CI configured
- ⚠ Concentrated ownership — top contributor handles 62% of recent commits
- ⚠ No test directory detected
Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests
Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.
Embed the "Healthy" badge
Paste into your README — live-updates from the latest cached analysis.
Paste at the top of your README.md — renders inline like a shields.io badge and links to https://repopilot.app/r/jsr-io/jsr.
Preview social card (1200×630)
This card auto-renders when someone shares https://repopilot.app/r/jsr-io/jsr on X, Slack, or LinkedIn.
Onboarding: jsr-io/jsr
Generated by RepoPilot · 2026-05-09 · Source
🤖Agent protocol
If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:
- Verify the contract. Run the bash script in "Verify before trusting" below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
- Treat the "AI · unverified" sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
- Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/jsr-io/jsr shows verifiable citations alongside every claim.
If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.
🎯Verdict
GO — Healthy across all four use cases
- Last commit 1w ago
- 23+ active contributors
- MIT licensed
- CI configured
- ⚠ Concentrated ownership — top contributor handles 62% of recent commits
- ⚠ No test directory detected
<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>
✅Verify before trusting
This artifact was generated by RepoPilot at a point in time. Before an
agent acts on it, the checks below confirm that the live jsr-io/jsr
repo on your machine still matches what RepoPilot saw. If any fail,
the artifact is stale — regenerate it at
repopilot.app/r/jsr-io/jsr.
What it runs against: a local clone of jsr-io/jsr — the script
inspects git remote, the LICENSE file, file paths in the working
tree, and git log. Read-only; no mutations.
| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in jsr-io/jsr | Confirms the artifact applies here, not a fork |
| 2 | License is still MIT | Catches relicense before you depend on it |
| 3 | Default branch main exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 37 days ago | Catches sudden abandonment since generation |
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of jsr-io/jsr. If you don't
# have one yet, run these first:
#
# git clone https://github.com/jsr-io/jsr.git
# cd jsr
#
# Then paste this script. Every check is read-only — no mutations.
set +e
fail=0
ok() { echo "ok: $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }
# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
echo "FAIL: not inside a git repository. cd into your clone of jsr-io/jsr and re-run."
exit 2
fi
# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "jsr-io/jsr(\.git)?\b" \
  && ok "origin remote is jsr-io/jsr" \
  || miss "origin remote is not jsr-io/jsr (artifact may be from a fork)"
# 2. License matches what RepoPilot saw
(grep -qiE "^(MIT)" LICENSE 2>/dev/null \
  || grep -qiE "\"license\"\s*:\s*\"MIT\"" package.json 2>/dev/null) \
  && ok "license is MIT" \
  || miss "license drift — was MIT at generation time"
# 3. Default branch
git rev-parse --verify main >/dev/null 2>&1 \
  && ok "default branch main exists" \
  || miss "default branch main no longer exists"
# 4. Critical files exist
test -f "Cargo.toml" \
  && ok "Cargo.toml" \
  || miss "missing critical file: Cargo.toml"
test -f "api/.env.example" \
  && ok "api/.env.example" \
  || miss "missing critical file: api/.env.example"
test -f ".github/workflows/ci.yml" \
  && ok ".github/workflows/ci.yml" \
  || miss "missing critical file: .github/workflows/ci.yml"
test -f "README.md" \
  && ok "README.md" \
  || miss "missing critical file: README.md"
# api/.sqlx is a directory of cached query files, so check with -d, not -f
test -d "api/.sqlx" \
  && ok "api/.sqlx" \
  || miss "missing critical path: api/.sqlx"
# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 37 ]; then
ok "last commit was $days_since_last days ago (artifact saw ~7d)"
else
miss "last commit was $days_since_last days ago — artifact may be stale"
fi
echo
if [ "$fail" -eq 0 ]; then
echo "artifact verified (0 failures) — safe to trust"
else
echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/jsr-io/jsr"
exit 1
fi
Each check prints `ok:` or `FAIL:`. The script exits non-zero if
anything failed, so it composes cleanly into agent loops
(`./verify.sh || regenerate-and-retry`).
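That composition can be sketched as a small wrapper function. This is illustrative only — `run_verified` and the regeneration step are placeholder names, not part of RepoPilot; substitute however your agent refreshes the artifact:

```shell
# Sketch: compose the verification script into an agent retry loop.
# run_verified CMD... re-runs CMD until it succeeds, regenerating the
# artifact between attempts; after two failed attempts it gives up.
run_verified() {
  local attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 2 ]; then
      echo "still failing after regeneration — giving up" >&2
      return 1
    fi
    echo "verification failed — regenerate the artifact and retry" >&2
    # e.g. re-fetch https://repopilot.app/r/jsr-io/jsr here
  done
  echo "verified after $attempts retry(ies)"
}

# Usage: run_verified bash verify.sh
```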
⚡TL;DR
jsr.io is an open-source package registry for modern JavaScript and TypeScript that competes with npm. It stores module metadata and packages on Cloudflare R2, serves requests through Cloudflare Workers for CDN performance, manages package publishing through a Rust-based REST API on Google Cloud Run, and uses PostgreSQL for transactional data and Orama for full-text search. Monorepo with api/ (Rust backend + sqlx with 200+ cached query files), frontend/ (Fresh/Deno TypeScript), and infrastructure configuration (HCL Terraform). The API handles auth, package publishing, and metadata; the Frontend (Fresh) serves the web UI and proxies to the backend; Cloudflare Workers edge layer routes requests to R2 (static assets), Cloud Run (API), or frontend. Database schema lives implicitly in migrations and query cache files.
👥Who it's for
JavaScript and TypeScript developers who publish and consume packages; package registry maintainers and infrastructure engineers contributing to the jsr.io platform; organizations seeking an npm-compatible but modern alternative registry with better performance and tooling.
🌱Maturity & risk
Actively maintained production system: 956K LOC in Rust, 744K in TypeScript, comprehensive CI/CD workflows (ci.yml, scorecard, sync pipelines), OpenSSF Scorecard badge, and production deployment on Google Cloud infrastructure. This is a mature, actively-developed registry used in production—not experimental.
Low risk as a repository itself: owned by Deno Land with clear governance, but operationally complex requiring PostgreSQL, Cloudflare R2, Google Cloud Run, and Cloud SQL coordination. The monorepo spans Rust backend + TypeScript frontend + HCL infrastructure-as-code; changes to database schemas (sqlx queries cached in .sqlx/) require careful coordination and schema migrations. No visible single-maintainer bottleneck based on the GitHub workflow setup.
Active areas of work
Active development on search indexing (orama_packages_reindex.yml, orama_symbols_reindex.yml workflows), license data syncing (sync-license-data.yml), type definitions syncing (sync-types.yml), and PR automation (title.yml validation). The presence of multiple specialized reindex workflows suggests ongoing work on search UX and package metadata enrichment.
🚀Get running
Clone the repo and install Deno and Rust. Run the dev setup task for the frontend (adds /etc/hosts entries), then `deno task prod:frontend` to run the frontend only against the production API. For the full stack: install PostgreSQL (or use docker-compose), create api/.env from the template, run `sqlx migrate run` to set up the database, then `cargo run` in api/ and `deno task dev` in frontend/.
Daily commands:
Frontend only: `deno task prod:frontend` (connects to the production API at jsr.test). Full-stack local dev: set up .env in api/, run `sqlx migrate run` to initialize the DB, then `cargo run` in the api/ folder (starts on localhost:8080) and `deno task dev` in the frontend/ folder (Fresh dev server). Docker Compose is defined for Linux (Postgres service).
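The full-stack sequence, collected in one place. The commands below are assembled from this section's description — treat them as a sketch and verify the task names against the repo's README before running:

```shell
# Full-stack local dev, per the steps above (sketch — verify against README)
git clone https://github.com/jsr-io/jsr.git && cd jsr

# Backend: configure env, start Postgres, migrate, run the API
cp api/.env.example api/.env      # then fill in the required values
docker compose up -d              # Postgres service (Linux setup)
(cd api && sqlx migrate run)      # initialize the database schema
(cd api && cargo run) &           # API on localhost:8080

# Frontend: Fresh dev server, proxying to the local API
(cd frontend && deno task dev)
```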
🗺️Map of the codebase
- Cargo.toml — Workspace root configuration defining the Rust API and macros members; essential for understanding build setup and dependencies
- api/.env.example — Template for environment variables needed to run the management API; every developer must configure this to test locally
- .github/workflows/ci.yml — Primary CI/CD pipeline orchestrating tests, builds, and deployments across the Rust API and frontend; defines the development workflow
- README.md — Project overview explaining the architecture (Rust API on Cloud Run, Fresh frontend, Cloudflare R2 storage, Workers routing); foundational context
- api/.sqlx — SQLx compiled query metadata directory; required for understanding database schema and query validation at compile time
🧩Components & responsibilities
- Cloudflare Workers Router (Cloudflare Workers, JavaScript) — Entry point for all jsr.io traffic; routes requests to R2 (modules/tarballs), API (management), or frontend; implements cache headers
- Failure mode: If router fails, all jsr.io traffic is unreachable; no fallback path to origin
- Rust API Server (Rust, Tokio, SQLx, PostgreSQL) — REST API for package publishing, metadata updates, authentication, and search; queries PostgreSQL and returns JSON
- Failure mode: API outage blocks package publishing and metadata updates; frontend can still serve cached package data from R2
- PostgreSQL Database (PostgreSQL, SQLx) — Source of truth for all metadata, user accounts, publish logs, and search indexes; compiled queries in .sqlx/ ensure type safety
- Failure mode: Database outage prevents publishing and metadata updates; read-only access via R2 caches continues
- Cloudflare R2 Storage (Cloudflare R2, S3-compatible API) — Global object storage for all module files, npm tarballs, and metadata snapshots; served directly at edge
- Failure mode: R2 outage prevents module/tarball downloads; API becomes bottleneck for metadata retrieval
- Fresh Frontend (Fresh, Deno, TypeScript) — Web UI for package discovery, documentation viewing, account management; server-side rendered on Cloud Run
- Failure mode: Frontend outage prevents web browsing; API and direct module downloads remain operational
🔀Data flow
- Browser/npm client → Cloudflare Workers — HTTP request for a package, module, or API endpoint
- Cloudflare Workers → Cloudflare R2 — Route GET requests for .js/.ts modules and npm tarballs directly to R2; serve with cache headers
- Cloudflare Workers → Rust API Server — Proxy POST/PUT/DELETE /api/* requests to the Cloud Run API for publishing, authentication, and metadata updates
- Rust API Server → PostgreSQL — Execute compiled SQLx queries to read/write package metadata, user records, and publish logs
- Rust API Server → Cloudflare R2 — Upload compiled modules and npm tarballs after a successful package publish
🛠️How to make changes
Add a new package registry API endpoint
- Define the request/response structure and database query in api/src/ (api/Cargo.toml)
- Create the handler function using SQLx queries in api/src/handlers/ (api/.sqlx/query-*.json)
- Register the route in the API server main.rs (api/.env.example)
- Run `sqlx prepare` to compile queries and add them to .sqlx/query-*.json (api/.sqlx/query-*.json)
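The query-cache regeneration in the last step can be sketched as commands. This assumes sqlx-cli is installed and DATABASE_URL points at your local database, as configured via api/.env:

```shell
# Regenerate the SQLx offline query cache after editing queries in api/src/
cd api
cargo sqlx prepare           # rewrites .sqlx/query-*.json from the live DB
git add .sqlx                # the cache files are committed alongside the code

# CI-style drift check: fails if .sqlx/ no longer matches the source
cargo sqlx prepare --check
```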
Configure a new environment or deployment target
- Create environment-specific variables by copying api/.env.example (api/.env.example)
- Add a deployment workflow in .github/workflows/ (.github/workflows/ci.yml)
- Update CI to test the new configuration (.github/workflows/ci.yml)
Update package indexing or search functionality
- Modify the reindexing logic in the scheduled workflow (.github/workflows/orama_packages_reindex.yml)
- Update the corresponding API handler in api/src/ (api/.env.example)
- Rebuild SQLx queries if the database schema changes (api/.sqlx/query-*.json)
🔧Why these technologies
- Rust (Tokio) — High-performance, memory-safe management API for handling concurrent package metadata requests and database operations
- PostgreSQL + SQLx — Reliable relational storage for package metadata with compile-time query validation; SQLx metadata in .sqlx/ ensures type safety
- Cloudflare Workers + R2 — Global CDN edge caching and object storage for modules, tarballs, and metadata; reduces latency and compute costs
- Google Cloud Run — Serverless container runtime for stateless API; auto-scales and fits low-maintenance, cost-efficient operational goals
- Fresh (Deno) — Modern TypeScript frontend framework for the web UI; likely used for package browsing and registry navigation
⚖️Trade-offs already made
- Cloudflare Workers as primary request router instead of API-direct URLs
  - Why: Centralizes traffic through edge workers for caching, routing logic, and DDoS mitigation
  - Consequence: Adds a routing layer but reduces backend load; improves cache hit rates at the expense of added operational complexity
- R2 for all module/tarball storage instead of database BLOBs
  - Why: Object storage is cheaper and faster for large binary payloads than a relational database
  - Consequence: Simpler database schema and lower costs; requires separate CDN cache-invalidation logic
- Compile-time SQLx query validation in .sqlx/ instead of a runtime ORM
  - Why: Ensures queries are valid at compile time and naturally prevents SQL injection
  - Consequence: Requires offline-mode setup and .sqlx metadata; eliminates entire classes of query errors at the cost of dev-setup friction
- Open-source codebase with a public GitHub repo
  - Why: Aligns with the 'open source' goal and increases community trust and contributions
  - Consequence: Requires careful secret management in CI/CD (env vars, API keys); enables external security audits
🚫Non-goals (don't propose these)
- Real-time package synchronization from npm (intentional separate registry)
- Support for non-JavaScript/TypeScript packages (JavaScript/TypeScript only)
- Private package registry hosting (public-facing)
- Backward compatibility with legacy npm client versions
🪤Traps & gotchas
- Database schema changes require an `sqlx prepare` step after modifying queries — the .sqlx/ cache must be regenerated or deployments fail with a query mismatch.
- GitHub App OAuth credentials must be configured in api/.env for login to work; see .env.example for the required fields.
- Frontend dev mode requires an /etc/hosts entry (jsr.test); the setup task configures this but can break with sudo cache issues.
- PostgreSQL must be running (Docker on Linux, local brew on macOS) before the API starts; there is no auto-startup.
- Cloudflare R2 and Google Cloud credentials are production-only; local dev skips object storage, but some features may fail silently.
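A preflight sketch covering the local-dev gotchas above. The file paths and the jsr.test hostname come from this document; the function itself is illustrative, not part of the repo:

```shell
# Preflight: catch the common local-dev gotchas before starting the API.
preflight() {
  local bad=0
  # OAuth credentials live in api/.env (copied from api/.env.example)
  [ -f api/.env ] \
    || { echo "missing api/.env — copy api/.env.example and fill it in"; bad=1; }
  # Frontend dev mode needs a jsr.test hosts entry
  grep -q "jsr.test" /etc/hosts 2>/dev/null \
    || { echo "no jsr.test entry in /etc/hosts — run the setup task"; bad=1; }
  # Postgres must already be running (checked only if pg_isready is available)
  if command -v pg_isready >/dev/null 2>&1; then
    pg_isready -q \
      || { echo "PostgreSQL is not accepting connections — start it first"; bad=1; }
  fi
  if [ "$bad" -eq 0 ]; then echo "preflight ok"; fi
  return "$bad"
}
```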
🏗️Architecture
💡Concepts to learn
- Content-Addressable Storage (R2) — jsr.io stores all package tarballs and module metadata immutably in Cloudflare R2 keyed by hash or version; understanding CAD patterns is essential for grasping how the registry achieves cheap, reliable storage without database bloat
- Edge Computing / Cloudflare Workers — The registry's routing and request handling happens at Cloudflare's edge, not on a central server; this affects cache invalidation, latency, and deployment strategy
- sqlx (compile-time SQL type checking) — The Rust backend uses sqlx with offline mode (.sqlx/ cache); queries are checked at compile-time, not runtime, so understanding the prepare workflow is critical for any API changes
- Semantic Versioning & Package Resolution — As a package registry, jsr must implement version-range resolution, semver parsing, and conflict detection; these algorithms power the `deno add` CLI and dependency resolution
- Full-Text Search (Orama) — Package discovery on jsr.io uses Orama, not a simple database query; the reindex workflows (orama_packages_reindex.yml) show search is a first-class system with scheduled refresh
- PostgreSQL Distributed Transactions & Schema Migrations — The registry metadata lives in a distributed Cloud SQL instance; migrations (api/migrations/) must be carefully coordinated across deployments to avoid schema mismatch with cached sqlx queries
- OAuth 2.0 / GitHub as Auth Provider — User authentication is delegated to GitHub via OAuth app; the flow handles token exchange, user data fetch, and session management—critical for understanding how jsr gates publish and account access
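The content-addressed storage concept above can be demonstrated in a few lines of shell. This is a toy illustration of hash-keyed, immutable storage — not jsr's actual R2 key scheme, which is not documented here:

```shell
# Toy content-addressed store: objects are keyed by the SHA-256 of their
# contents, so identical files dedupe and a key's contents never change.
cas_store() {  # cas_store <file> <storedir>  -> prints the key
  local key
  key=$(sha256sum "$1" | cut -d' ' -f1)
  cp "$1" "$2/$key"
  echo "$key"
}

cas_fetch() {  # cas_fetch <key> <storedir>  -> prints the contents
  cat "$2/$1"
}
```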
🔗Related repos
- npm/npm — The original Node Package Manager that jsr.io is designed to be a modern alternative to; understanding npm's architecture informs jsr design decisions
- denoland/deno — Deno is the runtime and tool ecosystem behind jsr and Fresh; jsr is the official package registry for Deno
- denoland/fresh — The web framework powering the jsr.io frontend UI; familiarity with Fresh routing and components is required for frontend work
- pnpm/pnpm — Competing modern package manager with a different storage/dedup strategy; a reference for comparing registry design philosophies
- cloudflare/workers-sdk — Tools and types for the Cloudflare Workers edge routing layer that serves jsr.io; needed for modifying request handling
🪄PR ideas
To work on one of these in Claude Code or Cursor, paste:
Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.
Add integration tests for API database queries in api/.sqlx directory
The repo contains 100+ pre-generated sqlx query metadata files (api/.sqlx/query-*.json) but no visible test suite for these database operations. A new contributor could add comprehensive integration tests that validate the compiled SQL queries against a test PostgreSQL instance, ensuring query correctness and catching migration-related regressions early.
- [ ] Create api/tests/db_integration.rs with test fixtures
- [ ] Set up test database schema using existing migrations
- [ ] Write tests for high-risk queries (auth, package publishing, permissions)
- [ ] Add test database setup instructions to CONTRIBUTING.md
- [ ] Integrate test runner into .github/workflows/ci.yml
Add Cargo workflow validation and Rust static analysis checks
The repo has a Rust API component (Cargo.toml, api/.rustfmt.toml) but the CI workflow (.github/workflows/ci.yml) appears minimal. A contributor could add Rust-specific checks like clippy linting, cargo audit for security vulnerabilities, and coverage reporting to catch code quality issues before merge.
- [ ] Extend .github/workflows/ci.yml with 'cargo clippy' step with deny-level warnings
- [ ] Add 'cargo audit' step to check for known security vulnerabilities in dependencies
- [ ] Add 'cargo fmt --check' to enforce formatting (api/.rustfmt.toml exists)
- [ ] Optionally add tarpaulin or llvm-cov for code coverage reporting
- [ ] Document Rust setup requirements in README.md
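The proposed CI steps have direct local equivalents, runnable from the api/ directory before opening a PR (cargo-audit is a separate one-time install):

```shell
# Local versions of the proposed CI checks (run from api/)
cargo fmt --check            # formatting, per api/.rustfmt.toml
cargo clippy -- -D warnings  # lint with deny-level warnings
cargo install cargo-audit    # one-time setup for the audit step
cargo audit                  # known CVEs in Cargo.lock
```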
Create SECURITY.md incident response and vulnerability disclosure workflow
While .github/SECURITY.md exists, there's no visible incident response process or clear vulnerability disclosure guidance. A contributor could establish a formal security policy including response timelines, PGP keys for reporting, and a CVSS scoring threshold for public disclosure, critical for a package registry handling supply chain security.
- [ ] Expand .github/SECURITY.md with: supported versions, reporting process, expected response times
- [ ] Add security contact email and PGP key fingerprint for encrypted reports
- [ ] Document the vulnerability assessment and patching timeline
- [ ] Link to OpenSSF best practices (repo already has Scorecard badge)
- [ ] Add security checklist for releases (e.g., dependency audit, migration testing)
🌿Good first issues
- Add integration tests for the sqlx queries in api/ — currently 200+ query files exist in .sqlx/ but no visible test files; writing tests for core queries (package lookups, publish transactions) would improve confidence and docs. Why: visible test gap for a critical query-heavy backend
- Document the sqlx query caching workflow in a CONTRIBUTING.md file — new contributors are not told that modifying api/src/*.rs requires running `cargo sqlx prepare`, or that .sqlx/ files are auto-generated; a simple step-by-step guide would save debugging time. Why: onboarding friction point specific to this sqlx setup
- Add TypeScript strict-mode checks to frontend/ and audit any 'any' types in route handlers (routes/ folder) — no tsconfig.json strictness settings are visible in the file list, and a registry with auth and user data should have tight types. Why: security and maintainability improvement for the user-facing frontend handling auth
⭐Top contributors
- @crowlKats — 62 commits
- @KnorpelSenf — 6 commits
- @piscisaureus — 4 commits
- @AugustinMauroy — 3 commits
- @Ionaru — 3 commits
📝Recent commits
- 9014974 — fix: update deno_doc for stack overflow (#1382) (crowlKats)
- a1456fe — fix: improve Cloudflare cache hit rate (#1373) (crowlKats)
- d7e0f00 — perf: improve container startup time (#1362) (crowlKats)
- 8f0532d — feat: increase frontend max instance count to 20 (#1375) (avocet-bot)
- e6f5fe1 — chore: bump sql database to 4 vCPU / 8 GB RAM (#1374) (avocet-bot)
- 7c8d29a — docs: improve documentation for slow types, publishing, packages, and badges (#1366) (crowlKats)
- dc3343b — perf: optimize slow database queries and add covering indexes (#1372) (crowlKats)
- 9b52853 — chore: add per-branch database switching tool (#1363) (crowlKats)
- d49e970 — perf: improve dependent count query (#1370) (crowlKats)
- f7f2dc9 — perf: cache dependent count query to reduce database load (#1369) (piscisaureus)
🔒Security observations
The jsr.io codebase demonstrates reasonable security practices with OpenSSF Scorecard integration and structured workflows. Primary concerns are around environment configuration exposure, dependency management, and GitHub Actions security. The use of Rust (memory-safe) and compile-time SQL verification (SQLx) provides good security foundations. Key recommendations: implement automated dependency auditing, ensure proper secret management practices, audit all GitHub Actions workflows, and maintain strict .gitignore policies for sensitive files. The infrastructure choices (Cloudflare R2, Google Cloud Run) are security-conscious. No critical vulnerabilities were identified from the file structure analysis, but continuous monitoring of dependencies and access controls is essential.
- Medium · Environment Configuration File Exposure — api/.env.example. The presence of a .env.example file indicates environment-variable usage. If actual .env files containing secrets are committed to version control or exposed in builds, sensitive credentials could be leaked. Fix: ensure .env files are in .gitignore, use secrets-management systems (e.g., Google Cloud Secret Manager, HashiCorp Vault), and implement automated secret scanning in CI/CD pipelines.
- Medium · SQLx Cached Query Files — api/.sqlx/ (multiple query cache files). The .sqlx directory contains numerous cached query JSON files. While SQLx provides compile-time SQL verification, these cached files could potentially contain sensitive schema information or data patterns; if exposed in production artifacts, they could leak database structure details. Fix: exclude .sqlx cache files from production builds and container images, review .gitignore policies, use SQLx offline mode carefully, and ensure compiled binaries don't expose schema information.
- Medium · Workspace Configuration Visibility — Cargo.toml, Cargo.lock. The Cargo.toml workspace configuration and Cargo.lock file are committed to the repository. While necessary for reproducibility, dependencies should be regularly audited for known vulnerabilities. Fix: implement automated dependency scanning using tools like 'cargo audit', 'cargo-crev', or GitHub's Dependabot; review and update dependencies regularly; consider a security-audit step in the CI/CD pipeline.
- Low · Multiple Workflow Files Without Visibility Review — .github/workflows/ (ci.yml, orama_packages_reindex.yml, orama_symbols_reindex.yml, scorecard.yml, sync-license-data.yml, sync-types.yml, title.yml). Several GitHub Actions workflows are present; without reviewing their content, issues such as insecure use of secrets, running untrusted code, or insufficient access controls could exist. Fix: review all workflow files for proper secret handling, OIDC token usage where applicable, pinned action versions, and the principle of least privilege; enable GitHub Actions security features (required reviews, branch protection).
- Low · License Configuration File — .licenserc.json. The presence of .licenserc.json suggests automated license-compliance checking. While beneficial, misconfiguration could allow unvetted licenses or dependencies. Fix: review license-compliance rules regularly, ensure only approved licenses are whitelisted, and integrate license scanning into CI/CD to prevent non-compliant dependencies.
- Low · VSCode Configuration Committed — .vscode/settings.json, .vscode/tailwind.json. VSCode settings are committed to the repository. While generally safe, this could enforce settings that weaken security practices if misconfigured. Fix: review committed VSCode settings for security-related configuration; consider moving security-sensitive settings to documentation rather than enforced configuration.
LLM-derived; treat as a starting point, not a security audit.
👉Where to read next
- Open issues — current backlog
- Recent PRs — what's actively shipping
- Source on GitHub
Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.