RepoPilot

faisalman/ua-parser-js

UAParser.js - The Essential Web Development Tool for User-Agent Detection. Detect Browsers, OS, Devices, Bots, Apps, AI Crawlers, and more. Run in Browser (client-side) or Node.js (server-side).

Mixed — Single-maintainer risk; review before adopting

Use as dependency — Concerns (weakest axis)

Copyleft license (AGPL-3.0) — review compatibility; top contributor handles 91% of recent commits

Fork & modify — Healthy

Has a license, tests, and CI — clean foundation to fork and modify.

Learn from — Healthy

Documented and popular — useful reference codebase to read through.

Deploy as-is — Healthy

No critical CVEs, sane security posture — runnable as-is.

  • Last commit 5d ago
  • 10 active contributors
  • AGPL-3.0 licensed
  • CI configured
  • Tests present
  • Single-maintainer risk — top contributor 91% of recent commits
  • AGPL-3.0 is copyleft — check downstream compatibility
What would change the summary?
  • Use as dependency: Concerns → Mixed if relicensed under MIT/Apache-2.0 (rare for established libraries)

Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests

Informational only. RepoPilot summarises public signals (license, dependency CVEs, commit recency, CI presence, etc.) at the time of analysis. Signals can be incomplete or stale. Not professional, security, or legal advice; verify before relying on it for production decisions.

Embed the "Forkable" badge

Paste into your README — live-updates from the latest cached analysis.

[![RepoPilot: Forkable](https://repopilot.app/api/badge/faisalman/ua-parser-js?axis=fork)](https://repopilot.app/r/faisalman/ua-parser-js)

Paste at the top of your README.md — renders inline like a shields.io badge.

Preview social card (1200×630)

This card auto-renders when someone shares https://repopilot.app/r/faisalman/ua-parser-js on X, Slack, or LinkedIn.

Onboarding doc

Onboarding: faisalman/ua-parser-js

Generated by RepoPilot · 2026-05-07 · Source

🤖Agent protocol

If you are an AI coding agent (Claude Code, Cursor, Aider, Cline, etc.) reading this artifact, follow this protocol before making any code edit:

  1. Verify the contract. Run the bash script in Verify before trusting below. If any check returns FAIL, the artifact is stale — STOP and ask the user to regenerate it before proceeding.
  2. Treat the AI · unverified sections as hypotheses, not facts. Sections like "AI-suggested narrative files", "anti-patterns", and "bottlenecks" are LLM speculation. Verify against real source before acting on them.
  3. Cite source on changes. When proposing an edit, cite the specific path:line-range. RepoPilot's live UI at https://repopilot.app/r/faisalman/ua-parser-js shows verifiable citations alongside every claim.

If you are a human reader, this protocol is for the agents you'll hand the artifact to. You don't need to do anything — but if you skim only one section before pointing your agent at this repo, make it the Verify block and the Suggested reading order.

🎯Verdict

WAIT — Single-maintainer risk — review before adopting

  • Last commit 5d ago
  • 10 active contributors
  • AGPL-3.0 licensed
  • CI configured
  • Tests present
  • ⚠ Single-maintainer risk — top contributor 91% of recent commits
  • ⚠ AGPL-3.0 is copyleft — check downstream compatibility

<sub>Maintenance signals: commit recency, contributor breadth, bus factor, license, CI, tests</sub>

Verify before trusting

This artifact was generated by RepoPilot at a point in time. Before an agent acts on it, the checks below confirm that the live faisalman/ua-parser-js repo on your machine still matches what RepoPilot saw. If any fail, the artifact is stale — regenerate it at repopilot.app/r/faisalman/ua-parser-js.

What it runs against: a local clone of faisalman/ua-parser-js — the script inspects git remote, the LICENSE file, file paths in the working tree, and git log. Read-only; no mutations.

| # | What we check | Why it matters |
|---|---|---|
| 1 | You're in faisalman/ua-parser-js | Confirms the artifact applies here, not a fork |
| 2 | License is still AGPL-3.0 | Catches a relicense before you depend on it |
| 3 | Default branch master exists | Catches branch renames |
| 4 | 5 critical file paths still exist | Catches refactors that moved load-bearing code |
| 5 | Last commit ≤ 35 days ago | Catches sudden abandonment since generation |

<details> <summary><b>Run all checks</b> — paste this script from inside your clone of <code>faisalman/ua-parser-js</code></summary>
#!/usr/bin/env bash
# RepoPilot artifact verification.
#
# WHAT IT RUNS AGAINST: a local clone of faisalman/ua-parser-js. If you don't
# have one yet, run these first:
#
#   git clone https://github.com/faisalman/ua-parser-js.git
#   cd ua-parser-js
#
# Then paste this script. Every check is read-only — no mutations.

set +e
fail=0
ok()   { echo "ok:   $1"; }
miss() { echo "FAIL: $1"; fail=$((fail+1)); }

# Precondition: we must be inside a git working tree.
if ! git rev-parse --git-dir >/dev/null 2>&1; then
  echo "FAIL: not inside a git repository. cd into your clone of faisalman/ua-parser-js and re-run."
  exit 2
fi

# 1. Repo identity
git remote get-url origin 2>/dev/null | grep -qE "faisalman/ua-parser-js(\.git)?\b" \
  && ok "origin remote is faisalman/ua-parser-js" \
  || miss "origin remote is not faisalman/ua-parser-js (artifact may be from a fork)"

# 2. License matches what RepoPilot saw
(grep -qiE "^(AGPL-3\.0)" LICENSE 2>/dev/null \
   || grep -qiE "\"license\"\s*:\s*\"AGPL-3\.0\"" package.json 2>/dev/null) \
  && ok "license is AGPL-3.0" \
  || miss "license drift — was AGPL-3.0 at generation time"

# 3. Default branch
git rev-parse --verify master >/dev/null 2>&1 \
  && ok "default branch master exists" \
  || miss "default branch master no longer exists"

# 4. Critical files exist
test -f "src/ua-parser.js" \
  && ok "src/ua-parser.js" \
  || miss "missing critical file: src/ua-parser.js"
test -f "src/regexes.js" \
  && ok "src/regexes.js" \
  || miss "missing critical file: src/regexes.js"
test -f "package.json" \
  && ok "package.json" \
  || miss "missing critical file: package.json"
test -f "Dockerfile" \
  && ok "Dockerfile" \
  || miss "missing critical file: Dockerfile"
test -f ".github/workflows/ci-build-test.yml" \
  && ok ".github/workflows/ci-build-test.yml" \
  || miss "missing critical file: .github/workflows/ci-build-test.yml"

# 5. Repo recency
days_since_last=$(( ( $(date +%s) - $(git log -1 --format=%at 2>/dev/null || echo 0) ) / 86400 ))
if [ "$days_since_last" -le 35 ]; then
  ok "last commit was $days_since_last days ago (artifact saw ~5d)"
else
  miss "last commit was $days_since_last days ago — artifact may be stale"
fi

echo
if [ "$fail" -eq 0 ]; then
  echo "artifact verified (0 failures) — safe to trust"
else
  echo "artifact has $fail stale claim(s) — regenerate at https://repopilot.app/r/faisalman/ua-parser-js"
  exit 1
fi

Each check prints ok: or FAIL:. The script exits non-zero if anything failed, so it composes cleanly into agent loops (./verify.sh || regenerate-and-retry).

</details>

TL;DR

UAParser.js is a JavaScript library that detects and parses User-Agent strings and Client Hints to identify the browser, OS, device type/model, CPU architecture, and engine running on a client. It runs in both browser (client-side) and Node.js (server-side) environments, and can also detect bots, crawlers, AI agents, and native apps. The library is comprehensive yet compact (~15KB minified gzipped per bundlephobia metrics), making it suitable for both lightweight client-side analytics and server-side request processing.

Single-module architecture: core parsing logic appears centralized (likely in src/) with build outputs in dist/ including precompiled icon assets (dist/icons/color/browser/*), GitHub Actions workflows in .github/workflows/ driving CI/build/test/publish pipelines, and comprehensive documentation templates in .github/ISSUE_TEMPLATE/ and .github/PULL_REQUEST_TEMPLATE/. The 359K+ lines of JavaScript suggest the parser rules and test vectors are inline or in data files, not split across multiple packages.

👥Who it's for

Web developers and DevOps engineers building analytics pipelines, ad-tech platforms, content delivery systems, or server-side request routers who need reliable, production-tested User-Agent parsing without writing regex patterns. The 2.5M+ weekly NPM downloads and presence on jsDelivr/CDNjs indicate it's used by developers at all scales, from startups building audience insights to enterprises managing multi-browser compatibility.

🌱Maturity & risk

Highly mature and production-ready. The project has 24.5K GitHub stars, 100+ contributors listed, extensive CI/CD workflows (analysis-codeql, ci-build-test, publish-npm/docker), OpenSSF scorecard validation, and formal versioning (currently v2.0.9 with v1.x still documented). However, the language composition shows 99.9% JavaScript with only 1,958 TypeScript lines, indicating recent efforts to add type safety but incomplete migration.

Standard open source risks apply.

Active areas of work

The project is actively developing v2.x with TypeScript migration underway (1,958 TS lines added but core still JS). Recent infrastructure updates include AI-assisted spam detection workflow (ai-assistant-spam-detection.yml), OpenSSF scorecard integration, and Docker publishing automation. The README mentions 'Before upgrading from v0.7 / v1.0, please read CHANGELOG' and documents both v1.x (docs.uaparser.dev/v1) and v2.x (docs.uaparser.dev) simultaneously, indicating a transition phase.

🚀Get running

Clone the repository and install dependencies:

git clone https://github.com/faisalman/ua-parser-js.git
cd ua-parser-js
npm install

Then verify with npm test to run the test suite defined in package.json scripts. For browser usage, include the dist/ output via <script> tag or npm import; for Node.js, require('ua-parser-js') or ESM import.

Daily commands:

  • Run tests: npm test
  • Build dist: npm run build (inferred from the ci-build-test.yml CI workflow)
  • Local development: edit src/ files, then run build/test cycles
  • Docker option: docker build -t ua-parser-js . && docker run ua-parser-js npm test (based on Dockerfile presence)

🗺️Map of the codebase

  • src/ua-parser.js — Core parser logic that detects browser, OS, device, engine, and CPU from user-agent strings; entry point for all parsing operations.
  • src/regexes.js — Comprehensive regex pattern library for matching user-agent tokens; backbone of all detection accuracy and must be maintained for new browsers/devices.
  • package.json — Project metadata, version, dependencies, and npm scripts; defines build targets, test commands, and distribution artifacts.
  • Dockerfile — Container configuration for running UAParser in isolated environments; critical for reproducible builds and CI/CD pipelines.
  • .github/workflows/ci-build-test.yml — GitHub Actions pipeline that validates all commits; enforces test coverage and build success before merging.
  • SECURITY.md — Security policy and responsible disclosure procedures; essential for contributors reporting vulnerabilities.
  • CONTRIBUTING.md — Contribution guidelines covering code standards, pull request process, and development environment setup.

🛠️How to make changes

Add support for a new browser

  1. Add a regex pattern to detect the browser's user-agent string in the browserMap object (src/regexes.js)
  2. Map the regex match to a browser name and version extraction logic within the same browserMap entry (src/regexes.js)
  3. Write a test case in the test suite to verify the browser is correctly detected with sample user-agent strings (test/ua-parser.test.js)
  4. Add an SVG icon for the browser in the dist/icons directory following the naming convention (dist/icons/color/browser/{browserName}.svg)
  5. Update the CHANGELOG.md with the new browser support as a feature addition (CHANGELOG.md)
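For orientation, the regex-table pattern these steps assume can be sketched in isolation. The table and function below are simplified, illustrative stand-ins, not the library's actual browserMap structure in src/regexes.js:

```javascript
// Simplified, hypothetical sketch of the regex-table pattern used by
// user-agent parsers. BROWSER_TABLE and detectBrowser are illustrative
// names, not the library's real identifiers.
const BROWSER_TABLE = [
  // [pattern with capture groups, field names the groups map to]
  [/(firefox)\/([\d.]+)/i, ['name', 'version']],
  [/(chrome)\/([\d.]+)/i, ['name', 'version']],
];

function detectBrowser(ua) {
  for (const [pattern, fields] of BROWSER_TABLE) {
    const m = pattern.exec(ua);
    if (m) {
      // Map capture group i+1 onto field i: m[1] -> name, m[2] -> version
      const out = {};
      fields.forEach((field, i) => { out[field] = m[i + 1]; });
      return out;
    }
  }
  return { name: undefined, version: undefined };
}
```

The first matching entry wins, which is why pattern order matters when adding a new browser whose UA string also contains another browser's token.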

Add detection for a new operating system or device type

  1. Create a new regex pattern in osMap or deviceMap for the OS/device signature (src/regexes.js)
  2. Define extraction logic to populate name, version, and other relevant properties (src/regexes.js)
  3. Add test cases with real user-agent strings from devices running that OS/type (test/ua-parser.test.js)
  4. Create corresponding icon assets if the type is visually represented (dist/icons/color/os/{osName}.svg)

Update the parser to handle new Client Hints headers

  1. Extend the client-hints parsing logic to map new Sec-CH-UA-* headers (src/client-hints.js)
  2. Add corresponding extraction methods for version, platform, and model fields (src/client-hints.js)
  3. Write integration tests comparing Client Hints results against user-agent fallback (test/client-hints.test.js)
  4. Document the new headers and their browser support in README.md (README.md)
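As a rough illustration of what step 1 involves: a Sec-CH-UA value is a structured-field list of brand/version pairs. This standalone sketch is not the library's code; it also uses a naive comma split, which would break on a (hypothetical) brand name containing a comma:

```javascript
// Standalone sketch of parsing a Sec-CH-UA header value such as:
//   "Chromium";v="120", "Not A Brand";v="99"
// Not the library's implementation; naive comma split for illustration only.
function parseSecChUa(header) {
  const brands = [];
  for (const part of header.split(',')) {
    const m = /"([^"]*)";\s*v="([^"]*)"/.exec(part.trim());
    if (m) brands.push({ brand: m[1], version: m[2] });
  }
  return brands;
}
```

A production parser would follow the structured-field grammar rather than string splitting; the sketch only shows the brand/version shape the extraction logic has to produce.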

Release a new version to npm

  1. Update the version number in package.json following semantic versioning (package.json)
  2. Document all changes and breaking changes in CHANGELOG.md (CHANGELOG.md)
  3. Create a git tag matching the version (e.g., v2.0.10) and push to trigger the publish-npm workflow (.github/workflows/publish-npm.yml)
  4. Verify the package appears on npm registry and the distribution files are correct (dist)

🔧Why these technologies

  • JavaScript (ES5+ with TypeScript definitions) — Enables execution in both browser and Node.js environments without transpilation overhead; pure regex-based approach avoids external dependencies.
  • Regex pattern matching — User-agent strings are unstructured text; regex is the standard approach for tokenization and feature extraction without requiring ML models.
  • Client Hints API (Sec-CH-UA headers) — Modern privacy-respecting alternative to user-agent strings; provides structured browser/OS metadata in a standardized format.
  • GitHub Actions CI/CD — Native integration with repository; enables automated testing, building, and publishing without external CI platforms.
  • Docker containerization — Provides reproducible Node.js environment for server-side parsing; simplifies deployment in containerized infrastructure.

⚖️Trade-offs already made

  • Regex-based detection instead of machine learning

    • Why: Reduces dependencies, ensures predictable performance, and allows offline operation; easier to debug and maintain.
    • Consequence: Must maintain large regex pattern library manually as new user-agents emerge; accuracy depends on pattern completeness.
  • Single codebase for browser and Node.js execution

    • Why: Reduces duplication and maintenance burden; allows code sharing between environments.
    • Consequence: Must carefully avoid environment-specific APIs; distribution size includes both browser and Node.js compatibility code.
  • No external runtime dependencies

    • Why: Minimizes security surface, keeps library lightweight, and avoids version conflicts.
    • Consequence: All detection logic must be implemented in pure JavaScript; cannot leverage specialized parsing libraries.
  • Distribute pre-built artifacts instead of source-only

    • Why: Faster CDN delivery and minified bundle size; easier for users who cannot run build tools.
    • Consequence: Build process must be well-documented; contributors must regenerate dist/ after code changes.

🚫Non-goals (don't propose these)

  • Real-time user-agent string database synchronization (patterns are static per release)
  • Handling authentication or permission-based user agent masking
  • Cross-origin user-agent collection or analytics aggregation
  • Predictive detection of future browser versions
  • Mobile app native code generation or SDKs (JavaScript/web-only)

🪤Traps & gotchas

  1. User-Agent strings are volatile — every major browser release may introduce new format variations, requiring regex rule updates. Test vectors must be kept current or detection will silently fail on new OS/browser combos.
  2. Client Hints adoption is asymmetric across browsers; the library must parse both legacy UA strings and modern Client Hints headers simultaneously, and fallback behavior differs.
  3. The 'bot' and 'AI crawler' detection features depend on rule patterns that can be easily spoofed; don't use this for security-critical decisions.
  4. Minification/bundling: pre-compiled dist/ files can diverge from src/ if the build step is skipped; always run npm run build after pulling changes before publishing.

🏗️Architecture

💡Concepts to learn

  • User-Agent String Parsing — The core problem this repo solves; understanding how browsers encode OS/device/engine metadata into a single HTTP header string is essential to grok why regex patterns are brittle and why Client Hints are the future
  • Client Hints — Modern replacement for User-Agent strings with explicit HTTP headers (Sec-CH-UA, Sec-CH-UA-Platform, etc.); v2.x of this library adds Client Hints support, so understanding the distinction and parsing differences is critical
  • Regular Expression Pattern Matching — The entire detection engine relies on regex patterns to extract browser/OS/device info from UA strings; performance and accuracy tuning requires understanding regex complexity and backtracking
  • Bot/Crawler Detection Spoofing — The library detects bots and AI crawlers via pattern matching, but these can be trivially spoofed by modifying User-Agent headers; understanding the limitations is critical for security-conscious applications
  • Browser Engine Fingerprinting — Identifying Webkit vs Gecko vs Chromium engine from UA strings enables feature detection and compatibility decisions; engine detection is a first-order concern in this library's output
  • Semantic Versioning in Breaking Changes — The CHANGELOG documents v0.7→v1.0→v2.x with explicit breaking changes; understanding the deprecation path is essential for production systems that must support multiple versions during migration
  • Cross-Platform JavaScript (Browser + Node.js) — This library runs in both client and server environments with different APIs (window.navigator in browser vs process.argv in Node); architectural decisions reflect dual-platform constraints
  • browserslist/browserslist — Complementary tool that uses User-Agent data to determine which browsers to target for transpilation; commonly paired with ua-parser-js in build pipelines
  • useragent/useragent — Direct competitor in Node.js/JavaScript space for User-Agent parsing; same problem domain but different implementation and API
  • bluesmoon/ua-parser — Original ua-parser project that inspired ua-parser-js; Java/C++ reference implementation with multi-language ports, shows the parsing algorithm lineage
  • matomo-org/device-detector — PHP alternative for User-Agent detection; developers choosing between ua-parser-js and server-side detection often evaluate this for backend fallback
  • Modernizr/Modernizr — Feature detection library often used alongside ua-parser-js for determining capability support; complementary pattern for browser compatibility checks
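The dual-platform constraint mentioned above usually shows up as an environment guard. A generic sketch of the pattern (illustrative only, not ua-parser-js source):

```javascript
// Generic environment-guard sketch for libraries that run in both
// browser and Node.js. Illustrative only; not ua-parser-js source code.
const isBrowser =
  typeof window !== 'undefined' && typeof window.navigator !== 'undefined';

// In a browser the UA string comes from navigator; on a server it would
// instead be read from an incoming request's User-Agent header.
const defaultUA = isBrowser ? window.navigator.userAgent : '';
```

The `typeof` checks are the key idiom: they avoid ReferenceErrors in Node.js, where `window` is never defined.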

🪄PR ideas

To work on one of these in Claude Code or Cursor, paste: Implement the "<title>" PR idea from CLAUDE.md, working through the checklist as the task list.

Add comprehensive unit tests for browser icon asset coverage validation

The repo has an extensive dist/icons/color/browser directory with 50+ browser icons (SVG and PNG formats). There's currently no validation test to ensure all detected browsers have corresponding icon assets, or to catch missing/orphaned icon files. This would prevent broken icon references in production and catch icon file deletions during refactoring.

  • [ ] Create test file src/tests/icons.test.ts (or equivalent)
  • [ ] Parse all browser names from the UA detection regex/data files
  • [ ] Validate each detected browser has both .svg and .png files in dist/icons/color/browser/
  • [ ] Add test to detect orphaned icon files with no corresponding browser detection
  • [ ] Integrate test into ci-build-test.yml GitHub Action workflow
  • [ ] Document icon addition requirements in CONTRIBUTING.md
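The core of this coverage check is a two-way set difference. A standalone sketch with illustrative stand-in data (real values would come from parsing browser names out of src/regexes.js and listing dist/icons/color/browser/):

```javascript
// Standalone sketch of the two-way coverage check. The two arrays are
// illustrative stand-ins, not real repo data.
const detectedBrowsers = ['chrome', 'firefox', 'safari', 'brave'];
const iconFiles = ['chrome.svg', 'firefox.svg', 'safari.svg', 'edge.svg'];

// Normalize filenames to bare browser names
const iconNames = new Set(iconFiles.map((f) => f.replace(/\.(svg|png)$/, '')));

// Browsers the parser knows about but that have no icon asset
const missingIcons = detectedBrowsers.filter((b) => !iconNames.has(b));
// Icon files with no corresponding browser detection (orphaned assets)
const orphanedIcons = [...iconNames].filter((n) => !detectedBrowsers.includes(n));
```

In a real test, both `missingIcons` and `orphanedIcons` should be asserted empty so CI fails on either kind of drift.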

Add GitHub Action workflow for automated User-Agent test data updates

The repo detects User-Agent strings for hundreds of browser/OS/device combinations, but there's no automated workflow to validate against real-world UA changes or new browser releases. A scheduled workflow could fetch updated UA strings from public datasets (like ua-dataset or browser vendor updates) and create PRs when new patterns are detected, reducing manual maintenance burden.

  • [ ] Create .github/workflows/update-ua-data.yml with weekly schedule trigger
  • [ ] Implement script in src/scripts/fetch-ua-updates.ts to fetch latest UA patterns from ua-dataset or similar source
  • [ ] Add validation logic to compare new patterns against existing detection rules
  • [ ] Auto-create PR with diff if new critical patterns detected (e.g., new major Chrome/Firefox versions)
  • [ ] Document process in CONTRIBUTING.md under 'Maintaining UA Data'
  • [ ] Add optional manual trigger via workflow_dispatch

Add Client Hints detection integration tests with multiple browser environments

The repo advertises Client Hints support alongside traditional User-Agent detection, but the current test coverage likely focuses on UA strings. There's no dedicated test suite validating Client Hints parsing across different browser/Node.js versions that support it. This is critical since Client Hints have different availability profiles across browsers.

  • [ ] Create src/tests/client-hints.integration.test.ts for Client Hints parsing
  • [ ] Add test vectors for Chrome, Edge, Safari, Firefox Client Hints variations
  • [ ] Create test suite for Node.js Client Hints handling (via sec-ch-* headers)
  • [ ] Add test cases for Client Hints + User-Agent fallback scenarios
  • [ ] Document Client Hints test methodology in CONTRIBUTING.md
  • [ ] Consider extending ci-build-test.yml to run against multiple Node versions with different Client Hints support levels

🌿Good first issues

  • Add TypeScript definitions or convert remaining core .js files to .ts to complete the gradual migration visible in package composition (currently 99.9% JS with only 1,958 TS lines added). Start with src/index.js or the main parser entry point.
  • Audit dist/icons/color/browser/ SVG icons against current browser branding guidelines (Chrome, Firefox, Safari, Edge, etc.) and create a PR with updated assets; this is low-risk visual maintenance with clear acceptance criteria.
  • Expand the test coverage for newly detected AI crawlers and bot detection patterns (mentioned in package.json keywords but test presence unclear from file list); add test vectors in the test suite for GPT-Bot, Claudebot, Gemini-like crawlers.
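For the third item, a crawler test vector might look like the sketch below. The regex and vectors are illustrative guesses, not the library's actual detection rules; real crawler names and UA formats should be taken from the library's extension lists:

```javascript
// Illustrative test-vector shape for crawler detection. The pattern and
// vectors are guesses for illustration, not the library's real rules.
const CRAWLER_RE = /(GPTBot|ClaudeBot|Google-Extended|CCBot)/i;

const vectors = [
  { ua: 'Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)', isCrawler: true },
  { ua: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120.0.0.0 Safari/537.36', isCrawler: false },
];

for (const v of vectors) {
  if (CRAWLER_RE.test(v.ua) !== v.isCrawler) {
    throw new Error('unexpected classification for: ' + v.ua);
  }
}
```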

Top contributors


📝Recent commits

  • 18d39b5 — CI: Add GitHub Actions workflow to publish to Docker Hub (faisalman)
  • 58b5a0c — Build: Add Dockerfile for container image build (faisalman)
  • 965c20d — CI: Update fuzz test (faisalman)
  • ad97fea — Build: Set sideEffects=false in package.json for tree shaking (#781) (denisx)
  • 312598f — CI: Add AI and spam detection to pull request workflow (faisalman)
  • 8f9e4dc — Test: Fix relative path and update done() callback in CLI test spec (faisalman)
  • eb809ec — Chore(license): Add THIRD_PARTY_NOTICES.md for third-party assets (faisalman)
  • 15fee5c — Security: Update Actions to pin SHA dependency, rename files, & remove unused workflow (faisalman)
  • 6b1978b — Chore: Update instructions reminding contributors to update enums when adding new detection rules (faisalman)
  • 4793b92 — Feat(extensions): Add new bots: Amzn-SearchBot, Amzn-User (faisalman)

🔒Security observations

The UAParser.js project demonstrates a generally solid security posture with established security policies, CI/CD security analysis, and vulnerability reporting mechanisms. However, there are several areas for improvement: (1) Docker container security could be enhanced by running as a non-root user and adding health checks, (2) the security policy documentation could provide more guidance for library consumers regarding input validation and ReDoS prevention, and (3) container image scanning should be integrated into the deployment pipeline. No critical vulnerabilities were detected in the visible configuration files, and the project maintains supported versions with active security patches. The main risks are operational/configuration-based rather than code-level vulnerabilities.

  • Medium · Multi-stage Docker Build Without User Privilege De-escalation — Dockerfile. The Dockerfile uses the official node:lts-alpine image but does not create a non-root user or drop privileges. The application runs as root, which violates the principle of least privilege and increases the impact of potential code execution vulnerabilities. Fix: Add a non-root user and switch to it before running the application. Example: 'RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001' and 'USER nodejs' before the ENTRYPOINT.
  • Medium · Missing Health Check in Docker Configuration — Dockerfile. The Dockerfile does not include a HEALTHCHECK instruction, making it difficult to monitor container health in orchestrated environments (Kubernetes, Docker Swarm, etc.). This can lead to deployment of unhealthy instances. Fix: Add a HEALTHCHECK instruction to verify the CLI or service is functioning correctly. Example: 'HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 CMD node ./script/cli.js --version || exit 1'
  • Low · Incomplete Security Policy Documentation — SECURITY.md. The SECURITY.md file lacks specific guidance on security best practices for library usage, such as validating user-agent input before processing or handling of malicious payloads. As a user-agent parser that processes untrusted input, additional security guidance would be beneficial. Fix: Enhance SECURITY.md with security recommendations for library consumers, including input validation, handling of edge cases, and protection against regex denial-of-service (ReDoS) attacks if regex patterns are used for parsing.
  • Low · Missing Security Headers and Container Scanning Configuration — .github/workflows/. The Docker image build does not appear to include container scanning or vulnerability assessment tools in the CI/CD pipeline (based on visible workflow files). While CodeQL and dependency analysis are present, additional container image scanning would improve security. Fix: Integrate container image scanning tools (e.g., Trivy, Grype, or Docker Scout) into the publish-docker.yml workflow to detect vulnerabilities in the final Docker image before publishing to a registry.
  • Low · No Explicit npm Audit Configuration — package.json. The package.json does not show explicit security audit configurations or remediation settings. While npm audit is standard, lack of documented configuration may lead to inconsistent security practices across development environments. Fix: Add npm audit configurations to package.json (e.g., 'npm-audit-resolver' or '.npmrc' settings) and document the security update policy in CONTRIBUTING.md.

LLM-derived; treat as a starting point, not a security audit.


Generated by RepoPilot. Verdict based on maintenance signals — see the live page for receipts. Re-run on a new commit to refresh.
